Science.gov

Sample records for algorithm correctly classified

  1. Self-correcting 100-font classifier

    NASA Astrophysics Data System (ADS)

    Baird, Henry S.; Nagy, George

    1994-03-01

    We have developed a practical scheme to take advantage of local typeface homogeneity to improve the accuracy of a character classifier. Given a polyfont classifier which is capable of recognizing any of 100 typefaces moderately well, our method allows it to specialize itself automatically to the single -- but otherwise unknown -- typeface it is reading. Essentially, the classifier retrains itself after examining some of the images, guided at first by the preset classification boundaries of the given classifier, and later by the behavior of the retrained classifier. Experimental trials on 6.4 M pseudo-randomly distorted images show that the method improves on 95 of the 100 typefaces. It reduces the error rate by a factor of 2.5, averaged over 100 typefaces, when applied to an alphabet of 80 ASCII characters printed at ten point and digitized at 300 pixels/inch. This self-correcting method complements, and does not hinder, other methods for improving OCR accuracy, such as linguistic contextual analysis.
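    As a rough illustration of the retraining loop described above, the sketch below re-fits a generic scikit-learn classifier on its own high-confidence predictions for a single page; the feature vectors, confidence threshold, and GaussianNB stand-in are assumptions, not the authors' implementation.

```python
# Minimal self-training sketch of the idea described above: a classifier
# relabels one page's character images with its own confident predictions
# and retrains on them. Threshold and the GaussianNB stand-in are assumed.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def self_correct(base_clf, X_page, n_rounds=3, confidence=0.9):
    """Specialize a fitted classifier to the single (unknown) typeface on a page."""
    clf = base_clf
    for _ in range(n_rounds):
        proba = clf.predict_proba(X_page)            # guided by current boundaries
        labels = clf.classes_[proba.argmax(axis=1)]
        trusted = proba.max(axis=1) >= confidence
        if trusted.sum() < 2:                        # not enough trusted samples
            break
        retrained = GaussianNB()
        retrained.fit(X_page[trusted], labels[trusted])
        clf = retrained                              # later rounds follow the retrained model
    return clf
```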

  2. Temperature Corrected Bootstrap Algorithm

    NASA Technical Reports Server (NTRS)

    Comiso, Joey C.; Zwally, H. Jay

    1997-01-01

    A temperature corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm but using brightness temperatures from the 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate the surface ice temperature, which in turn is used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in areas where the ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
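    A toy numpy sketch of the conversion chain described above is given below; the emissivity constants, the simple mixing formula, and the omission of any weather filtering are illustrative assumptions, not the operational Bootstrap code.

```python
# Toy sketch of the conversion chain described above: effective 6 GHz
# emissivity -> surface temperature -> 18/37 GHz emissivities.
# The constants are assumed values, not the operational coefficients.
import numpy as np

E_ICE_6GHZ, E_WATER_6GHZ = 0.92, 0.55        # assumed emissivities at 6 GHz

def brightness_to_emissivity(tb6, tb18, tb37, ice_conc):
    """ice_conc: first-guess concentration from the standard Bootstrap step."""
    e_eff = ice_conc * E_ICE_6GHZ + (1.0 - ice_conc) * E_WATER_6GHZ  # mixing formula
    t_surf = tb6 / e_eff                     # effective surface temperature (K)
    e18 = tb18 / t_surf                      # 18 GHz emissivity
    e37 = tb37 / t_surf                      # 37 GHz emissivity
    return e18, e37                          # fed to the Bootstrap-style retrieval
```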

  3. Learning algorithms for stack filter classifiers

    SciTech Connect

    Porter, Reid B; Hush, Don; Zimmer, Beate G

    2009-01-01

    Stack Filters define a large class of increasing filters that are used widely in image and signal processing. The motivations for using an increasing filter instead of an unconstrained filter have been described as: (1) fast and efficient implementation, (2) the relationship to mathematical morphology and (3) more precise estimation with finite sample data. This last motivation is related to methods developed in machine learning, and the relationship was explored in an earlier paper. In this paper we investigate this relationship by applying Stack Filters directly to classification problems. This provides a new perspective on how monotonicity constraints can help control estimation and approximation errors, and also suggests several new learning algorithms for Boolean function classifiers when they are applied to real-valued inputs.

  4. Error minimizing algorithms for nearest neighbor classifiers

    SciTech Connect

    Porter, Reid B; Hush, Don; Zimmer, G. Beate

    2011-01-03

    Stack Filters define a large class of discrete nonlinear filters first introduced in image and signal processing for noise removal. In recent years we have suggested their application to classification problems, and investigated their relationship to other types of discrete classifiers such as Decision Trees. In this paper we focus on a continuous domain version of Stack Filter Classifiers which we call Ordered Hypothesis Machines (OHM), and investigate their relationship to Nearest Neighbor classifiers. We show that OHM classifiers provide a novel framework in which to train Nearest Neighbor type classifiers by minimizing empirical error based loss functions. We use the framework to investigate a new cost sensitive loss function that allows us to train a Nearest Neighbor type classifier for low false alarm rate applications. We report results on both synthetic data and real-world image data.

  5. Classifying scaled and rotated textures using a region-matched algorithm

    NASA Astrophysics Data System (ADS)

    Yao, Chih-Chia; Chen, Yu-Tin

    2012-07-01

    A novel method to correct texture variations resulting from scale magnification, narrowing caused by cropping into the original size, or spatial rotation is discussed. The variations usually occur in images captured by a camera using different focal lengths. A representative region-matched algorithm is developed to improve texture classification after magnification, narrowing, and spatial rotation. By using a minimum ellipse, the representative region-matched algorithm encloses a specific region extracted by the J-image segmentation algorithm. After translating the coordinates, the equation of an ellipse in the rotated texture can be formulated as that of an ellipse in the original texture. The rotation-invariant property of the ellipse provides an efficient method to identify the rotated texture. Additionally, the scale-variant representative region can be classified by adopting scale-invariant parameters. Moreover, a hybrid texture filter is developed. In the hybrid texture filter, the scheme of texture feature extraction includes the Gabor wavelet and the representative region-matched algorithm. Support vector machines are introduced as the classifier. The proposed hybrid texture filter performs excellently in classifying both stochastic and structural textures. Furthermore, experimental results demonstrate that the proposed algorithm outperforms conventional design algorithms.

  6. Classifying Volcanic Activity Using an Empirical Decision Making Algorithm

    NASA Astrophysics Data System (ADS)

    Junek, W. N.; Jones, W. L.; Woods, M. T.

    2012-12-01

    Detection and classification of developing volcanic activity is vital to eruption forecasting. Timely information regarding an impending eruption would aid civil authorities in determining the proper response to a developing crisis. In this presentation, volcanic activity is characterized using an event tree classifier and a suite of empirical statistical models derived through logistic regression. Forecasts are reported in terms of the United States Geological Survey (USGS) volcano alert level system. The algorithm employs multidisciplinary data (e.g., seismic, GPS, InSAR) acquired by various volcano monitoring systems and source modeling information to forecast the likelihood that an eruption, with a volcanic explosivity index (VEI) > 1, will occur within a quantitatively constrained area. Logistic models are constructed from a sparse and geographically diverse dataset assembled from a collection of historic volcanic unrest episodes. Bootstrapping techniques are applied to the training data to allow for the estimation of robust logistic model coefficients. Cross validation produced a series of receiver operating characteristic (ROC) curves with areas ranging between 0.78 and 0.81, which indicates that the algorithm has good predictive capabilities. The ROC curves also allowed for the determination of a false positive rate and optimum detection for each stage of the algorithm. Forecasts for historic volcanic unrest episodes in North America and Iceland were computed and are consistent with the actual outcomes of the events.

  7. Karyometry: Correction algorithm for differences in staining

    PubMed Central

    Bartels, Peter H.; Bartels, Hubert G.; Alberts, David S.

    2014-01-01

    Objectives An algorithm is described which allows the correction of differences in staining of histopathologic sections while preserving chromatin texture. Methods In order to preserve the texture of the nuclear chromatin in the corrected digital imagery, it is necessary to correct the images pixel by pixel. This is accomplished by mapping each pixel's value onto the cumulative frequency distribution of the data set to which the image belongs, transferring to the cumulative frequency distribution of the data set serving as standard, and projecting the intersection down onto the pixel optical density scale to obtain the corrected value. Results For the majority of features used in karyometry, feature values in the corrected imagery agree with those in standard imagery to within less than one percent to a few percent. For some higher order statistical features involving multiple pixels, sensitivity to a shift in the cumulative frequency distribution may exist, and a secondary small correction by a factor may be required. Conclusions The correction algorithm allows the elimination of the effects of small staining differences on karyometric analysis. PMID:19402382
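    The pixel-wise mapping through the two cumulative frequency distributions amounts to histogram matching; a compact numpy sketch follows, in which the binning and the sampling of the two data sets are illustrative assumptions.

```python
# Compact sketch of the pixel-wise correction described above: each pixel value
# is mapped through its own data set's cumulative distribution onto that of the
# standard data set. Binning and inputs are illustrative assumptions.
import numpy as np

def match_to_standard(image, set_pixels, standard_pixels, n_bins=256):
    """image: 2-D array to correct; set_pixels / standard_pixels: 1-D samples
    from the image's own data set and from the standard data set."""
    bins = np.linspace(set_pixels.min(), set_pixels.max(), n_bins)
    set_cdf = np.searchsorted(np.sort(set_pixels), bins) / set_pixels.size
    std_sorted = np.sort(standard_pixels)
    std_cdf = np.arange(1, std_sorted.size + 1) / std_sorted.size
    # pixel value -> cumulative frequency in its own set -> value in the standard set
    cum = np.interp(image.ravel(), bins, set_cdf)
    corrected = np.interp(cum, std_cdf, std_sorted)
    return corrected.reshape(image.shape)
```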

  8. Genetic algorithms and classifier systems: Foundations and future directions

    SciTech Connect

    Holland, J.H.

    1987-01-01

    Theoretical questions about classifier systems, with rare exceptions, apply equally to other adaptive nonlinear networks (ANNs) such as the connectionist models of cognitive psychology, the immune system, economic systems, ecologies, and genetic systems. This paper discusses pervasive properties of ANNs and the kinds of mathematics relevant to questions about these properties. It discusses relevant functional extensions of the basic classifier system and extensions of the extant mathematical theory. An appendix briefly reviews some of the key theorems about classifier systems. 6 refs.

  9. Spectral areas and ratios classifier algorithm for pancreatic tissue classification using optical spectroscopy

    NASA Astrophysics Data System (ADS)

    Chandra, Malavika; Scheiman, James; Simeone, Diane; McKenna, Barbara; Purdy, Julianne; Mycek, Mary-Ann

    2010-01-01

    Pancreatic adenocarcinoma is one of the leading causes of cancer death, in part because of the inability of current diagnostic methods to reliably detect early-stage disease. We present the first assessment of the diagnostic accuracy of algorithms developed for pancreatic tissue classification using data from fiber optic probe-based bimodal optical spectroscopy, a real-time approach that would be compatible with minimally invasive diagnostic procedures for early cancer detection in the pancreas. A total of 96 fluorescence and 96 reflectance spectra are considered from 50 freshly excised tissue sites, including human pancreatic adenocarcinoma, chronic pancreatitis (inflammation), and normal tissues, from nine patients. Classification algorithms using linear discriminant analysis are developed to distinguish among tissues, and leave-one-out cross-validation is employed to assess the classifiers' performance. The spectral areas and ratios classifier (SpARC) algorithm employs a combination of reflectance and fluorescence data and has the best performance, with sensitivity, specificity, negative predictive value, and positive predictive value for correctly identifying adenocarcinoma being 85, 89, 92, and 80%, respectively.
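    The classifier and validation scheme named above reduce to a few lines with scikit-learn; the feature matrix of spectral areas and ratios and the label coding are hypothetical.

```python
# Sketch of the classification/validation scheme named above: linear
# discriminant analysis with leave-one-out cross-validation. X holds the
# spectral area/ratio features and y the tissue labels (both hypothetical).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def loo_performance(X, y):
    y_pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
    sensitivity = np.mean(y_pred[y == 1] == 1)   # adenocarcinoma coded as 1
    specificity = np.mean(y_pred[y == 0] == 0)   # non-adenocarcinoma coded as 0
    return y_pred, sensitivity, specificity
```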

  10. An Evaluation of Information Criteria Use for Correct Cross-Classified Random Effects Model Selection

    ERIC Educational Resources Information Center

    Beretvas, S. Natasha; Murphy, Daniel L.

    2013-01-01

    The authors assessed correct model identification rates of Akaike's information criterion (AIC), corrected criterion (AICC), consistent AIC (CAIC), Hannan and Quinn's information criterion (HQIC), and the Bayesian information criterion (BIC) for selecting among cross-classified random effects models. Performance of default values for the 5…

  11. TIRS stray light correction: algorithms and performance

    NASA Astrophysics Data System (ADS)

    Gerace, Aaron; Montanaro, Matthew; Beckmann, Tim; Tyrrell, Kaitlin; Cozzo, Alexandra; Carney, Trevor; Ngan, Vicki

    2015-09-01

    The Thermal Infrared Sensor (TIRS) onboard Landsat 8 was tasked with continuing thermal band measurements of the Earth as part of the Landsat program. From first light in early 2013, there were obvious indications that stray light was contaminating the thermal image data collected from the instrument. Traditional calibration techniques did not perform adequately as non-uniform banding was evident in the corrected data and error in absolute estimates of temperature over trusted buoy sites varied seasonally and, in the worst cases, exceeded 9 K. The development of an operational technique to remove the effects of the stray light has become a high priority to enhance the utility of the TIRS data. This paper introduces the current algorithm being tested by Landsat's calibration and validation team to remove stray light from TIRS image data. The integration of the algorithm into the EROS test system is discussed with strategies for operationalizing the method emphasized. Techniques for assessing the methodologies used are presented and potential refinements to the algorithm are suggested. Initial results indicate that the proposed algorithm significantly reduces stray light artifacts in the image data. Specifically, visual and quantitative evidence suggests that the algorithm practically eliminates banding in the image data. Additionally, the seasonal variation in absolute errors is flattened and, in the worst case, errors of over 9 K are reduced to within 2 K. Future work focuses on refining the algorithm based on these findings and applying traditional calibration techniques to enhance the final image product.

  12. An iterative subaperture position correction algorithm

    NASA Astrophysics Data System (ADS)

    Lo, Weng-Hou; Lin, Po-Chih; Chen, Yi-Chun

    2015-08-01

    Subaperture stitching interferometry is a technique suitable for testing high numerical-aperture optics, large-diameter spherical lenses and aspheric optics. In the stitching process, each subaperture has to be placed at its correct position in a global coordinate system, and the positioning precision affects the accuracy of the stitching result. However, mechanical limitations in the alignment process as well as vibrations during the measurement induce inevitable subaperture position uncertainties. In our previous study, a rotational scanning subaperture stitching interferometer was constructed. This paper provides an iterative algorithm to correct the subaperture position without altering the interferometer configuration. Each subaperture is first placed at its geometric position estimated according to the F number of the reference lens, the measurement zenithal angle and the number of pixels along the width of the subaperture. By using the concept of differentiation, a shift compensator along the radial direction of the global coordinate system is added into the stitching algorithm. The algorithm includes two kinds of compensators: one for the geometric null with four compensators of piston, two directional tilts and defocus, and the other for the position correction with the shift compensator. These compensators are computed iteratively to minimize the phase differences in the overlapped regions of subapertures in a least-squares sense. The simulation results demonstrate that the proposed method achieves a position accuracy of 0.001 pixels for both the single-ring and multiple-ring configurations. Experimental verifications with single-ring and multiple-ring data also show the effectiveness of the algorithm.

  13. Atmospheric Correction Algorithm for Hyperspectral Imagery

    SciTech Connect

    R. J. Pollina

    1999-09-01

    In December 1997, the US Department of Energy (DOE) established a Center of Excellence (Hyperspectral-Multispectral Algorithm Research Center, HyMARC) for promoting the research and development of algorithms to exploit spectral imagery. This center is located at the DOE Remote Sensing Laboratory in Las Vegas, Nevada, and is operated for the DOE by Bechtel Nevada. This paper presents the results to date of a research project begun at the center during 1998 to investigate the correction of hyperspectral data for atmospheric aerosols. Results of a project conducted by the Rochester Institute of Technology to define, implement, and test procedures for absolute calibration and correction of hyperspectral data to absolute units of high spectral resolution imagery will be presented. Hybrid techniques for atmospheric correction using image or spectral scene data coupled through radiative propagation models will be specifically addressed. Results of this effort to analyze HYDICE sensor data will be included. Preliminary results based on studying the performance of standard routines, such as Atmospheric Pre-corrected Differential Absorption and Nonlinear Least Squares Spectral Fit, in retrieving reflectance spectra show overall reflectance retrieval errors of approximately one to two reflectance units in the 0.4- to 2.5-micron-wavelength region (outside of the absorption features). These results are based on HYDICE sensor data collected from the Southern Great Plains Atmospheric Radiation Measurement site during overflights conducted in July of 1997. Results of an upgrade made in the model-based atmospheric correction techniques, which take advantage of updates made to the moderate resolution atmospheric transmittance model (MODTRAN 4.0) software, will also be presented. Data will be shown to demonstrate how the reflectance retrieval in the shorter wavelengths of the blue-green region will be improved because of enhanced modeling of multiple scattering effects.

  14. Algorithmic Error Correction of Impedance Measuring Sensors

    PubMed Central

    Starostenko, Oleg; Alarcon-Aquino, Vicente; Hernandez, Wilmar; Sergiyenko, Oleg; Tyrsa, Vira

    2009-01-01

    This paper describes novel design concepts and some advanced techniques proposed for increasing the accuracy of low cost impedance measuring devices without reduction of operational speed. The proposed structural method for algorithmic error correction and the iterative correction method provide linearization of the transfer functions of the measuring sensor and signal conditioning converter, which contribute the principal additive and relative measurement errors. Some measuring systems have been implemented in order to estimate in practice the performance of the proposed methods. In particular, a measuring system for the analysis of C-V and G-V characteristics has been designed and constructed. It has been tested during technological process control of charge-coupled device (CCD) manufacturing. The obtained results are discussed in order to define a reasonable range of applied methods, their utility, and performance. PMID:22303177

  15. Classifying epilepsy diseases using artificial neural networks and genetic algorithm.

    PubMed

    Koçer, Sabri; Canal, M Rahmi

    2011-08-01

    In this study, FFT analysis is applied to the EEG signals of normal and patient subjects, and the obtained FFT coefficients are used as inputs to an Artificial Neural Network (ANN). The differences shown by non-stationary random signals such as EEG in healthy and epileptic subjects were evaluated and analyzed under computer-supported conditions using artificial neural networks. A Multi-Layer Perceptron (MLP) architecture is used with Levenberg-Marquardt (LM), Quickprop (QP), Delta-bar-delta (DBD), Momentum and Conjugate Gradient (CG) learning algorithms, and the best performance is sought by using genetic algorithms to optimize the weights, learning rates and number of hidden-layer neurons during training. This study shows that the genetic algorithm improves the classification performance of the artificial neural network.

  16. Combining classifiers generated by multi-gene genetic programming for protein fold recognition using genetic algorithm.

    PubMed

    Bardsiri, Mahshid Khatibi; Eftekhari, Mahdi; Mousavi, Reza

    2015-01-01

    In this study the problem of protein fold recognition, which is a classification task, is solved via a hybrid of evolutionary algorithms, namely multi-gene Genetic Programming (GP) and a Genetic Algorithm (GA). Our proposed method consists of two main stages and is performed on three datasets taken from the literature. Each dataset contains different feature groups and classes. In the first stage, multi-gene GP is used to produce binary classifiers based on various feature groups for each class. Then, the different classifiers obtained for each class are combined via weighted voting, with the weights determined through GA. At the end of the first stage, there is a separate binary classifier for each class. In the second stage, the obtained binary classifiers are combined via GA weighting in order to generate the overall classifier. The final classifier is superior to previous works found in the literature in terms of classification accuracy.
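    The weighted-voting combination step can be sketched as below; the weight vector would come from the GA, and the score layout is an assumption for illustration.

```python
# Sketch of the weighted-voting combination described above. Each column of
# `scores` holds one binary classifier's output in [0, 1] for the same class;
# the weights would be supplied by the GA (values here are assumed).
import numpy as np

def weighted_vote(scores, weights, threshold=0.5):
    """scores: (n_samples, n_classifiers); weights: (n_classifiers,)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize the GA weights
    combined = scores @ w                 # weighted average of the votes
    return (combined >= threshold).astype(int)
```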

  17. [An Algorithm for Correcting Fetal Heart Rate Baseline].

    PubMed

    Li, Xiaodong; Lu, Yaosheng

    2015-10-01

    Fetal heart rate (FHR) baseline estimation is of significance for the computerized analysis of fetal heart rate and the assessment of fetal state. In our work, a fetal heart rate baseline correction algorithm was presented to make the existing baseline more accurate and better fitted to the tracings. First, the deviation of the existing FHR baseline was found and corrected; a new baseline was then obtained after smoothing. To assess the performance of the FHR baseline correction algorithm, a new FHR baseline estimation algorithm that combines a baseline estimation algorithm with the baseline correction algorithm was compared with two existing FHR baseline estimation algorithms. The results showed that the new FHR baseline estimation algorithm performed well in both accuracy and efficiency, and also demonstrated the effectiveness of the FHR baseline correction algorithm.

  18. A unified classifier for robust face recognition based on combining multiple subspace algorithms

    NASA Astrophysics Data System (ADS)

    Ijaz Bajwa, Usama; Ahmad Taj, Imtiaz; Waqas Anwar, Muhammad

    2012-10-01

    Face recognition, the fastest growing biometric technology, has expanded manifold in the last few years. Various new algorithms and commercial systems have been proposed and developed. However, none of the proposed or developed algorithms is a complete solution, because an algorithm may work very well on one set of images with, say, illumination changes but may not work properly on another set of image variations such as expression variations. This study is motivated by the fact that no single classifier can claim to show generally better performance against all facial image variations. To overcome this shortcoming and achieve generality, combining several classifiers using various strategies has been studied extensively, also addressing the question of the suitability of any classifier for this task. The study is based on the outcome of a comprehensive comparative analysis conducted on a combination of six subspace extraction algorithms and four distance metrics on three facial databases. The analysis leads to the selection of the most suitable classifiers, which perform better on one task or the other. These classifiers are then combined into an ensemble classifier by two different strategies of weighted sum and re-ranking. The results of the ensemble classifier show that these strategies can be effectively used to construct a single classifier that can successfully handle varying facial image conditions of illumination, aging and facial expressions.

  19. Comparison of Genetic Algorithm, Particle Swarm Optimization and Biogeography-based Optimization for Feature Selection to Classify Clusters of Microcalcifications

    NASA Astrophysics Data System (ADS)

    Khehra, Baljit Singh; Pharwaha, Amar Partap Singh

    2016-06-01

    Ductal carcinoma in situ (DCIS) is one type of breast cancer. Clusters of microcalcifications (MCCs) are symptoms of DCIS that are recognized by mammography. Selection of a robust feature vector is the process of selecting an optimal subset of features from a large number of available features in a given problem domain, after feature extraction and before any classification scheme. Feature selection reduces the feature space, which improves the performance of the classifier and decreases the computational burden imposed by using many features. Selection of an optimal subset of features from a large number of available features in a given problem domain is a difficult search problem. For n features, the total number of possible subsets of features is 2^n. Thus, selection of an optimal subset of features belongs to the category of NP-hard problems. In this paper, an attempt is made to find the optimal subset of MCCs features from all possible subsets of features using a genetic algorithm (GA), particle swarm optimization (PSO) and biogeography-based optimization (BBO). For simulation, a total of 380 benign and malignant MCCs samples have been selected from mammogram images of the DDSM database. A total of 50 features extracted from benign and malignant MCCs samples are used in this study. In these algorithms, the fitness function is the correct classification rate of the classifier. A support vector machine is used as the classifier. From the experimental results, it is observed that the performance of the PSO-based and BBO-based algorithms in selecting an optimal subset of features for classifying MCCs as benign or malignant is better than that of the GA-based algorithm.
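    The fitness evaluation shared by the three wrappers can be sketched as the cross-validated accuracy of an SVM on a candidate feature subset; the data, kernel choice, and fold count are assumptions.

```python
# Sketch of the fitness function used by the GA/PSO/BBO wrappers described
# above: the correct classification rate of an SVM on a candidate feature
# subset, estimated here by 5-fold cross-validation (assumed setup).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y):
    """mask: boolean vector of length n_features selecting a candidate subset."""
    mask = np.asarray(mask, dtype=bool)
    if not mask.any():
        return 0.0                        # empty subsets get the worst fitness
    return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=5).mean()
```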

  20. Design of a high-sensitivity classifier based on a genetic algorithm: application to computer-aided diagnosis

    NASA Astrophysics Data System (ADS)

    Sahiner, Berkman; Chan, Heang-Ping; Petrick, Nicholas; Helvie, Mark A.; Goodsitt, Mitchell M.

    1998-10-01

    A genetic algorithm (GA) based feature selection method was developed for the design of high-sensitivity classifiers, which were tailored to yield high sensitivity with high specificity. The fitness function of the GA was based on the receiver operating characteristic (ROC) partial area index, which is defined as the average specificity above a given sensitivity threshold. The designed GA evolved towards the selection of feature combinations which yielded high specificity in the high-sensitivity region of the ROC curve, regardless of the performance at low sensitivity. This is a desirable quality of a classifier used for breast lesion characterization, since the focus in breast lesion characterization is to diagnose correctly as many benign lesions as possible without missing malignancies. The high-sensitivity classifier, formulated as Fisher's linear discriminant using GA-selected feature variables, was employed to classify 255 biopsy-proven mammographic masses as malignant or benign. The mammograms were digitized at a pixel size of mm, and regions of interest (ROIs) containing the biopsied masses were extracted by an experienced radiologist. A recently developed image transformation technique, referred to as the rubber-band straightening transform, was applied to the ROIs. Texture features extracted from the spatial grey-level dependence and run-length statistics matrices of the transformed ROIs were used to distinguish malignant and benign masses. The classification accuracy of the high-sensitivity classifier was compared with that of linear discriminant analysis with stepwise feature selection. With proper GA training, the ROC partial area of the high-sensitivity classifier above a true-positive fraction of 0.95 was significantly larger than that of
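    The ROC partial area index used as the GA fitness, i.e. the average specificity above a sensitivity threshold, can be approximated as follows; the grid resolution is an assumption.

```python
# Rough sketch of the ROC partial area index used as the GA fitness above:
# the average specificity over sensitivities above a chosen threshold.
import numpy as np
from sklearn.metrics import roc_curve

def partial_area_index(y_true, scores, tpr_min=0.95):
    fpr, tpr, _ = roc_curve(y_true, scores)
    grid = np.linspace(tpr_min, 1.0, 200)          # sensitivity grid (assumed resolution)
    # smallest achievable false-positive rate at each required sensitivity
    fpr_at = np.array([fpr[tpr >= s].min() for s in grid])
    return np.mean(1.0 - fpr_at)                   # mean specificity above the threshold
```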

  1. A microwave radiometer weather-correcting sea ice algorithm

    NASA Technical Reports Server (NTRS)

    Walters, J. M.; Ruf, C.; Swift, C. T.

    1987-01-01

    A new algorithm for estimating the proportions of the multiyear and first-year sea ice types under variable atmospheric and sea surface conditions is presented, which uses all six channels of the SMMR. The algorithm is specifically tuned to derive sea ice parameters while accepting error in the auxiliary parameters of surface temperature, ocean surface wind speed, atmospheric water vapor, and cloud liquid water content. Not only does the algorithm naturally correct for changes in these weather conditions, but it retrieves sea ice parameters to the extent that gross errors in atmospheric conditions propagate only small errors into the sea ice retrievals. A preliminary evaluation indicates that the weather-correcting algorithm provides a better data product than the 'UMass-AES' algorithm, whose quality has been cross checked with independent surface observations. The algorithm performs best when the sea ice concentration is less than 20 percent.

  2. Stochastic Formal Correctness of Numerical Algorithms

    NASA Technical Reports Server (NTRS)

    Daumas, Marc; Lester, David; Martin-Dorel, Erik; Truffert, Annick

    2009-01-01

    We provide a framework to bound the probability that accumulated errors were never above a given threshold in numerical algorithms. Such algorithms are used, for example, in aircraft and nuclear power plants. This report contains simple formulas based on Lévy's and Markov's inequalities, and it presents a formal theory of random variables with a special focus on producing concrete results. We selected four very common applications that fit in our framework and cover the common practices of systems that evolve for a long time. We compute the number of bits that remain continuously significant in the first two applications with a probability of failure around one out of a billion, where worst-case analysis considers that no significant bit remains. We use PVS, as such formal tools force explicit statement of all hypotheses and prevent incorrect uses of theorems.
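    The flavour of bound produced by such a framework can be written in one line with Markov's inequality; the notation E_n for the accumulated error after n steps is ours, not the report's.

```latex
% Markov's inequality applied to the accumulated error E_n after n steps
% (the report combines such bounds with Levy-type inequalities):
\[
  \Pr\bigl(\lvert E_n \rvert \ge \varepsilon\bigr)
  \;\le\; \frac{\mathbb{E}\bigl[\lvert E_n \rvert\bigr]}{\varepsilon},
  \qquad \varepsilon > 0 .
\]
```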

  3. An efficient fitness function in genetic algorithm classifier for Landuse recognition on satellite images.

    PubMed

    Yang, Ming-Der; Yang, Yeh-Fen; Su, Tung-Ching; Huang, Kai-Siang

    2014-01-01

    Genetic algorithm (GA) is designed to search for the optimal solution by weeding out the worse gene strings based on a fitness function. GA has demonstrated effectiveness in solving the problems of unsupervised image classification, one of the optimization problems in a large domain. Many indices or hybrid algorithms have been built as fitness functions in GA classifiers to improve the classification accuracy. This paper proposes a new index, DBFCMI, by integrating two common indices, DBI and FCMI, in a GA classifier to improve the accuracy and robustness of classification. For the purpose of testing and verifying DBFCMI, well-known indices such as DBI, FCMI, and PASI are employed as well for comparison. A SPOT-5 satellite image in a partial watershed of Shihmen reservoir is adopted as the examined material for landuse classification. As a result, DBFCMI achieves higher overall accuracy and robustness than the other indices in unsupervised classification.

  4. A novel algorithm for simplification of complex gene classifiers in cancer.

    PubMed

    Wilson, Raphael A; Teng, Ling; Bachmeyer, Karen M; Bissonnette, Mei Lin Z; Husain, Aliya N; Parham, David M; Triche, Timothy J; Wing, Michele R; Gastier-Foster, Julie M; Barr, Frederic G; Hawkins, Douglas S; Anderson, James R; Skapek, Stephen X; Volchenboum, Samuel L

    2013-09-15

    The clinical application of complex molecular classifiers as diagnostic or prognostic tools has been limited by the time and cost needed to apply them to patients. Using an existing 50-gene expression signature known to separate two molecular subtypes of the pediatric cancer rhabdomyosarcoma, we show that an exhaustive iterative search algorithm can distill this complex classifier down to two or three features with equal discrimination. We validated the two-gene signatures using three separate and distinct datasets, including one that uses degraded RNA extracted from formalin-fixed, paraffin-embedded material. Finally, to show the generalizability of our algorithm, we applied it to a lung cancer dataset to find minimal gene signatures that can distinguish survival. Our approach can easily be generalized and coupled to existing technical platforms to facilitate the discovery of simplified signatures that are ready for routine clinical use.
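    The exhaustive search over small feature subsets can be sketched directly; the LDA stand-in for the discrimination test, the fold count, and the data are assumptions.

```python
# Sketch of the exhaustive search described above: score every gene pair with a
# simple classifier and keep the best-discriminating pairs. The LDA stand-in
# and cross-validation setup are assumptions, not the paper's exact test.
from itertools import combinations
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def best_gene_pairs(X, y, gene_names, top=5):
    results = []
    for i, j in combinations(range(X.shape[1]), 2):
        acc = cross_val_score(LinearDiscriminantAnalysis(), X[:, [i, j]], y, cv=5).mean()
        results.append((acc, gene_names[i], gene_names[j]))
    return sorted(results, reverse=True)[:top]
```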

  5. An Efficient Fitness Function in Genetic Algorithm Classifier for Landuse Recognition on Satellite Images

    PubMed Central

    Yang, Yeh-Fen; Su, Tung-Ching; Huang, Kai-Siang

    2014-01-01

    Genetic algorithm (GA) is designed to search for the optimal solution by weeding out the worse gene strings based on a fitness function. GA has demonstrated effectiveness in solving the problems of unsupervised image classification, one of the optimization problems in a large domain. Many indices or hybrid algorithms have been built as fitness functions in GA classifiers to improve the classification accuracy. This paper proposes a new index, DBFCMI, by integrating two common indices, DBI and FCMI, in a GA classifier to improve the accuracy and robustness of classification. For the purpose of testing and verifying DBFCMI, well-known indices such as DBI, FCMI, and PASI are employed as well for comparison. A SPOT-5 satellite image in a partial watershed of Shihmen reservoir is adopted as the examined material for landuse classification. As a result, DBFCMI achieves higher overall accuracy and robustness than the other indices in unsupervised classification. PMID:24701151

  6. Genetic algorithm for chromaticity correction in diffraction limited storage rings

    NASA Astrophysics Data System (ADS)

    Ehrlichman, M. P.

    2016-04-01

    A multiobjective genetic algorithm is developed for optimizing nonlinearities in diffraction limited storage rings. This algorithm determines sextupole and octupole strengths for chromaticity correction that deliver optimized dynamic aperture and beam lifetime. The algorithm makes use of dominance constraints to breed desirable properties into the early generations. The momentum aperture is optimized indirectly by constraining the chromatic tune footprint and optimizing the off-energy dynamic aperture. The result is an effective and computationally efficient technique for correcting chromaticity in a storage ring while maintaining optimal dynamic aperture and beam lifetime.

  7. Classifying spatially heterogeneous wetland communities using machine learning algorithms and spectral and textural features.

    PubMed

    Szantoi, Zoltan; Escobedo, Francisco J; Abd-Elrahman, Amr; Pearlstine, Leonard; Dewitt, Bon; Smith, Scot

    2015-05-01

    Mapping of wetlands (marsh vs. swamp vs. upland) is a common remote sensing application. Yet, discriminating between similar freshwater communities such as graminoid/sedge from remotely sensed imagery is more difficult. Most of this activity has been performed using medium to low resolution imagery. There are only a few studies using high spatial resolution imagery and machine learning image classification algorithms for mapping heterogeneous wetland plant communities. This study addresses this void by analyzing whether machine learning classifiers such as decision trees (DT) and artificial neural networks (ANN) can accurately classify graminoid/sedge communities using high resolution aerial imagery and image texture data in the Everglades National Park, Florida. In addition to spectral bands, the normalized difference vegetation index, and first- and second-order texture features derived from the near-infrared band were analyzed. Classifier accuracies were assessed using confusion tables and the calculated kappa coefficients of the resulting maps. The results indicated that an ANN (multilayer perceptron based on backpropagation) algorithm produced a statistically significantly higher accuracy (82.04%) than the DT (QUEST) algorithm (80.48%) or the maximum likelihood (80.56%) classifier (α < 0.05). Findings show that using multiple window sizes provided the best results. First-order texture features also provided computational advantages and results that were not significantly different from those using second-order texture features.

  8. Adaptive bad pixel correction algorithm for IRFPA based on PCNN

    NASA Astrophysics Data System (ADS)

    Leng, Hanbing; Zhou, Zuofeng; Cao, Jianzhong; Yi, Bo; Yan, Aqi; Zhang, Jian

    2013-10-01

    Bad pixels and response non-uniformity are the primary obstacles when an IRFPA is used in different thermal imaging systems. The bad pixels of an IRFPA include fixed bad pixels and random bad pixels. The former are caused by material or manufacturing defects and their positions are always fixed; the latter are caused by temperature drift and their positions are always changing. Traditional radiometric-calibration-based bad pixel detection and compensation algorithms are only valid for fixed bad pixels. Scene-based bad pixel correction is the effective way to eliminate these two kinds of bad pixels. Currently, the most used scene-based bad pixel correction algorithm is based on the adaptive median filter (AMF). In this algorithm, bad pixels are regarded as image noise and then replaced by the filtered value. However, missed correction and false correction often happen when the AMF is used to handle complex infrared scenes. To solve this problem, a new adaptive bad pixel correction algorithm based on pulse coupled neural networks (PCNN) is proposed. Potential bad pixels are detected by the PCNN in the first step, then image sequences are used periodically to confirm the real bad pixels and exclude the false ones, and finally the bad pixels are replaced by the filtered result. With real infrared images obtained from a camera, the experimental results show the effectiveness of the proposed algorithm.
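    For reference, the adaptive-median baseline that the proposed method improves on can be sketched as below; the window size and threshold are assumed, and the PCNN detection stage itself is not shown.

```python
# Sketch of the adaptive-median baseline discussed above: flag pixels that
# deviate strongly from their local median and replace them with that median.
# Window size and threshold are assumed; the PCNN stage is not shown.
import numpy as np
from scipy.ndimage import median_filter

def correct_bad_pixels(frame, window=5, k=3.0):
    med = median_filter(frame, size=window)
    resid = frame - med
    bad = np.abs(resid) > k * resid.std()      # crude bad-pixel test
    corrected = frame.copy()
    corrected[bad] = med[bad]                  # replace flagged pixels
    return corrected, bad
```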

  9. Algorithmic scatter correction in dual-energy digital mammography

    SciTech Connect

    Chen, Xi; Mou, Xuanqin; Nishikawa, Robert M.; Lau, Beverly A.; Chan, Suk-tak; Zhang, Lei

    2013-11-15

    Purpose: Small calcifications are often the earliest and the main indicator of breast cancer. Dual-energy digital mammography (DEDM) has been considered as a promising technique to improve the detectability of calcifications since it can be used to suppress the contrast between adipose and glandular tissues of the breast. X-ray scatter leads to erroneous calculations of the DEDM image. Although the pinhole-array interpolation method can estimate scattered radiation, it requires extra exposures to measure the scatter and apply the correction. The purpose of this work is to design an algorithmic method for scatter correction in DEDM without extra exposures. Methods: In this paper, a scatter correction method for DEDM was developed based on the knowledge that scattered radiation has small spatial variation and that the majority of pixels in a mammogram are noncalcification pixels. The scatter fraction was estimated in the DEDM calculation and the measured scatter fraction was used to remove scatter from the image. The scatter correction method was implemented on a commercial full-field digital mammography system with breast tissue equivalent phantom and calcification phantom. The authors also implemented the pinhole-array interpolation scatter correction method on the system. Phantom results for both methods are presented and discussed. The authors compared the background DE calcification signals and the contrast-to-noise ratio (CNR) of calcifications in the three DE calcification images: image without scatter correction, image with scatter correction using the pinhole-array interpolation method, and image with scatter correction using the authors' algorithmic method. Results: The authors' results show that the resultant background DE calcification signal can be reduced. The root-mean-square of background DE calcification signal of 1962 μm with scatter-uncorrected data was reduced to 194 μm after scatter correction using the authors' algorithmic method. The range of

  10. Algorithms for Image Analysis and Combination of Pattern Classifiers with Application to Medical Diagnosis

    NASA Astrophysics Data System (ADS)

    Georgiou, Harris

    2009-10-01

    Medical Informatics and the application of modern signal processing in the assistance of the diagnostic process in medical imaging is one of the more recent and active research areas today. This thesis addresses a variety of issues related to the general problem of medical image analysis, specifically in mammography, and presents a series of algorithms and design approaches for all the intermediate levels of a modern system for computer-aided diagnosis (CAD). The diagnostic problem is analyzed with a systematic approach, first defining the imaging characteristics and features that are relevant to probable pathology in mammograms. Next, these features are quantified and fused into new, integrated radiological systems that exhibit embedded digital signal processing, in order to improve the final result and minimize the radiological dose for the patient. In a higher level, special algorithms are designed for detecting and encoding these clinically interesting imaging features, in order to be used as input to advanced pattern classifiers and machine learning models. Finally, these approaches are extended in multi-classifier models under the scope of Game Theory and optimum collective decision, in order to produce efficient solutions for combining classifiers with minimum computational costs for advanced diagnostic systems. The material covered in this thesis is related to a total of 18 published papers, 6 in scientific journals and 12 in international conferences.

  11. A burst-correcting algorithm for Reed Solomon codes

    NASA Technical Reports Server (NTRS)

    Chen, J.; Owsley, P.

    1990-01-01

    The Bose, Chaudhuri, and Hocquenghem (BCH) codes form a large class of powerful error-correcting cyclic codes. Among the non-binary BCH codes, the most important subclass is the Reed Solomon (RS) codes. Reed Solomon codes have the ability to correct random and burst errors. It is well known that an (n,k) RS code can correct up to (n-k)/2 random errors. When burst errors are involved, the error correcting ability of the RS code can be increased beyond (n-k)/2. It has previously been shown that RS codes can reliably correct burst errors of length greater than (n-k)/2. In this paper, a new decoding algorithm is given which can also correct a burst error of length greater than (n-k)/2.
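    For reference, the random-error capability quoted above is the standard bound for an (n, k) RS code; the burst-correcting algorithm targets single bursts longer than this t.

```latex
% Random-error correction capability of an (n, k) Reed-Solomon code, as quoted
% above; the new algorithm corrects single bursts of length greater than t.
\[
  t \;=\; \left\lfloor \frac{n-k}{2} \right\rfloor .
\]
```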

  12. Using Hierarchical Time Series Clustering Algorithm and Wavelet Classifier for Biometric Voice Classification

    PubMed Central

    Fong, Simon

    2012-01-01

    Voice biometrics has a long history in biosecurity applications such as verification and identification based on characteristics of the human voice. The other application, voice classification, which plays an important role in grouping unlabelled voice samples, has not been widely studied in research. Lately, voice classification has been found useful in phone monitoring, classifying speakers' gender, ethnicity and emotion states, and so forth. In this paper, a collection of computational algorithms are proposed to support voice classification; the algorithms are a combination of hierarchical clustering, dynamic time warping, discrete wavelet transform, and decision tree. The proposed algorithms are relatively more transparent and interpretable than the existing ones, though many techniques such as Artificial Neural Networks, Support Vector Machines, and Hidden Markov Models (which inherently function like a black box) have been applied for voice verification and voice identification. Two datasets, one generated synthetically and the other empirically collected from a past voice recognition experiment, are used to verify and demonstrate the effectiveness of our proposed voice classification algorithm. PMID:22619492
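    The dynamic time warping distance at the core of the pipeline can be sketched in a few lines of plain dynamic programming; this is not the paper's full hierarchical clustering and wavelet pipeline.

```python
# Plain dynamic-programming sketch of the dynamic time warping (DTW) distance
# used in the pipeline above; the hierarchical clustering and wavelet stages
# are not shown.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping steps
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```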

  13. Algorithm for Atmospheric Corrections of Aircraft and Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Fraser, Robert S.; Kaufman, Yoram J.; Ferrare, Richard A.; Mattoo, Shana

    1989-01-01

    A simple and fast atmospheric correction algorithm is described which is used to correct radiances of scattered sunlight measured by aircraft and/or satellite above a uniform surface. The atmospheric effect, the basic equations, a description of the computational procedure, and a sensitivity study are discussed. The program is designed to take the measured radiances, view and illumination directions, and the aerosol and gaseous absorption optical thickness to compute the radiance just above the surface, the irradiance on the surface, and surface reflectance. Alternatively, the program will compute the upward radiance at a specific altitude for a given surface reflectance, view and illumination directions, and aerosol and gaseous absorption optical thickness. The algorithm can be applied for any view and illumination directions and any wavelength in the range 0.48 micron to 2.2 micron. The relation between the measured radiance and surface reflectance, which is expressed as a function of atmospheric properties and measurement geometry, is computed using a radiative transfer routine. The results of the computations are presented in a table which forms the basis of the correction algorithm. The algorithm can be used for atmospheric corrections in the presence of a rural aerosol. The sensitivity of the derived surface reflectance to uncertainties in the model and input data is discussed.

  14. Algorithm for atmospheric corrections of aircraft and satellite imagery

    NASA Technical Reports Server (NTRS)

    Fraser, R. S.; Ferrare, R. A.; Kaufman, Y. J.; Markham, B. L.; Mattoo, S.

    1992-01-01

    A simple and fast atmospheric correction algorithm is described which is used to correct radiances of scattered sunlight measured by aircraft and/or satellite above a uniform surface. The atmospheric effect, the basic equations, a description of the computational procedure, and a sensitivity study are discussed. The program is designed to take the measured radiances, view and illumination directions, and the aerosol and gaseous absorption optical thickness to compute the radiance just above the surface, the irradiance on the surface, and surface reflectance. Alternatively, the program will compute the upward radiance at a specific altitude for a given surface reflectance, view and illumination directions, and aerosol and gaseous absorption optical thickness. The algorithm can be applied for any view and illumination directions and any wavelength in the range 0.48 micron to 2.2 microns. The relation between the measured radiance and surface reflectance, which is expressed as a function of atmospheric properties and measurement geometry, is computed using a radiative transfer routine. The results of the computations are presented in a table which forms the basis of the correction algorithm. The algorithm can be used for atmospheric corrections in the presence of a rural aerosol. The sensitivity of the derived surface reflectance to uncertainties in the model and input data is discussed.

  15. Design of the OMPS limb sensor correction algorithm

    NASA Astrophysics Data System (ADS)

    Jaross, Glen; McPeters, Richard; Seftor, Colin; Kowitt, Mark

    The Sensor Data Records (SDR) for the Ozone Mapping and Profiler Suite (OMPS) on NPOESS (National Polar-orbiting Operational Environmental Satellite System) contain geolocated and calibrated radiances, and are similar to the Level 1 data of the NASA Earth Observing System and other programs. The SDR algorithms (one for each of the 3 OMPS focal planes) are the processes by which the Raw Data Records (RDR) from the OMPS sensors are converted into the records that contain all data necessary for ozone retrievals. Consequently, the algorithms must correct and calibrate Earth signals, geolocate the data, and identify and ingest collocated ancillary data. As with other limb sensors, ozone profile retrievals are relatively insensitive to calibration errors due to the use of altitude normalization and wavelength pairing. But the profile retrievals as they pertain to OMPS are not immune to sensor changes. In particular, the OMPS Limb sensor images an altitude range of > 100 km and a spectral range of 290-1000 nm on its detector. Uncorrected sensor degradation and spectral registration drifts can lead to changes in the measured radiance profile, which in turn affects the ozone trend measurement. Since OMPS is intended for long-term monitoring, sensor calibration is a specific concern. The calibration is maintained via the ground data processing. This means that all sensor calibration data, including direct solar measurements, are brought down in the raw data and processed separately by the SDR algorithms. One of the sensor corrections performed by the algorithm is the correction for stray light. The imaging spectrometer and the unique focal plane design of OMPS make these corrections particularly challenging and important. Following an overview of the algorithm flow, we will briefly describe the sensor stray light characterization and the correction approach used in the code.

  16. A registration based nonuniformity correction algorithm for infrared line scanner

    NASA Astrophysics Data System (ADS)

    Liu, Zhe; Ma, Yong; Huang, Jun; Fan, Fan; Ma, Jiayi

    2016-05-01

    A scene-based algorithm, based on registration, is developed for nonuniformity correction in the focal plane of line scanning infrared imaging systems (LSIR). By utilizing the 2D shift between consecutive frames, an implicit scheme is proposed to determine the correction coefficients. All nonuniform biases are corrected to the same designated value, without estimating and removing biases explicitly, permitting quick computation for high-quality nonuniformity correction. Firstly, scene motion is estimated by image registration and consecutive frames exhibiting the required 2D subpixel shift are collected. Secondly, we retrieve the difference matrix of adjacent biases by utilizing the 2D shift between consecutive frames. Thirdly, we perform specified elementary transformations and corresponding cumulative sums on the difference matrix to obtain a bias compensator. This bias compensator converts nonuniform biases to a designated detector's bias. Finally, based on the different bias compensators obtained from several frame pairs, we calculate an averaged bias compensator for nonuniformity correction with less error. Quantitative comparisons with other nonuniformity correction methods demonstrate that the proposed algorithm achieves better fixed-pattern noise reduction with low computational complexity.
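    A much-simplified 1-D sketch of the bias-compensator idea is given below, for two frames related by a one-detector shift along the array; the registration step, the elementary transformations, and the averaging over many frame pairs from the paper are all omitted, and the shift geometry is an assumption.

```python
# Much-simplified sketch of the bias-compensator idea described above: frames
# of the same scene shifted by one detector give the differences of adjacent
# detector biases, and a cumulative sum references every bias to detector 0.
# The shift geometry is assumed; this is not the paper's implementation.
import numpy as np

def bias_compensator(frame_a, frame_b):
    """frame_a, frame_b: (rows, detectors); frame_b views the scene shifted by
    one detector, so frame_b[:, i] and frame_a[:, i + 1] see the same scene."""
    diff = np.mean(frame_b[:, :-1] - frame_a[:, 1:], axis=0)   # b[i] - b[i+1]
    comp = np.concatenate(([0.0], np.cumsum(diff)))            # maps b[i] -> b[0]
    return comp                                                # add to each column

# usage: corrected = frame + bias_compensator(frame_a, frame_b)[None, :]
```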

  17. A new training algorithm using artificial neural networks to classify gender-specific dynamic gait patterns.

    PubMed

    Andrade, Andre; Costa, Marcelo; Paolucci, Leopoldo; Braga, Antônio; Pires, Flavio; Ugrinowitsch, Herbert; Menzel, Hans-Joachim

    2015-01-01

    The aim of this study was to present a new training algorithm for artificial neural networks, called multi-objective least absolute shrinkage and selection operator (MOBJ-LASSO), applied to the classification of dynamic gait patterns. The movement pattern is identified by 20 characteristics from the three components of the ground reaction force, which are used as input information for the neural networks in gender-specific gait classification. The classification performance of MOBJ-LASSO (97.4%) and of the multi-objective algorithm (MOBJ) (97.1%) is similar, but the MOBJ-LASSO algorithm achieved better results than MOBJ because it is able to eliminate inputs and automatically select the parameters of the neural network. Thus, it is an effective tool for data mining using neural networks. From the 20 inputs used for training, MOBJ-LASSO selected the first and second peaks of the vertical force and the force peak in the antero-posterior direction as the variables that classify the gait patterns of the different genders.

  18. EC: an efficient error correction algorithm for short reads

    PubMed Central

    2015-01-01

    Background In highly parallel next-generation sequencing (NGS) techniques, millions to billions of short reads are produced from a genomic sequence in a single run. Due to the limitations of NGS technologies, there can be errors in the reads. The error rate of the reads can be reduced by trimming and by correcting the erroneous bases of the reads. This helps to achieve high quality data, and the computational complexity of many biological applications will be greatly reduced if the reads are first corrected. We have developed a novel error correction algorithm called EC and compared it with four other state-of-the-art algorithms using both real and simulated sequencing reads. Results We have done extensive and rigorous experiments that reveal that EC is indeed an effective, scalable, and efficient error correction tool. The real reads that we have employed in our performance evaluation are Illumina-generated short reads of various lengths. The six experimental datasets we have utilized are taken from the Sequence Read Archive (SRA) at NCBI. The simulated reads are obtained by picking substrings from random positions of reference genomes. To introduce errors, some of the bases of the simulated reads are changed to other bases with some probabilities. Conclusions Error correction is a vital problem in biology, especially for NGS data. In this paper we present a novel algorithm, called Error Corrector (EC), for correcting substitution errors in biological sequencing reads. We plan to investigate the possibility of employing the techniques introduced in this research paper to handle insertion and deletion errors as well. Software availability The implementation is freely available for non-commercial purposes. It can be downloaded from: http://engr.uconn.edu/~rajasek/EC.zip. PMID:26678663

  19. A supervised contextual classifier based on a region-growth algorithm

    NASA Astrophysics Data System (ADS)

    Lira, Jorge; Maletti, Gabriela

    2002-10-01

    A supervised classification scheme to segment optical multi-spectral images has been developed. In this classifier, an automated region-growth algorithm delineates the training sets. This algorithm handles three parameters: an initial pixel seed, a window size and a threshold for each class. A suitable pixel seed is manually implanted through visual inspection of the image classes. The best values for the window and the threshold are obtained from a spectral distance and heuristic criteria. This distance is calculated from a mathematical model of spectral separability. A pixel is incorporated into a region if a spectral homogeneity criterion is satisfied in the pixel-centered window for a given threshold. The homogeneity criterion is obtained from the model of spectral distance. The set of pixels forming a region represents a statistically valid sample of a defined class signaled by the initial pixel seed. The grown regions therefore constitute suitable training sets for each class. The classification is performed by comparing the statistical behavior of the pixel population of a sliding window with that of each class. For region-growth, a window size is employed for each class. For classification, the centered pixel of the sliding window is labeled as belonging to a class if its spectral distance to that class is a minimum. The window size used for classification is a function of the best separability between the classes. A series of examples employing synthetic and satellite images is presented to show the value of this classifier. The goodness of the segmentation is evaluated by means of the κ coefficient and a visual inspection of the results.

  20. Classifying Response Correctness across Different Task Sets: A Machine Learning Approach

    PubMed Central

    Wascher, Edmund; Falkenstein, Michael

    2016-01-01

    Erroneous behavior usually elicits a distinct pattern in neural waveforms. In particular, inspection of the concurrent recorded electroencephalograms (EEG) typically reveals a negative potential at fronto-central electrodes shortly following a response error (Ne or ERN) as well as an error-awareness-related positivity (Pe). Seemingly, the brain signal contains information about the occurrence of an error. Assuming a general error evaluation system, the question arises whether this information can be utilized in order to classify behavioral performance within or even across different cognitive tasks. In the present study, a machine learning approach was employed to investigate the outlined issue. Ne as well as Pe were extracted from the single-trial EEG signals of participants conducting a flanker and a mental rotation task and subjected to a machine learning classification scheme (via a support vector machine, SVM). Overall, individual performance in the flanker task was classified more accurately, with accuracy rates of above 85%. Most importantly, it was even feasible to classify responses across both tasks. In particular, an SVM trained on the flanker task could identify erroneous behavior with almost 70% accuracy in the EEG data recorded during the rotation task, and vice versa. Summed up, we replicate that the response-related EEG signal can be used to identify erroneous behavior within a particular task. Going beyond this, it was possible to classify response types across functionally different tasks. Therefore, the outlined methodological approach appears promising with respect to future applications. PMID:27032108
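    The cross-task test reduces to training on one task's single-trial features and scoring on the other's; a scikit-learn sketch with hypothetical Ne/Pe feature matrices follows.

```python
# Sketch of the cross-task test described above: train an SVM on the flanker
# task's single-trial Ne/Pe features and score it on the rotation task's trials
# (feature matrices and labels are hypothetical).
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def cross_task_accuracy(X_flanker, y_flanker, X_rotation, y_rotation):
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X_flanker, y_flanker)               # train on one task
    return clf.score(X_rotation, y_rotation)    # test on the other
```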

  1. Dynamic artificial bee colony algorithm for multi-parameters optimization of support vector machine-based soft-margin classifier

    NASA Astrophysics Data System (ADS)

    Yan, Yiming; Zhang, Ye; Gao, Fengjiao

    2012-12-01

    This article proposes a `dynamic' artificial bee colony (D-ABC) algorithm for solving optimization problems. It overcomes the poor performance of the artificial bee colony (ABC) algorithm when applied to multi-parameter optimization. A dynamic `activity' factor is introduced into the D-ABC algorithm to speed up convergence and improve the quality of the solution. This D-ABC algorithm is employed for multi-parameter optimization of a support vector machine (SVM)-based soft-margin classifier. Parameter optimization is important for improving the classification performance of an SVM-based classifier. Classification accuracy is defined as the objective function, and the parameters, including the `kernel parameter', `cost factor', etc., form a solution vector to be optimized. Experiments demonstrate that the D-ABC algorithm performs better than traditional methods on this optimization problem, and better SVM parameters are obtained, which lead to higher classification accuracy.
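
    The sketch below only illustrates the optimization target: cross-validated accuracy of a soft-margin RBF SVM as a function of the penalty parameter C and kernel parameter gamma. A simple random-neighbourhood search stands in for the D-ABC search loop, which is not reproduced here; the data set is synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

def objective(log_c, log_gamma):
    """Cross-validated accuracy of a soft-margin RBF SVM (the quantity D-ABC maximizes)."""
    clf = SVC(C=10.0 ** log_c, gamma=10.0 ** log_gamma)
    return cross_val_score(clf, X, y, cv=5).mean()

rng = np.random.default_rng(1)
best = np.array([0.0, -2.0])            # initial (log10 C, log10 gamma)
best_score = objective(*best)
for _ in range(30):                      # crude stand-in for the bee-colony search
    cand = best + rng.normal(scale=0.5, size=2)
    score = objective(*cand)
    if score > best_score:
        best, best_score = cand, score
print(f"C=10^{best[0]:.2f}, gamma=10^{best[1]:.2f}, CV accuracy={best_score:.3f}")
```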

  2. Achieving Algorithmic Resilience for Temporal Integration through Spectral Deferred Corrections

    SciTech Connect

    Grout, R. W.; Kolla, H.; Minion, M. L.; Bell, J. B.

    2015-04-06

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual on the first correction iteration and changes slowly between successive iterations. We demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.
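
    A hedged sketch of the resilience criterion described above: correction sweeps continue until the residual is small relative to the first sweep's residual and changes slowly between successive sweeps. The `sweep` and `residual` callables are placeholders for an actual SDC implementation, and the tolerance names are illustrative.

```python
def resilient_sdc_iterate(sweep, residual, solution,
                          rel_tol=1e-8, stall_tol=1e-2, max_sweeps=50):
    """Apply deferred-correction sweeps until the residual is small relative to
    the first sweep's residual AND changes slowly between sweeps.

    sweep(solution) performs one low-order correction sweep and returns the
    updated solution; residual(solution) returns a scalar error measure.
    """
    solution = sweep(solution)
    r_first = r_prev = residual(solution)
    for _ in range(max_sweeps - 1):
        solution = sweep(solution)
        r = residual(solution)
        small_enough = r <= rel_tol * max(r_first, 1e-300)
        stalled = abs(r - r_prev) <= stall_tol * r_prev
        if small_enough and stalled:
            break                # converged; extra sweeps have ironed out soft errors
        r_prev = r
    return solution
```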

  3. Automatic recognition of myeloma cells in microscopic images using bottleneck algorithm, modified watershed and SVM classifier.

    PubMed

    Saeedizadeh, Z; Mehri Dehnavi, A; Talebi, A; Rabbani, H; Sarrafzadeh, O; Vard, A

    2015-01-01

    Plasma cells develop from B lymphocytes, a type of white blood cell generated in the bone marrow. Plasma cells produce antibodies to fight bacteria and viruses and to stop infection and disease. Multiple myeloma is a cancer of plasma cells in which collections of abnormal plasma cells (myeloma cells) accumulate in the bone marrow. The definitive diagnosis of multiple myeloma is made by searching for myeloma cells in bone marrow slides through a microscope. Diagnosing myeloma cells from bone marrow smears is a subjective and time-consuming task for pathologists, and because the final decision depends on the human eye and opinion, there is a risk of error. Sometimes an infection in the body increases the plasma cell count, which can be wrongly diagnosed as multiple myeloma. A computer diagnostic process can reduce the diagnostic time and can also serve as a second opinion for pathologists. This study presents a computer-aided diagnostic method for diagnosing myeloma cells from bone marrow smears. First, white blood cells, consisting of plasma cells and other marrow cells, are separated from the red blood cells and the background. Then, plasma cells are distinguished from other marrow cells by feature extraction and a series of decision rules. Finally, normal plasma cells and myeloma cells are classified by a classifier. This algorithm is applied to 50 digital images obtained from bone marrow aspiration smears. These images contain 678 cells: 132 normal plasma cells, 256 myeloma cells and 290 other types of marrow cells. Applying the computer-aided diagnostic method for identifying myeloma cells to this database showed a sensitivity of 96.52%, a specificity of 93.04% and a precision of 95.28%. PMID:26457371

  4. Looking for Childhood-Onset Schizophrenia: Diagnostic Algorithms for Classifying Children and Adolescents with Psychosis

    PubMed Central

    Kataria, Rachna; Gochman, Peter; Dasgupta, Abhijit; Malley, James D.; Rapoport, Judith; Gogtay, Nitin

    2014-01-01

    Objective: Among children <13 years of age with persistent psychosis and contemporaneous decline in functioning, it is often difficult to determine if the diagnosis of childhood onset schizophrenia (COS) is warranted. Despite decades of experience, we have up to a 44% false positive screening diagnosis rate among patients identified as having probable or possible COS; final diagnoses are made following inpatient hospitalization and medication washout. Because our lengthy medication-free observation is not feasible in clinical practice, we constructed diagnostic classifiers using screening data to assist clinicians practicing in the community or academic centers. Methods: We used cross-validation, logistic regression, receiver operating characteristic (ROC) analysis, and random forest to determine the best algorithm for classifying COS (n=85) versus histories of psychosis and impaired functioning in children and adolescents who, at screening, were considered likely to have COS, but who did not meet diagnostic criteria for schizophrenia after medication washout and inpatient observation (n=53). We used demographics, clinical history measures, intelligence quotient (IQ) and screening rating scales, and number of typical and atypical antipsychotic medications as our predictors. Results: Logistic regression models using nine, four, and two predictors performed well with positive predictive values >90%, overall accuracy >77%, and areas under the curve (AUCs) >86%. Conclusions: COS can be distinguished from alternate disorders with psychosis in children and adolescents; greater levels of positive and negative symptoms and lower levels of depression combine to make COS more likely. We include a worksheet so that clinicians in the community and academic centers can predict the probability that a young patient may be schizophrenic, using only two ratings. PMID:25019955

  5. The Construction of Support Vector Machine Classifier Using the Firefly Algorithm

    PubMed Central

    Chao, Chih-Feng; Horng, Ming-Huwi

    2015-01-01

    The setting of parameters in support vector machines (SVMs) is very important with regard to their accuracy and efficiency. In this paper, we employ the firefly algorithm to train all parameters of the SVM simultaneously, including the penalty parameter, smoothness parameter, and Lagrangian multipliers. The proposed method is called the firefly-based SVM (firefly-SVM). This tool does not consider feature selection, because SVM combined with feature selection is not well suited to multiclass classification, especially for the one-against-all multiclass SVM. In the experiments, binary and multiclass classifications are explored. In the experiments on binary classification, ten benchmark data sets from the University of California, Irvine (UCI), machine learning repository are used; additionally, the firefly-SVM is applied to the multiclass diagnosis of ultrasonic supraspinatus images. The classification performance of firefly-SVM is also compared to the original LIBSVM method associated with the grid search method and to the particle swarm optimization based SVM (PSO-SVM). The experimental results advocate the use of firefly-SVM for pattern classification to achieve maximum accuracy. PMID:25802511

  6. An environment-adaptive management algorithm for hearing-support devices incorporating listening situation and noise type classifiers.

    PubMed

    Yook, Sunhyun; Nam, Kyoung Won; Kim, Heepyung; Hong, Sung Hwa; Jang, Dong Pyo; Kim, In Young

    2015-04-01

    In order to provide more consistent sound intelligibility for the hearing-impaired person, regardless of environment, it is necessary to adjust the setting of the hearing-support (HS) device to accommodate various environmental circumstances. In this study, a fully automatic HS device management algorithm that can adapt to various environmental situations is proposed; it is composed of a listening-situation classifier, a noise-type classifier, an adaptive noise-reduction algorithm, and a management algorithm that can selectively turn on/off one or more of the three basic algorithms-beamforming, noise-reduction, and feedback cancellation-and can also adjust internal gains and parameters of the wide-dynamic-range compression (WDRC) and noise-reduction (NR) algorithms in accordance with variations in environmental situations. Experimental results demonstrated that the implemented algorithms can classify both listening situation and ambient noise type situations with high accuracies (92.8-96.4% and 90.9-99.4%, respectively), and the gains and parameters of the WDRC and NR algorithms were successfully adjusted according to variations in environmental situation. The average values of signal-to-noise ratio (SNR), frequency-weighted segmental SNR, Perceptual Evaluation of Speech Quality, and mean opinion test scores of 10 normal-hearing volunteers of the adaptive multiband spectral subtraction (MBSS) algorithm were improved by 1.74 dB, 2.11 dB, 0.49, and 0.68, respectively, compared to the conventional fixed-parameter MBSS algorithm. These results indicate that the proposed environment-adaptive management algorithm can be applied to HS devices to improve sound intelligibility for hearing-impaired individuals in various acoustic environments. PMID:25284135

  7. An environment-adaptive management algorithm for hearing-support devices incorporating listening situation and noise type classifiers.

    PubMed

    Yook, Sunhyun; Nam, Kyoung Won; Kim, Heepyung; Hong, Sung Hwa; Jang, Dong Pyo; Kim, In Young

    2015-04-01

    In order to provide more consistent sound intelligibility for the hearing-impaired person, regardless of environment, it is necessary to adjust the setting of the hearing-support (HS) device to accommodate various environmental circumstances. In this study, a fully automatic HS device management algorithm that can adapt to various environmental situations is proposed; it is composed of a listening-situation classifier, a noise-type classifier, an adaptive noise-reduction algorithm, and a management algorithm that can selectively turn on/off one or more of the three basic algorithms-beamforming, noise-reduction, and feedback cancellation-and can also adjust internal gains and parameters of the wide-dynamic-range compression (WDRC) and noise-reduction (NR) algorithms in accordance with variations in environmental situations. Experimental results demonstrated that the implemented algorithms can classify both listening situation and ambient noise type situations with high accuracies (92.8-96.4% and 90.9-99.4%, respectively), and the gains and parameters of the WDRC and NR algorithms were successfully adjusted according to variations in environmental situation. The average values of signal-to-noise ratio (SNR), frequency-weighted segmental SNR, Perceptual Evaluation of Speech Quality, and mean opinion test scores of 10 normal-hearing volunteers of the adaptive multiband spectral subtraction (MBSS) algorithm were improved by 1.74 dB, 2.11 dB, 0.49, and 0.68, respectively, compared to the conventional fixed-parameter MBSS algorithm. These results indicate that the proposed environment-adaptive management algorithm can be applied to HS devices to improve sound intelligibility for hearing-impaired individuals in various acoustic environments.

  8. Goldindec: A Novel Algorithm for Raman Spectrum Baseline Correction

    PubMed Central

    Liu, Juntao; Sun, Jianyang; Huang, Xiuzhen; Li, Guojun; Liu, Binqiang

    2016-01-01

    Raman spectra have been widely used in biology, physics, and chemistry and have become an essential tool for the studies of macromolecules. Nevertheless, the raw Raman signal is often obscured by a broad background curve (or baseline) due to the intrinsic fluorescence of the organic molecules, which leads to unpredictable negative effects in quantitative analysis of Raman spectra. Therefore, it is essential to correct this baseline before analyzing raw Raman spectra. Polynomial fitting has proven to be the most convenient and simplest method and has high accuracy. In polynomial fitting, the cost function used and its parameters are crucial. This article proposes a novel iterative algorithm named Goldindec, freely available for noncommercial use as noted in text, with a new cost function that not only overcomes the influence of large peaks but also solves the problem of low correction accuracy when the number of peaks is high. Goldindec automatically generates parameters from the raw data rather than by empirical choice, as in previous methods. Comparisons with other algorithms on the benchmark data show that Goldindec has a higher accuracy and computational efficiency, and is hardly affected by large peaks, peak number, and wavenumber. PMID:26037638
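
    The sketch below shows the general shape of iterative polynomial baseline fitting for Raman spectra; it uses a simple clip-below-the-fit rule rather than Goldindec's asymmetric cost function or its automatic parameter generation, and all parameter values are illustrative.

```python
import numpy as np

def iterative_poly_baseline(wavenumbers, intensities, degree=5, n_iter=100, tol=1e-4):
    """Iteratively fit a polynomial baseline while suppressing Raman peaks.

    Generic ModPoly-style loop (not Goldindec's cost function): after each fit,
    points above the fitted curve are clipped down so peaks progressively stop
    pulling the baseline upward.
    """
    y = intensities.astype(float).copy()
    baseline = y
    for _ in range(n_iter):
        coeffs = np.polyfit(wavenumbers, y, degree)
        baseline = np.polyval(coeffs, wavenumbers)
        y_new = np.minimum(y, baseline)           # strip everything above the fit
        if np.linalg.norm(y_new - y) / max(np.linalg.norm(y), 1e-12) < tol:
            y = y_new
            break
        y = y_new
    return baseline

# Usage sketch: corrected = raw_spectrum - iterative_poly_baseline(wn, raw_spectrum)
```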

  9. Efficient single image non-uniformity correction algorithm

    NASA Astrophysics Data System (ADS)

    Tendero, Y.; Gilles, J.; Landeau, S.; Morel, J. M.

    2010-10-01

    This paper introduces a new way to correct the non-uniformity (NU) in uncooled infrared-type images. The main defect of these uncooled images is the lack of a column (resp. line) time-dependent cross-calibration, resulting in a strong column (resp. line) and time-dependent noise. This problem can be considered as a 1D flicker of the columns inside each frame. Thus, classic movie deflickering algorithms can be adapted to equalize the columns (resp. the lines). The proposed method therefore applies a movie deflickering algorithm to the series formed by the columns of an infrared image. The resulting single-image method works on static images, and therefore requires no registration, no camera motion compensation, and no closed-aperture sensor equalization. Thus, the method has only one camera-dependent parameter and is landscape independent. This simple method is compared to a state-of-the-art total variation single-image correction on raw real and simulated images. The method is real time, requiring only two operations per pixel. It involves no test-pattern calibration and produces no "ghost artifacts".
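
    A simplified sketch of the column-equalization idea: each column is treated like a frame of a movie and matched to its neighbours. Plain moment (mean/variance) matching is used here as a crude stand-in for the movie-deflickering equalization used by the authors; the window radius is an assumed parameter.

```python
import numpy as np

def deflicker_columns(img, radius=8):
    """Single-image column non-uniformity correction (simplified sketch).

    Each column's mean and standard deviation are matched to those of a sliding
    window of neighbouring columns, removing column-wise gain/offset 'flicker'.
    """
    img = img.astype(float)
    h, w = img.shape
    out = np.empty_like(img)
    col_mean = img.mean(axis=0)
    col_std = img.std(axis=0) + 1e-12
    for c in range(w):
        lo, hi = max(0, c - radius), min(w, c + radius + 1)
        ref_mean = col_mean[lo:hi].mean()
        ref_std = col_std[lo:hi].mean()
        out[:, c] = (img[:, c] - col_mean[c]) / col_std[c] * ref_std + ref_mean
    return out
```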

  10. The Algorithm Theoretical Basis Document for Tidal Corrections

    NASA Technical Reports Server (NTRS)

    Fricker, Helen A.; Ridgway, Jeff R.; Minster, Jean-Bernard; Yi, Donghui; Bentley, Charles R.

    2012-01-01

    This Algorithm Theoretical Basis Document deals with the tidal corrections that need to be applied to range measurements made by the Geoscience Laser Altimeter System (GLAS). These corrections result from the action of ocean tides and Earth tides which lead to deviations from an equilibrium surface. Since the effect of tides is dependent on the time of measurement, it is necessary to remove the instantaneous tide components when processing altimeter data, so that all measurements are made to the equilibrium surface. The three main tide components to consider are the ocean tide, the solid-earth tide and the ocean loading tide. There are also long period ocean tides and the pole tide. The approximate magnitudes of these components are illustrated in Table 1, together with estimates of their uncertainties (i.e. the residual error after correction). All of these components are important for GLAS measurements over the ice sheets since centimeter-level accuracy for surface elevation change detection is required. The effect of each tidal component is to be removed by approximating its magnitude using tidal prediction models. Conversely, assimilation of GLAS measurements into tidal models will help to improve them, especially at high latitudes.

  11. A blind test of correction algorithms for daily inhomogeneities

    NASA Astrophysics Data System (ADS)

    Stepanek, Petr; Venema, Victor; Guijarro, Jose; Nemec, Johanna; Zahradnicek, Pavel; Hadzimustafic, Jasmina

    2013-04-01

    As part of the COST Action HOME (Advances in homogenisation methods of climate series: an integrated approach), a dataset was generated that serves as a validation tool for correction of daily inhomogeneities. The dataset contains daily air temperature data and was generated based on the temperature series from the Czech Republic. The validation dataset has three different types of series: network, pair and pair-dedicated data. Different types of inhomogeneities have been inserted into the series. Parametric breaks in the first three moments were introduced and the influence of relocation was simulated by exchanging the distribution of two nearby stations. The participants returned several contributions, including methods that are currently used: HOM, SPLIDHOM (with various modifications like HOMAD and bootstrapped SPLIDHOM), QM (RHtestsV3 software), DAP (ProClimDB), HCL (Climatol), MASH, and also the simple delta method. The quality of the homogenised data was measured by a large range of metrics; the most important ones are the RMSE and the trends in the moments. Thanks to the RHtestsV3 algorithms, we could also assess relative and absolute homogenization results. As expected, the simpler methods, correcting only the mean, are best at reducing the RMSE. For more information on the COST Action on homogenisation see: http://www.homogenisation.org/

  12. Structure and weights optimisation of a modified Elman network emotion classifier using hybrid computational intelligence algorithms: a comparative study

    NASA Astrophysics Data System (ADS)

    Sheikhan, Mansour; Abbasnezhad Arabi, Mahdi; Gharavian, Davood

    2015-10-01

    Artificial neural networks are efficient models in pattern recognition applications, but their performance is dependent on employing suitable structure and connection weights. This study used a hybrid method for obtaining the optimal weight set and architecture of a recurrent neural emotion classifier based on gravitational search algorithm (GSA) and its binary version (BGSA), respectively. By considering the features of speech signal that were related to prosody, voice quality, and spectrum, a rich feature set was constructed. To select more efficient features, a fast feature selection method was employed. The performance of the proposed hybrid GSA-BGSA method was compared with similar hybrid methods based on particle swarm optimisation (PSO) algorithm and its binary version, PSO and discrete firefly algorithm, and hybrid of error back-propagation and genetic algorithm that were used for optimisation. Experimental tests on Berlin emotional database demonstrated the superior performance of the proposed method using a lighter network structure.

  13. Development of an Algorithm to Classify Colonoscopy Indication from Coded Health Care Data

    PubMed Central

    Adams, Kenneth F.; Johnson, Eric A.; Chubak, Jessica; Kamineni, Aruna; Doubeni, Chyke A.; Buist, Diana S.M.; Williams, Andrew E.; Weinmann, Sheila; Doria-Rose, V. Paul; Rutter, Carolyn M.

    2015-01-01

    Introduction: Electronic health data are potentially valuable resources for evaluating colonoscopy screening utilization and effectiveness. The ability to distinguish screening colonoscopies from exams performed for other purposes is critical for research that examines factors related to screening uptake and adherence, and the impact of screening on patient outcomes, but distinguishing between these indications in secondary health data proves challenging. The objective of this study is to develop a new and more accurate algorithm for identification of screening colonoscopies using electronic health data. Methods: Data from a case-control study of colorectal cancer with adjudicated colonoscopy indication were used to develop logistic regression-based algorithms. The proposed algorithms predict the probability that a colonoscopy was indicated for screening, with variables selected for inclusion in the models using the Least Absolute Shrinkage and Selection Operator (LASSO). Results: The algorithms had excellent classification accuracy in internal validation. The primary, restricted model had AUC = 0.94, sensitivity = 0.91, and specificity = 0.82. The secondary, extended model had AUC = 0.96, sensitivity = 0.88, and specificity = 0.90. Discussion: The LASSO approach enabled estimation of parsimonious algorithms that identified screening colonoscopies with high accuracy in our study population. External validation is needed to replicate these results and to explore the performance of these algorithms in other settings. PMID:26290883
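
    A minimal sketch of LASSO-penalized logistic regression for this kind of indication classifier, using scikit-learn with synthetic placeholder features; the study's actual coded health care predictors and tuning choices are not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Placeholder features standing in for coded diagnosis/procedure/demographic variables.
X, y = make_classification(n_samples=2000, n_features=40, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# An L1 penalty performs LASSO-style variable selection inside logistic regression.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X_tr, y_tr)

auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
n_selected = int(np.sum(model.coef_ != 0))
print(f"AUC={auc:.2f}, predictors retained={n_selected}")
```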

  14. Classifying performance impairment in response to sleep loss using pattern recognition algorithms on single session testing.

    PubMed

    St Hilaire, Melissa A; Sullivan, Jason P; Anderson, Clare; Cohen, Daniel A; Barger, Laura K; Lockley, Steven W; Klerman, Elizabeth B

    2013-01-01

    There is currently no "gold standard" marker of cognitive performance impairment resulting from sleep loss. We utilized pattern recognition algorithms to determine which features of data collected under controlled laboratory conditions could most reliably identify cognitive performance impairment in response to sleep loss using data from only one testing session, such as would occur in the "real world" or field conditions. A training set for testing the pattern recognition algorithms was developed using objective Psychomotor Vigilance Task (PVT) and subjective Karolinska Sleepiness Scale (KSS) data collected from laboratory studies during which subjects were sleep deprived for 26-52h. The algorithm was then tested in data from both laboratory and field experiments. The pattern recognition algorithm was able to identify performance impairment with a single testing session in individuals studied under laboratory conditions using PVT, KSS, length of time awake and time of day information with sensitivity and specificity as high as 82%. When this algorithm was tested on data collected under real-world conditions from individuals whose data were not in the training set, accuracy of predictions for individuals categorized with low performance impairment were as high as 98%. Predictions for medium and severe performance impairment were less accurate. We conclude that pattern recognition algorithms may be a promising method for identifying performance impairment in individuals using only current information about the individual's behavior. Single testing features (e.g., number of PVT lapses) with high correlation with performance impairment in the laboratory setting may not be the best indicators of performance impairment under real-world conditions. Pattern recognition algorithms should be further tested for their ability to be used in conjunction with other assessments of sleepiness in real-world conditions to quantify performance impairment in response to sleep loss.
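
    A hedged sketch of a single-session classifier built on the four feature types named above (PVT lapses, KSS rating, time awake, time of day). The data and the random-forest choice are illustrative stand-ins, not the study's actual pattern recognition algorithm or labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
# Placeholder single-session features, one row per testing session.
X = np.column_stack([
    rng.poisson(3, n),            # number of PVT lapses
    rng.integers(1, 10, n),       # KSS rating (1-9)
    rng.uniform(0, 50, n),        # length of time awake (h)
    rng.uniform(0, 24, n),        # time of day (h)
])
y = (X[:, 2] > 24).astype(int)    # toy impairment label for illustration only

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```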

  15. Classifying performance impairment in response to sleep loss using pattern recognition algorithms on single session testing

    PubMed Central

    St. Hilaire, Melissa A.; Sullivan, Jason P.; Anderson, Clare; Cohen, Daniel A.; Barger, Laura K.; Lockley, Steven W.; Klerman, Elizabeth B.

    2012-01-01

    There is currently no “gold standard” marker of cognitive performance impairment resulting from sleep loss. We utilized pattern recognition algorithms to determine which features of data collected under controlled laboratory conditions could most reliably identify cognitive performance impairment in response to sleep loss using data from only one testing session, such as would occur in the “real world” or field conditions. A training set for testing the pattern recognition algorithms was developed using objective Psychomotor Vigilance Task (PVT) and subjective Karolinska Sleepiness Scale (KSS) data collected from laboratory studies during which subjects were sleep deprived for 26 – 52 hours. The algorithm was then tested in data from both laboratory and field experiments. The pattern recognition algorithm was able to identify performance impairment with a single testing session in individuals studied under laboratory conditions using PVT, KSS, length of time awake and time of day information with sensitivity and specificity as high as 82%. When this algorithm was tested on data collected under real-world conditions from individuals whose data were not in the training set, accuracy of predictions for individuals categorized with low performance impairment were as high as 98%. Predictions for medium and severe performance impairment were less accurate. We conclude that pattern recognition algorithms may be a promising method for identifying performance impairment in individuals using only current information about the individual’s behavior. Single testing features (e.g., number of PVT lapses) with high correlation with performance impairment in the laboratory setting may not be the best indicators of performance impairment under real-world conditions. Pattern recognition algorithms should be further tested for their ability to be used in conjunction with other assessments of sleepiness in real-world conditions to quantify performance impairment in response to sleep loss.

  16. An algorithm for the treatment of schizophrenia in the correctional setting: the Forensic Algorithm Project.

    PubMed

    Buscema, C A; Abbasi, Q A; Barry, D J; Lauve, T H

    2000-10-01

    The Forensic Algorithm Project (FAP) was born of the need for a holistic approach in the treatment of the inmate with schizophrenia. Schizophrenia was chosen as the first entity to be addressed by the algorithm because of its refractory nature and high rate of recidivism in the correctional setting. Schizophrenia is regarded as a spectrum disorder, with symptom clusters and behaviors ranging from positive to negative symptoms to neurocognitive dysfunction and affective instability. Furthermore, the clinical picture is clouded by Axis II symptomatology (particularly prominent in the inmate population), comorbid Axis I disorders, and organicity. Four subgroups of schizophrenia were created to coincide with common clinical presentations in the forensic inpatient facility and also to parallel 4 tracks of intervention, consisting of pharmacologic management and programming recommendations. The algorithm begins with any antipsychotic medication and proceeds to atypical neuroleptic usage, augmentation with other psychotropic agents, and, finally, the use of clozapine as the common pathway for refractory schizophrenia. Outcome measurement of pharmacologic intervention is assessed every 6 weeks through the use of a 4-item subscale, specific for each forensic subgroup. A "floating threshold" of 40% symptom severity reduction on Positive and Negative Syndrome Scale and Brief Psychiatric Rating Scale items over a 6-week period is considered an indication for neuroleptic continuation. The forensic algorithm differs from other clinical practice guidelines in that specific programming in certain prison environments is stipulated. Finally, a social commentary on the importance of state-of-the-art psychiatric treatment for all members of society is woven into the clinical tapestry of this article. PMID:11078038
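
    The "floating threshold" decision rule can be written down directly; this tiny sketch only encodes the 40% symptom-severity-reduction check described in the abstract, not the full four-track algorithm, and the function and argument names are hypothetical.

```python
def continue_neuroleptic(baseline_score: float, week6_score: float,
                         threshold: float = 0.40) -> bool:
    """'Floating threshold' rule sketched from the abstract: keep the current
    neuroleptic if symptom severity on the 4-item PANSS/BPRS subscale has
    dropped by at least 40% over the 6-week block."""
    reduction = (baseline_score - week6_score) / baseline_score
    return reduction >= threshold

# Example: a drop from 20 to 11 is a 45% reduction, so the drug is continued.
print(continue_neuroleptic(20.0, 11.0))  # True
```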

  17. Atmospheric Correction Prototype Algorithm for High Spatial Resolution Multispectral Earth Observing Imaging Systems

    NASA Technical Reports Server (NTRS)

    Pagnutti, Mary

    2006-01-01

    This viewgraph presentation reviews the creation of a prototype algorithm for atmospheric correction using high spatial resolution earth observing imaging systems. The objective of the work was to evaluate accuracy of a prototype algorithm that uses satellite-derived atmospheric products to generate scene reflectance maps for high spatial resolution (HSR) systems. This presentation focused on preliminary results of only the satellite-based atmospheric correction algorithm.

  18. A fuzzy hill-climbing algorithm for the development of a compact associative classifier

    NASA Astrophysics Data System (ADS)

    Mitra, Soumyaroop; Lam, Sarah S.

    2012-02-01

    Classification, a data mining technique, has widespread applications including medical diagnosis, targeted marketing, and others. Knowledge discovery from databases in the form of association rules is one of the important data mining tasks. An integrated approach, classification based on association rules, has drawn the attention of the data mining community over the last decade. While attention has been mainly focused on increasing classifier accuracies, not much effort has been devoted to building interpretable and less complex models. This paper discusses the development of a compact associative classification model using a hill-climbing approach and fuzzy sets. The proposed methodology builds the rule base by selecting rules which contribute towards increasing training accuracy, thus balancing classification accuracy with the number of classification association rules. The results indicated that the proposed associative classification model can achieve competitive accuracies on benchmark datasets with continuous attributes and offers better interpretability when compared with other rule-based systems.
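
    A sketch of the hill-climbing rule-selection idea: rules are added greedily only while they increase training accuracy, which keeps the rule base compact. The fuzzy-set handling of continuous attributes is omitted; rule predicates are plain booleans here and all names are illustrative.

```python
import numpy as np

def hill_climb_rule_base(candidate_rules, X, y, max_rules=50):
    """Greedily build a compact rule base. `candidate_rules` is a list of
    (predicate, label) pairs with predicate(x) -> bool; uncovered samples fall
    back to the majority class. `y` must contain non-negative integer labels.
    """
    default = np.bincount(y).argmax()

    def accuracy(rules):
        preds = []
        for x in X:
            for pred, label in rules:
                if pred(x):
                    preds.append(label)
                    break
            else:
                preds.append(default)
        return np.mean(np.array(preds) == y)

    selected, best_acc = [], accuracy([])
    remaining = list(candidate_rules)
    while remaining and len(selected) < max_rules:
        acc, idx = max((accuracy(selected + [r]), i) for i, r in enumerate(remaining))
        if acc <= best_acc:
            break                       # no remaining rule improves training accuracy
        selected.append(remaining.pop(idx))
        best_acc = acc
    return selected, best_acc
```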

  19. Temporal high-pass non-uniformity correction algorithm based on grayscale mapping and hardware implementation

    NASA Astrophysics Data System (ADS)

    Jin, Minglei; Jin, Weiqi; Li, Yiyang; Li, Shuo

    2015-08-01

    In this paper, we propose a novel scene-based non-uniformity correction algorithm for infrared image processing: a temporal high-pass non-uniformity correction algorithm based on grayscale mapping (THP and GM). The main sources of non-uniformity are: (1) detector fabrication inaccuracies; (2) non-linearity and variations in the read-out electronics and (3) optical path effects. The non-uniformity is reduced by non-uniformity correction (NUC) algorithms. NUC algorithms are often divided into calibration-based non-uniformity correction (CBNUC) algorithms and scene-based non-uniformity correction (SBNUC) algorithms. As non-uniformity drifts temporally, CBNUC algorithms must be repeated by inserting a uniform radiation source into the view, which SBNUC algorithms do not need, so SBNUC algorithms become an essential part of an infrared imaging system. The poor robustness of SBNUC algorithms often leads to two defects: artifacts and over-correction; meanwhile, due to the complicated calculation process and large storage consumption, hardware implementation of SBNUC algorithms is difficult, especially on a Field Programmable Gate Array (FPGA) platform. The THP and GM algorithm proposed in this paper can eliminate the non-uniformity without causing these defects. The hardware implementation of the algorithm, based only on an FPGA, has two advantages: (1) low resource consumption and (2) small hardware delay of less than 20 lines. It can be transplanted to a variety of infrared detectors equipped with an FPGA image processing module, and it can reduce both stripe non-uniformity and ripple non-uniformity.

  20. [Locally Dynamically Moving Average Algorithm for the Fully Automated Baseline Correction of Raman Spectrum].

    PubMed

    Gao, Peng-fei; Yang, Rui; Ji, Jiang; Guo, Han-ming; Hu, Qi; Zhuang, Song-lin

    2015-05-01

    Baseline correction is an extremely important spectral preprocessing step and can significantly improve the accuracy of subsequent spectral analysis algorithms. At present, most baseline correction algorithms are manual or semi-automated. Manual baseline correction depends on user experience, and its accuracy is greatly affected by this subjective factor. Semi-automated baseline correction needs different optimization parameters for different Raman spectra, which is inconvenient for users. In this paper, a locally dynamically moving average algorithm (LDMA) for fully automated baseline correction is presented and its basic ideas and steps are demonstrated in detail. In the LDMA algorithm, a modified moving average (MMA) algorithm is used to strip the Raman peaks. By automatically finding the baseline subintervals of the raw Raman spectrum to divide the total spectral range into multiple Raman peak subintervals, the LDMA algorithm succeeds in dynamically changing the window half-width of the MMA algorithm and controlling the number of smoothing iterations in each Raman peak subinterval. Hence, the phenomena of over-correction and under-correction are avoided to the greatest possible degree. The LDMA algorithm achieves good results not only on synthetic Raman spectra with convex, exponential, or sigmoidal baselines but also on real Raman spectra.
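
    A stripped-down sketch of the moving-average peak-stripping idea underlying this family of methods: smooth and clip from above so peaks vanish while the slowly varying baseline survives. The adaptive, per-subinterval window half-widths and iteration control that define LDMA are not reproduced here.

```python
import numpy as np

def moving_average_baseline(y, half_width=50, n_iter=30):
    """Estimate a Raman baseline by repeatedly smoothing the spectrum and
    keeping the lower envelope, so peaks are gradually stripped away."""
    baseline = np.asarray(y, dtype=float).copy()
    kernel = np.ones(2 * half_width + 1) / (2 * half_width + 1)
    for _ in range(n_iter):
        padded = np.pad(baseline, half_width, mode="edge")
        smoothed = np.convolve(padded, kernel, mode="valid")
        baseline = np.minimum(baseline, smoothed)   # keep the lower envelope
    return baseline

# Usage sketch: corrected = spectrum - moving_average_baseline(spectrum)
```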

  1. Algorithms for Relative Radiometric Correction in Earth Observing Systems Resource-P and Canopus-V

    NASA Astrophysics Data System (ADS)

    Zenin, V. A.; Eremeev, V. V.; Kuznetcov, A. E.

    2016-06-01

    The present paper considers two algorithms for the relative radiometric correction of information obtained from the multimatrix imagery instrument of the spacecraft "Resource-P" and the frame imagery systems of the spacecraft "Canopus-V". The first algorithm is intended for the elimination of vertical stripes on the image that are caused by differences in the transfer characteristics of the CCD matrices and CCD detectors. Correction coefficients are determined on the basis of an analysis of images that are homogeneous in brightness. The second algorithm ensures the acquisition of microframes homogeneous in brightness, from which seamless images of the Earth's surface are synthesized. Examples of practical usage of the developed algorithms are given.

  2. Non-uniformity correction for infrared focal plane array with image based on neural network algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Tingting; Yu, Junsheng; Zhou, Yun; Xing, Yanmin; Jiang, Yadong

    2010-10-01

    The non-uniform response of detectors based on an infrared focal plane array (IRFPA) results in fixed pattern noise (FPN) due to non-uniformity in the detector materials and the fabrication technology. Once fixed pattern noise is added to the infrared image, the focal plane image quality is seriously degraded. Non-uniformity correction (NUC) is therefore a key technology in IRFPA applications. This paper briefly introduces the traditional neural network algorithm and puts forward an improved neural network algorithm for NUC of infrared focal plane arrays. The main improvement is focused on the estimation method for the desired image. The algorithm analyzes the image array, correcting the data both in space and in time. The corrected image in this paper is estimated from three successive frames of the infrared data sequence. It was found that the image estimated and corrected by the new algorithm is closer to the real image than the estimates obtained with other algorithms. Moreover, we simulated the proposed algorithm using Matlab. The results showed that the spatial and temporal co-correction method produces images that are more realistic than those of the original algorithm.
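
    For context, the classical neural-network (Scribner-style) scene-based NUC that this and the following record build on can be sketched as below: the desired image is a spatial four-neighbour average and per-pixel gain/offset are adapted by an LMS rule. The improved desired-image estimators discussed in the abstracts (three-frame estimation, motion and edge detection) are not included; learning rate and names are illustrative.

```python
import numpy as np

def nn_nuc_update(frames, lr=0.05):
    """Classical neural-network scene-based NUC sketch.

    frames: iterable of 2D raw IRFPA frames. Each pixel has a gain and offset;
    the 'desired' image is the 4-neighbour spatial average of the corrected
    frame, and an LMS rule nudges gain/offset toward it frame by frame.
    """
    h, w = frames[0].shape
    gain, offset = np.ones((h, w)), np.zeros((h, w))
    for raw in frames:
        corrected = gain * raw + offset
        desired = 0.25 * (np.roll(corrected, 1, 0) + np.roll(corrected, -1, 0) +
                          np.roll(corrected, 1, 1) + np.roll(corrected, -1, 1))
        error = corrected - desired
        gain -= lr * error * raw            # LMS updates of per-pixel parameters
        offset -= lr * error
    return gain, offset
```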

  3. An enhanced non-uniformity correction algorithm for IRFPA based on neural network

    NASA Astrophysics Data System (ADS)

    Wang, BingJian; Liu, ShangQian; Bai, LiPing

    2008-04-01

    Influenced by non-uniformity in detector materials, growth and etching techniques, etc., each detector in an infrared focal plane array (IRFPA) has a different responsivity, which results in non-uniformity of the IRFPA. This non-uniformity generates fixed pattern noise (FPN) that is superimposed on the infrared image and may degrade the infrared image quality, which greatly limits the application of IRFPAs. Non-uniformity correction (NUC) is therefore an important technique for IRFPAs. The traditional neural-network-based non-uniformity correction algorithm and its modified versions are analyzed in this paper, and a new improved non-uniformity correction algorithm based on a neural network is proposed. In this algorithm, the desired image is estimated by using three successive images in an infrared sequence, and the blurring effect caused by motion is avoided by applying implicit motion detection and edge detection. The estimated image is therefore closer to the real image than the estimates produced by other algorithms, which results in a fast convergence speed of the correction parameters. A comparison is made between these algorithms, and experimental results show that the algorithm proposed in this paper can correct the non-uniformity of an IRFPA effectively and prevails over other algorithms based on neural networks.

  4. Weighted SVD algorithm for close-orbit correction and 10 Hz feedback in RHIC

    SciTech Connect

    Liu C.; Hulsart, R.; Marusic, A.; Michnoff, R.; Minty, M.; Ptitsyn, V.

    2012-05-20

    Measurements of the beam position along an accelerator are typically treated equally in standard SVD-based orbit correction algorithms, thereby distributing the residual errors, modulo the local beta function, equally among the measurement locations. However, sometimes a more stable orbit at select locations is desirable. In this paper, we introduce an algorithm for weighting the beam position measurements to achieve a more stable local orbit. The results of its application to close-orbit correction and 10 Hz orbit feedback are presented.
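
    A minimal numpy sketch of weighted SVD orbit correction under the usual linear-response assumption: BPM readings are scaled by their weights before the SVD, so heavily weighted locations end up with smaller residual orbit. Variable names and the optional truncation are illustrative, not taken from the paper.

```python
import numpy as np

def weighted_orbit_correction(response, orbit, weights, n_singular=None):
    """Compute corrector kick increments that minimise ||W (R * dtheta + x)||.

    response: (n_bpm, n_corrector) orbit response matrix R.
    orbit:    measured BPM positions x.
    weights:  per-BPM weights (larger weight -> smaller residual there).
    """
    W = np.diag(weights)
    U, s, Vt = np.linalg.svd(W @ response, full_matrices=False)
    if n_singular is not None:            # optionally truncate small singular values
        U, s, Vt = U[:, :n_singular], s[:n_singular], Vt[:n_singular]
    # Least-squares solution of W R dtheta = -W x via the SVD pseudo-inverse.
    dtheta = Vt.T @ ((U.T @ (W @ -orbit)) / s)
    return dtheta
```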

  5. Phase correction algorithms for a snapshot hyperspectral imaging system

    NASA Astrophysics Data System (ADS)

    Chan, Victoria C.; Kudenov, Michael; Dereniak, Eustace

    2015-09-01

    We present image processing algorithms that improve spatial and spectral resolution on the Snapshot Hyperspectral Imaging Fourier Transform (SHIFT) spectrometer. Final measurements are stored in the form of three-dimensional datacubes containing the scene's spatial and spectral information. We discuss calibration procedures, review post-processing methods, and present preliminary results from proof-of-concept experiments.

  6. Self-Correcting HVAC Controls: Algorithms for Sensors and Dampers in Air-Handling Units

    SciTech Connect

    Fernandez, Nicholas; Brambley, Michael R.; Katipamula, Srinivas

    2009-12-31

    This report documents the self-correction algorithms developed in the Self-Correcting Heating, Ventilating and Air-Conditioning (HVAC) Controls project funded jointly by the Bonneville Power Administration and the Building Technologies Program of the U.S. Department of Energy. The algorithms address faults for temperature sensors, humidity sensors, and dampers in air-handling units and correction of persistent manual overrides of automated control systems. All faults considered create energy waste when left uncorrected as is frequently the case in actual systems.

  7. Improved near-infrared ocean reflectance correction algorithm for satellite ocean color data processing.

    PubMed

    Jiang, Lide; Wang, Menghua

    2014-09-01

    A new approach for the near-infrared (NIR) ocean reflectance correction in atmospheric correction for satellite ocean color data processing in coastal and inland waters is proposed, which combines the advantages of three existing NIR ocean reflectance correction algorithms, i.e., that of Bailey et al. (2010) [Opt. Express 18, 7521 (2010)] and two others [Appl. Opt. 39, 897 (2000); Opt. Express 20, 741 (2012)], and is named BMW. The normalized water-leaving radiance spectra nLw(λ) obtained from this new NIR-based atmospheric correction approach are evaluated against those obtained from the shortwave infrared (SWIR)-based atmospheric correction algorithm, as well as those from some existing NIR atmospheric correction algorithms based on several case studies. The scenes selected for case studies are obtained from two different satellite ocean color sensors, i.e., the Moderate Resolution Imaging Spectroradiometer (MODIS) on the satellite Aqua and the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (SNPP), with an emphasis on several turbid water regions in the world. The new approach has been shown to produce nLw(λ) spectra most consistent with the SWIR results among all NIR algorithms. Furthermore, validations against the in situ measurements also show that in less turbid water regions the new approach produces reasonable and similar results comparable to the current operational algorithm. In addition, by combining the new NIR atmospheric correction with the SWIR-based approach, the new NIR-SWIR atmospheric correction can produce further improved ocean color products. The new NIR atmospheric correction can be implemented in a global operational satellite ocean color data processing system.

  8. Improved near-infrared ocean reflectance correction algorithm for satellite ocean color data processing.

    PubMed

    Jiang, Lide; Wang, Menghua

    2014-09-01

    A new approach for the near-infrared (NIR) ocean reflectance correction in atmospheric correction for satellite ocean color data processing in coastal and inland waters is proposed, which combines the advantages of three existing NIR ocean reflectance correction algorithms, i.e., that of Bailey et al. (2010) [Opt. Express 18, 7521 (2010)] and two others [Appl. Opt. 39, 897 (2000); Opt. Express 20, 741 (2012)], and is named BMW. The normalized water-leaving radiance spectra nLw(λ) obtained from this new NIR-based atmospheric correction approach are evaluated against those obtained from the shortwave infrared (SWIR)-based atmospheric correction algorithm, as well as those from some existing NIR atmospheric correction algorithms based on several case studies. The scenes selected for case studies are obtained from two different satellite ocean color sensors, i.e., the Moderate Resolution Imaging Spectroradiometer (MODIS) on the satellite Aqua and the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (SNPP), with an emphasis on several turbid water regions in the world. The new approach has been shown to produce nLw(λ) spectra most consistent with the SWIR results among all NIR algorithms. Furthermore, validations against the in situ measurements also show that in less turbid water regions the new approach produces reasonable and similar results comparable to the current operational algorithm. In addition, by combining the new NIR atmospheric correction with the SWIR-based approach, the new NIR-SWIR atmospheric correction can produce further improved ocean color products. The new NIR atmospheric correction can be implemented in a global operational satellite ocean color data processing system. PMID:25321543

  9. Comparison of atmospheric correction algorithms for the Coastal Zone Color Scanner

    NASA Technical Reports Server (NTRS)

    Tanis, F. J.; Jain, S. C.

    1984-01-01

    Before Nimbus-7 Coastal Zone Color Scanner (CZCS) data can be used to distinguish between coastal water types, methods must be developed for the removal of spatial variations in aerosol path radiance. These variations can dominate radiance measurements made by the satellite. An assessment is presently made of the ability of four different algorithms to quantitatively remove haze effects; each was adapted for the extraction of the required scene-dependent parameters during an initial pass through the data set. The CZCS correction algorithms considered are (1) the Gordon (1981, 1983) algorithm; (2) the Smith and Wilson (1981) iterative algorithm; (3) the pseudo-optical depth method; and (4) the residual component algorithm.

  10. Approximate string matching algorithms for limited-vocabulary OCR output correction

    NASA Astrophysics Data System (ADS)

    Lasko, Thomas A.; Hauser, Susan E.

    2000-12-01

    Five methods for matching words mistranslated by optical character recognition to their most likely match in a reference dictionary were tested on data from the archives of the National Library of Medicine. The methods, including an adaptation of the cross correlation algorithm, the generic edit distance algorithm, the edit distance algorithm with a probabilistic substitution matrix, Bayesian analysis, and Bayesian analysis on an actively thinned reference dictionary were implemented and their accuracy rates compared. Of the five, the Bayesian algorithm produced the most correct matches (87%), and had the advantage of producing scores that have a useful and practical interpretation.
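
    The edit-distance-with-substitution-matrix variant mentioned above can be sketched as a standard dynamic program. The substitution-cost callable is a placeholder; a real system would assign low cost to visually confusable OCR character pairs, and the insertion/deletion costs shown are assumptions.

```python
def weighted_edit_distance(source, target, sub_cost, ins_cost=1.0, del_cost=1.0):
    """Edit distance with a per-character-pair substitution cost, as used to
    match OCR output against a reference dictionary. sub_cost(a, b) should
    return 0 for identical characters and a small value for confusable pairs.
    """
    m, n = len(source), len(target)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = d[i - 1][0] + del_cost
    for j in range(1, n + 1):
        d[0][j] = d[0][j - 1] + ins_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(
                d[i - 1][j] + del_cost,                               # delete source char
                d[i][j - 1] + ins_cost,                               # insert target char
                d[i - 1][j - 1] + sub_cost(source[i - 1], target[j - 1]),
            )
    return d[m][n]

# Usage sketch with a trivial substitution matrix:
# sub = lambda a, b: 0.0 if a == b else 1.0
# best = min(dictionary, key=lambda w: weighted_edit_distance(ocr_word, w, sub))
```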

  11. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the 'Extreme Learning Machine' Algorithm.

    PubMed

    McDonnell, Mark D; Tissera, Migel D; Vladusich, Tony; van Schaik, André; Tapson, Jonathan

    2015-01-01

    Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the 'Extreme Learning Machine' (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden-unit operates only on a randomly sized and positioned patch of each image. This form of random 'receptive field' sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden-units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems.
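
    A compact sketch of the ELM-with-random-receptive-fields idea: sparse, untrained input weights restricted to random image patches, followed by a ridge-regression solve for the output weights. The patch size is fixed here for brevity, whereas the paper randomizes it; all parameter values and names are illustrative.

```python
import numpy as np

def elm_train(X_img, y, n_hidden=1000, patch=7, img_side=28, ridge=1e-3, seed=0):
    """Train a single-hidden-layer ELM classifier with random patch receptive fields.

    X_img: (n_samples, img_side*img_side) flattened images; y: integer labels 0..K-1.
    Input weights are random, sparse and never trained; only the output weights
    (beta) are solved for, in closed form, via ridge regression.
    """
    rng = np.random.default_rng(seed)
    n_in = img_side * img_side
    W = np.zeros((n_in, n_hidden))
    for j in range(n_hidden):
        r, c = rng.integers(0, img_side - patch, size=2)   # random patch position
        mask = np.zeros((img_side, img_side), dtype=bool)
        mask[r:r + patch, c:c + patch] = True
        W[mask.ravel(), j] = rng.normal(size=patch * patch)
    H = np.tanh(X_img @ W)                                 # hidden activations
    Y = np.eye(y.max() + 1)[y]                             # one-hot targets
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ Y)
    return W, beta

def elm_predict(X_img, W, beta):
    return np.argmax(np.tanh(X_img @ W) @ beta, axis=1)
```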

  12. A filtered backprojection algorithm for axial head motion correction in fan-beam SPECT.

    PubMed

    Li, J; Jaszczak, R J; Coleman, R E

    1995-12-01

    In this study we present an approximate, but practical, three-dimensional filtered backprojection (FBP) reconstruction algorithm in fan-beam SPECT to correct for axial motion (both translation and rotation). A one-dimensional filter kernel was applied to the projections. It is assumed that the object is rigid and that its axial motion can be characterized by three components: one-dimensional translation and yaw and pitch rotations. It is further assumed that the motions that have occurred during the SPECT acquisition have been determined separately. The determined angular-view-dependent translation/rotation parameters were incorporated into the proposed FBP algorithm to correct for multiple axial head motions. The proposed axial head motion correction algorithm was evaluated using simulated three-dimensional Hoffman brain phantom data. Projections both with axial translation and with axial rotation, and with their combinations were generated. Images of a Hoffman brain phantom reconstructed using the proposed FBP algorithm and the conventional FBP algorithm were compared. Artefacts were observed in images without motion correction, but the artefacts were greatly reduced using the proposed reconstruction algorithm.

  13. An Algorithm to Atmospherically Correct Visible and Thermal Airborne Imagery

    NASA Technical Reports Server (NTRS)

    Rickman, Doug L.; Luvall, Jeffrey C.; Schiller, Stephen; Arnold, James E. (Technical Monitor)

    2000-01-01

    The program Watts implements a system of physically based models developed by the authors, described elsewhere, for the removal of atmospheric effects in multispectral imagery. The band range we treat covers the visible, near IR and the thermal IR. Input to the program begins with atmospheric models specifying transmittance and path radiance. The system also requires the sensor's spectral response curves and knowledge of the scanner's geometric definition. Radiometric characterization of the sensor during data acquisition is also necessary. While the authors contend that active calibration is critical for serious analytical efforts, we recognize that most remote sensing systems, either airborne or spaceborne, do not as yet attain that minimal level of sophistication. Therefore, Watts will also use semi-active calibration where necessary and available. All of the input is then reduced to common physical units. From this it is then practical to convert raw sensor readings into geophysically meaningful units. There are a large number of intricate details necessary to bring an algorithm of this type to fruition and even to use the program. Further, at this stage of development the authors are uncertain as to the optimal presentation or the minimal analytical techniques that users of this type of software must have. Therefore, Watts permits users to break out and analyze the input in various ways. Implemented in REXX under OS/2, the program is designed with attention to the probability that it will be ported to other systems and other languages. Further, as it is in REXX, it is relatively simple for anyone literate in any computer language to open the code and modify it to meet their needs. The authors have employed Watts in their research addressing precision agriculture and urban heat islands.

  14. Mach-uniformity through the coupled pressure and temperature correction algorithm

    SciTech Connect

    Nerinckx, Krista . E-mail: Krista.Nerinckx@UGent.be; Vierendeels, Jan . E-mail: Jan.Vierendeels@UGent.be; Dick, Erik . E-mail: Erik.Dick@UGent.be

    2005-07-01

    We present a new type of algorithm: the coupled pressure and temperature correction algorithm. It is situated between the fully coupled and the fully segregated approaches, and is constructed such that Mach-uniform accuracy and efficiency are obtained. The essential idea is the separation of the convective and the acoustic/thermodynamic phenomena: a convective predictor is followed by an acoustic/thermodynamic corrector. For a general case, the corrector consists of a coupled solution of the energy and the continuity equations for both pressure and temperature corrections. For the special case of an adiabatic perfect gas flow, the algorithm reduces to a fully segregated method, with a pressure-correction equation based on the energy equation. Various test cases are considered, which confirm that Mach-uniformity is obtained.

  15. Assessment, Validation, and Refinement of the Atmospheric Correction Algorithm for the Ocean Color Sensors. Chapter 19

    NASA Technical Reports Server (NTRS)

    Wang, Menghua

    2003-01-01

    The primary focus of this proposed research is the evaluation and development of the atmospheric correction algorithm and satellite sensor calibration and characterization. It is well known that the atmospheric correction, which removes more than 90% of the sensor-measured signal contributed by the atmosphere in the visible, is the key procedure in ocean color remote sensing (Gordon and Wang, 1994). The accuracy and effectiveness of the atmospheric correction directly affect the remotely retrieved ocean bio-optical products. On the other hand, for ocean color remote sensing, in order to obtain the required accuracy in the water-leaving signals derived from satellite measurements, an on-orbit vicarious calibration of the whole system, i.e., sensor and algorithms, is necessary. In addition, it is important to address issues of (i) cross-calibration of two or more sensors and (ii) in-orbit vicarious calibration of the sensor-atmosphere system. The goal of this research is to develop methods for meaningful comparison and possible merging of data products from multiple ocean color missions. In the past year, much effort has been devoted to (a) understanding and correcting the artifacts that appeared in the SeaWiFS-derived ocean and atmospheric products; (b) developing an efficient method for generating the SeaWiFS aerosol lookup tables; (c) evaluating the effects of calibration error in the near-infrared (NIR) band on the atmospheric correction of ocean color remote sensors; (d) comparing the aerosol correction algorithm using the single-scattering epsilon (the current SeaWiFS algorithm) vs. the multiple-scattering epsilon method; and (e) continuing activities for the International Ocean-Color Coordinating Group (IOCCG) atmospheric correction working group. In this report, I will briefly present and discuss these and some other research activities.

  16. Multiprocessing and Correction Algorithm of 3D-models for Additive Manufacturing

    NASA Astrophysics Data System (ADS)

    Anamova, R. R.; Zelenov, S. V.; Kuprikov, M. U.; Ripetskiy, A. V.

    2016-07-01

    This article addresses matters related to additive manufacturing preparation. A layer-by-layer model presentation was developed on the basis of a routing method. Methods for correction of errors in the layer-by-layer model presentation were developed. A multiprocessing algorithm for forming an additive manufacturing batch file was realized.

  17. [Validation and analysis of water column correction algorithm at Sanya Bay].

    PubMed

    Yang, Chao-yu; Yang, Ding-tian; Ye, Hai-bin; Cao, Wen-xi

    2011-07-01

    Water column correction has been a substantial challenge for remote sensing. In order to improve the accuracy of coastal ocean monitoring where optical properties are complex, the optical properties of shallow water at Sanya Bay and suitable water column correction algorithms were studied in the present paper. The authors extracted the bottom reflectance without water column effects by using a water column correction algorithm that is based on the simulation of the underwater light field in idealized water, and compared the results calculated by this model and by Christian's model, respectively. Based on a detailed analysis, we concluded that, because the optical properties of Sanya Bay are complex and vary greatly with location, Christian's model loses its advantage in this area. Conversely, the bottom reflectance calculated by the algorithm based on the simulation of the underwater light field in idealized water agreed well with the in situ measured bottom reflectance, although the reflectance was lower than the in situ measured value between 400 and 500 nm. Therefore, it is reasonable to extract bottom information by using this water column correction algorithm in local bay areas where optical properties are complex.

  18. Brut: Automatic bubble classifier

    NASA Astrophysics Data System (ADS)

    Beaumont, Christopher; Goodman, Alyssa; Williams, Jonathan; Kendrew, Sarah; Simpson, Robert

    2014-07-01

    Brut, written in Python, identifies bubbles in infrared images of the Galactic midplane; it uses a database of known bubbles from the Milky Way Project and Spitzer images to build an automatic bubble classifier. The classifier is based on the Random Forest algorithm, and uses the WiseRF implementation of this algorithm.

  19. A FORTRAN algorithm for correcting normal resistivity logs for borehole diameter and mud resistivity

    USGS Publications Warehouse

    Scott, James Henry

    1978-01-01

    The FORTRAN algorithm described in this report was developed for applying corrections to normal resistivity logs of any electrode spacing for the effects of drilling mud of known resistivity in boreholes of variable diameter. The corrections are based on Schlumberger departure curves that are applicable to normal logs made with a standard Schlumberger electric logging probe with an electrode diameter of 8.5 cm (3.35 in). The FORTRAN algorithm has been generalized to accommodate logs made with other probes with different electrode diameters. Two simplifying assumptions used by Schlumberger in developing the departure curves also apply to the algorithm: (1) bed thickness is assumed to be infinite (at least 10 times larger than the electrode spacing), and (2) invasion of drilling mud into the formation is assumed to be negligible. * The use of a trade name does not necessarily constitute endorsement by the U.S. Geological Survey.

  20. FORTRAN algorithm for correcting normal resistivity logs for borehole diameter and mud resistivity

    SciTech Connect

    Scott, J H

    1983-01-01

    The FORTRAN algorithm described was developed for applying corrections to normal resistivity logs of any electrode spacing for the effects of drilling mud of known resistivity in boreholes of variable diameter. The corrections are based on Schlumberger departure curves that are applicable to normal logs made with a standard Schlumberger electric logging probe with an electrode diameter of 8.5 cm (3.35 in). The FORTRAN algorithm has been generalized to accommodate logs made with other probes with different electrode diameters. Two simplifying assumptions used by Schlumberger in developing the departure curves also apply to the algorithm: (1) bed thickness is assumed to be infinite (at least 10 times larger than the electrode spacing), and (2) invasion of drilling mud into the formation is assumed to be negligible.

  1. A finite size pencil beam algorithm for IMRT dose optimization: density corrections.

    PubMed

    Jeleń, U; Alber, M

    2007-02-01

    For beamlet-based IMRT optimization, fast and less accurate dose computation algorithms are frequently used, while more accurate algorithms are needed to recompute the final dose for verification. In order to speed up the optimization process and ensure close proximity between dose in optimization and verification, proper consideration of dose gradients and tissue inhomogeneity effects should be ensured at every stage of the optimization. Due to their speed, pencil beam algorithms are often used for precalculation of beamlet dose distributions in IMRT treatment planning systems. However, accounting for tissue heterogeneities with these models requires the use of approximate rescaling methods. Recently, a finite size pencil beam (fsPB) algorithm, based on a simple and small set of data, was proposed which was specifically designed for the purpose of dose pre-computation in beamlet-based IMRT. The present work describes the incorporation of 3D density corrections, based on Monte Carlo simulations in heterogeneous phantoms, into this method improving the algorithm accuracy in inhomogeneous geometries while keeping its original speed and simplicity of commissioning. The algorithm affords the full accuracy of 3D density corrections at every stage of the optimization, hence providing the means for density related fluence modulation like penumbra shaping at field edges. PMID:17228109

  2. Evaluation of two Vaisala RS92 radiosonde solar radiative dry bias correction algorithms

    DOE PAGES

    Dzambo, Andrew M.; Turner, David D.; Mlawer, Eli J.

    2016-04-12

    Solar heating of the relative humidity (RH) probe on Vaisala RS92 radiosondes results in a large dry bias in the upper troposphere. Two different algorithms (Miloshevich et al., 2009, MILO hereafter; and Wang et al., 2013, WANG hereafter) have been designed to account for this solar radiative dry bias (SRDB). These corrections are markedly different with MILO adding up to 40 % more moisture to the original radiosonde profile than WANG; however, the impact of the two algorithms varies with height. The accuracy of these two algorithms is evaluated using three different approaches: a comparison of precipitable water vapor (PWV), downwelling radiative closure with a surface-based microwave radiometer at a high-altitude site (5.3 km m.s.l.), and upwelling radiative closure with the space-based Atmospheric Infrared Sounder (AIRS). The PWV computed from the uncorrected and corrected RH data is compared against PWV retrieved from ground-based microwave radiometers at tropical, midlatitude, and arctic sites. Although MILO generally adds more moisture to the original radiosonde profile in the upper troposphere compared to WANG, both corrections yield similar changes to the PWV, and the corrected data agree well with the ground-based retrievals. The two closure activities – done for clear-sky scenes – use the radiative transfer models MonoRTM and LBLRTM to compute radiance from the radiosonde profiles to compare against spectral observations. Both WANG- and MILO-corrected RHs are statistically better than original RH in all cases except for the driest 30 % of cases in the downwelling experiment, where both algorithms add too much water vapor to the original profile. In the upwelling experiment, the RH correction applied by the WANG vs. MILO algorithm is statistically different above 10 km for the driest 30 % of cases and above 8 km for the moistest 30 % of cases, suggesting that the MILO correction performs better than the WANG in clear-sky scenes.
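
    The PWV comparison above reduces to integrating water vapor density through the (corrected) humidity profile. A minimal sketch of one common way to compute PWV from a sounding, using the Magnus formula for saturation vapor pressure (the arrays below are hypothetical; this is not the retrieval code used in the study):

    ```python
    import numpy as np

    RV = 461.5       # J kg^-1 K^-1, gas constant for water vapor
    RHO_W = 1000.0   # kg m^-3, density of liquid water

    def pwv_mm(z_m, t_k, rh_pct):
        """Precipitable water vapor (mm) from height (m), temperature (K), RH (%)."""
        t_c = t_k - 273.15
        e_sat = 611.2 * np.exp(17.67 * t_c / (t_c + 243.5))   # Pa (Magnus formula)
        e = rh_pct / 100.0 * e_sat                             # vapor pressure, Pa
        rho_v = e / (RV * t_k)                                 # vapor density, kg m^-3
        # Trapezoidal column integral of vapor density (kg m^-2).
        column = np.sum(0.5 * (rho_v[1:] + rho_v[:-1]) * np.diff(z_m))
        return 1000.0 * column / RHO_W                         # depth of liquid water, mm

    # Toy three-level profile (hypothetical values)
    print(pwv_mm(np.array([0.0, 2000.0, 8000.0]),
                 np.array([290.0, 280.0, 240.0]),
                 np.array([70.0, 50.0, 20.0])))
    ```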

  3. Evaluation of two Vaisala RS92 radiosonde solar radiative dry bias correction algorithms

    NASA Astrophysics Data System (ADS)

    Dzambo, A. M.; Turner, D. D.; Mlawer, E. J.

    2015-10-01

    Solar heating of the relative humidity (RH) probe on Vaisala RS92 radiosondes results in a large dry bias in the upper troposphere. Two different algorithms (Miloshevich et al., 2009, MILO hereafter; and Wang et al., 2013, WANG hereafter) have been designed to account for this solar radiative dry bias (SRDB). These corrections are markedly different with MILO adding up to 40 % more moisture to the original radiosonde profile than WANG; however, the impact of the two algorithms varies with height. The accuracy of these two algorithms is evaluated using three different approaches: a comparison of precipitable water vapor (PWV), downwelling radiative closure with a surface-based microwave radiometer at a high-altitude site (5.3 km MSL), and upwelling radiative closure with the space-based Atmospheric Infrared Sounder (AIRS). The PWV computed from the uncorrected and corrected RH data is compared against PWV retrieved from ground-based microwave radiometers at tropical, mid-latitude, and arctic sites. Although MILO generally adds more moisture to the original radiosonde profile in the upper troposphere compared to WANG, both corrections yield similar changes to the PWV, and the corrected data agree well with the ground-based retrievals. The two closure activities - done for clear-sky scenes - use the radiative transfer models MonoRTM and LBLRTM to compute radiance from the radiosonde profiles to compare against spectral observations. Both WANG- and MILO-corrected RH are statistically better than original RH in all cases except for the driest 30 % of cases in the downwelling experiment, where both algorithms add too much water vapor to the original profile. In the upwelling experiment, the RH correction applied by the WANG vs. MILO algorithm is statistically different above 10 km for the driest 30 % of cases and above 8 km for the moistest 30 % of cases, suggesting that the MILO correction performs better than the WANG in clear-sky scenes. The cause of this statistical

  4. Evaluation of two Vaisala RS92 radiosonde solar radiative dry bias correction algorithms

    NASA Astrophysics Data System (ADS)

    Dzambo, Andrew M.; Turner, David D.; Mlawer, Eli J.

    2016-04-01

    Solar heating of the relative humidity (RH) probe on Vaisala RS92 radiosondes results in a large dry bias in the upper troposphere. Two different algorithms (Miloshevich et al., 2009, MILO hereafter; and Wang et al., 2013, WANG hereafter) have been designed to account for this solar radiative dry bias (SRDB). These corrections are markedly different with MILO adding up to 40 % more moisture to the original radiosonde profile than WANG; however, the impact of the two algorithms varies with height. The accuracy of these two algorithms is evaluated using three different approaches: a comparison of precipitable water vapor (PWV), downwelling radiative closure with a surface-based microwave radiometer at a high-altitude site (5.3 km m.s.l.), and upwelling radiative closure with the space-based Atmospheric Infrared Sounder (AIRS). The PWV computed from the uncorrected and corrected RH data is compared against PWV retrieved from ground-based microwave radiometers at tropical, midlatitude, and arctic sites. Although MILO generally adds more moisture to the original radiosonde profile in the upper troposphere compared to WANG, both corrections yield similar changes to the PWV, and the corrected data agree well with the ground-based retrievals. The two closure activities - done for clear-sky scenes - use the radiative transfer models MonoRTM and LBLRTM to compute radiance from the radiosonde profiles to compare against spectral observations. Both WANG- and MILO-corrected RHs are statistically better than original RH in all cases except for the driest 30 % of cases in the downwelling experiment, where both algorithms add too much water vapor to the original profile. In the upwelling experiment, the RH correction applied by the WANG vs. MILO algorithm is statistically different above 10 km for the driest 30 % of cases and above 8 km for the moistest 30 % of cases, suggesting that the MILO correction performs better than the WANG in clear-sky scenes. The cause of this

  5. Cardamine occulta, the correct species name for invasive Asian plants previously classified as C. flexuosa, and its occurrence in Europe.

    PubMed

    Marhold, Karol; Šlenker, Marek; Kudoh, Hiroshi; Zozomová-Lihová, Judita

    2016-01-01

    The nomenclature of Eastern Asian populations traditionally assigned to Cardamine flexuosa has remained unresolved since 2006, when they were found to be distinct from the European species Cardamine flexuosa. Apart from the informal designation "Asian Cardamine flexuosa", this taxon has also been reported under the names Cardamine flexuosa subsp. debilis or Cardamine hamiltonii. Here we determine its correct species name to be Cardamine occulta and present a nomenclatural survey of all relevant species names. A lectotype and epitype for Cardamine occulta and a neotype for the illegitimate name Cardamine debilis (replaced by Cardamine flexuosa subsp. debilis and Cardamine hamiltonii) are designated here. Cardamine occulta is a polyploid weed that most likely originated in Eastern Asia, but it has also been introduced to other continents, including Europe. Here data is presented on the first records of this invasive species in European countries. The first known record for Europe was made in Spain in 1993, and since then its occurrence has been reported from a number of European countries and regions as growing in irrigated anthropogenic habitats, such as paddy fields or flower beds, and exceptionally also in natural communities such as lake shores.

  6. Cardamine occulta, the correct species name for invasive Asian plants previously classified as C. flexuosa, and its occurrence in Europe

    PubMed Central

    Marhold, Karol; Šlenker, Marek; Kudoh, Hiroshi; Zozomová-Lihová, Judita

    2016-01-01

    Abstract The nomenclature of Eastern Asian populations traditionally assigned to Cardamine flexuosa has remained unresolved since 2006, when they were found to be distinct from the European species Cardamine flexuosa. Apart from the informal designation “Asian Cardamine flexuosa”, this taxon has also been reported under the names Cardamine flexuosa subsp. debilis or Cardamine hamiltonii. Here we determine its correct species name to be Cardamine occulta and present a nomenclatural survey of all relevant species names. A lectotype and epitype for Cardamine occulta and a neotype for the illegitimate name Cardamine debilis (replaced by Cardamine flexuosa subsp. debilis and Cardamine hamiltonii) are designated here. Cardamine occulta is a polyploid weed that most likely originated in Eastern Asia, but it has also been introduced to other continents, including Europe. Here data is presented on the first records of this invasive species in European countries. The first known record for Europe was made in Spain in 1993, and since then its occurrence has been reported from a number of European countries and regions as growing in irrigated anthropogenic habitats, such as paddy fields or flower beds, and exceptionally also in natural communities such as lake shores. PMID:27212882

  7. An Automatic and Power Spectra-based Rotate Correcting Algorithm for Microarray Image.

    PubMed

    Deng, Ning; Duan, Huilong

    2005-01-01

    Microarray image analysis, an important aspect of microarray technology, involves processing vast amounts of data. At present, the speed of microarray image analysis is severely limited by excessive manual intervention. The geometric structure of a microarray requires that the image be aligned with the scanning (vertical) orientation during analysis; if the image is rotated or tilted, the analysis result may be incorrect. Although some automatic image analysis algorithms are used for microarrays, few methods have been reported for correcting rotation in microarray images. In this paper, an automatic rotation-correction algorithm is presented that addresses the tilt problem of microarray images. The method is based on image power spectra. Tested on hundreds of clinical samples, the algorithm achieves high precision. As a result, adopting this algorithm allows the overall microarray image analysis procedure to be automated.
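
    Because the printed spot grid is periodic, a tilt of the array shows up as a rotation of the peak pattern in the 2-D power spectrum. The sketch below is a rough, hypothetical illustration of that idea rather than the authors' exact procedure: it scores candidate angles by how sharply the rotated spectrum's row and column sums peak.

    ```python
    import numpy as np
    from scipy import ndimage

    def estimate_tilt(img, angles=np.arange(-5.0, 5.05, 0.1)):
        """Estimate the grid tilt angle (degrees) from the image power spectrum."""
        spec = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2)
        best_angle, best_score = 0.0, -np.inf
        for a in angles:
            rot = ndimage.rotate(spec, a, reshape=False, order=1)
            # When the spectral peaks line up with the axes, the projection
            # profiles become strongly peaked, i.e. their variance is maximal.
            score = np.var(rot.sum(axis=0)) + np.var(rot.sum(axis=1))
            if score > best_score:
                best_angle, best_score = a, score
        # The sign convention for the corrective rotation depends on the
        # rotation routine used downstream.
        return best_angle
    ```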

  8. Experimental Evaluation of a Deformable Registration Algorithm for Motion Correction in PET-CT Guided Biopsy

    PubMed Central

    Khare, Rahul; Sala, Guillaume; Kinahan, Paul; Esposito, Giuseppe; Banovac, Filip; Cleary, Kevin; Enquobahrie, Andinet

    2015-01-01

    Positron emission tomography computed tomography (PET-CT) images are increasingly being used for guidance during percutaneous biopsy. However, due to the physics of image acquisition, PET-CT images are susceptible to problems due to respiratory and cardiac motion, leading to inaccurate tumor localization, shape distortion, and attenuation correction. To address these problems, we present a method for motion correction that relies on respiratory gated CT images aligned using a deformable registration algorithm. In this work, we use two deformable registration algorithms and two optimization approaches for registering the CT images obtained over the respiratory cycle. The two algorithms are the BSpline and the symmetric forces Demons registration. In the first optimization approach, CT images at each time point are registered to a single reference time point. In the second approach, deformation maps are obtained to align each CT time point with its adjacent time point. These deformations are then composed to find the deformation with respect to a reference time point. We evaluate these two algorithms and optimization approaches using respiratory gated CT images obtained from 7 patients. Our results show that overall the BSpline registration algorithm with the reference optimization approach gives the best results. PMID:25717283

  9. How far can we push quantum variational algorithms without error correction?

    NASA Astrophysics Data System (ADS)

    Babbush, Ryan

    Recent work has shown that parameterized short quantum circuits can generate powerful variational ansatze for ground states of classically intractable fermionic models. This talk will present numerical and experimental evidence that quantum variational algorithms are also robust to certain errors which plague the gate model. As the number of qubits in superconducting devices keeps increasing, their dynamics are becoming prohibitively expensive to simulate classically. Accordingly, our observations should inspire hope that quantum computers could provide useful insight into important problems in the near future. This talk will conclude by discussing future research directions which could elucidate the viability of executing quantum variational algorithms on classically intractable problems without error correction.

  10. A correction algorithm for particle size distribution measurements made with the forward-scattering spectrometer probe

    NASA Technical Reports Server (NTRS)

    Lock, James A.; Hovenac, Edward A.

    1989-01-01

    A correction algorithm for evaluating the particle size distribution measurements of atmospheric aerosols obtained with a forward-scattering spectrometer probe (FSSP) is examined. A model based on Poisson statistics is employed to calculate the average diameter and rms width of the particle size distribution. The dead time and coincidence errors in the measured number density are estimated. The model generated data are compared with a Monte Carlo simulation of the FSSP operation. It is observed that the correlation between the actual and measured size distribution is nonlinear. It is noted that the algorithm permits more accurate calculation of the average diameter and rms width of the distribution compared to uncorrected measured quantities.
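
    The dead-time and coincidence effects mentioned above can be illustrated with a standard Poisson counting model (a generic sketch, not the authors' FSSP-specific formulation): for a non-paralyzable counter with dead time tau, the measured rate m and true rate n are related by m = n / (1 + n*tau), which inverts to n = m / (1 - m*tau), and the chance of two or more particles arriving within a sampling window follows directly from Poisson statistics.

    ```python
    import math

    def true_rate(measured_rate, dead_time):
        """Non-paralyzable dead-time correction: n = m / (1 - m * tau)."""
        loss = measured_rate * dead_time
        if loss >= 1.0:
            raise ValueError("measured rate saturates this counter model")
        return measured_rate / (1.0 - loss)

    def coincidence_probability(rate_hz, window_s):
        """Poisson probability that two or more particles arrive in one window."""
        mu = rate_hz * window_s
        return 1.0 - math.exp(-mu) * (1.0 + mu)

    # Example: 50 kHz measured with a 2 microsecond dead time (hypothetical numbers)
    n = true_rate(5.0e4, 2.0e-6)
    print(n, coincidence_probability(n, 2.0e-6))
    ```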

  11. Empirical evaluation of bias field correction algorithms for computer-aided detection of prostate cancer on T2w MRI

    NASA Astrophysics Data System (ADS)

    Viswanath, Satish; Palumbo, Daniel; Chappelow, Jonathan; Patel, Pratik; Bloch, B. Nicholas; Rofsky, Neil; Lenkinski, Robert; Genega, Elizabeth; Madabhushi, Anant

    2011-03-01

    In magnetic resonance imaging (MRI), intensity inhomogeneity refers to an acquisition artifact which introduces a non-linear variation in the signal intensities within the image. Intensity inhomogeneity is known to significantly affect computerized analysis of MRI data (such as automated segmentation or classification procedures), hence requiring the application of bias field correction (BFC) algorithms to account for this artifact. Quantitative evaluation of BFC schemes is typically performed using generalized intensity-based measures (percent coefficient of variation, %CV ) or information-theoretic measures (entropy). While some investigators have previously empirically compared BFC schemes in the context of different domains (using changes in %CV and entropy to quantify improvements), no consensus has emerged as to the best BFC scheme for any given application. The motivation for this work is that the choice of a BFC scheme for a given application should be dictated by application-specific measures rather than ad hoc measures such as entropy and %CV. In this paper, we have attempted to address the problem of determining an optimal BFC algorithm in the context of a computer-aided diagnosis (CAD) scheme for prostate cancer (CaP) detection from T2-weighted (T2w) MRI. One goal of this work is to identify a BFC algorithm that will maximize the CaP classification accuracy (measured in terms of the area under the ROC curve or AUC). A secondary aim of our work is to determine whether measures such as %CV and entropy are correlated with a classifier-based objective measure (AUC). Determining the presence or absence of these correlations is important to understand whether domain independent BFC performance measures such as %CV , entropy should be used to identify the optimal BFC scheme for any given application. In order to answer these questions, we quantitatively compared 3 different popular BFC algorithms on a cohort of 10 clinical 3 Tesla prostate T2w MRI datasets
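
    The two domain-independent measures referred to above have simple definitions: the percent coefficient of variation of intensities inside a region, %CV = 100 * sigma / mu, and the Shannon entropy of the intensity histogram. A minimal sketch of both (assuming a binary foreground mask; not tied to any particular BFC scheme):

    ```python
    import numpy as np

    def percent_cv(image, mask):
        """Percent coefficient of variation of intensities inside the mask."""
        vals = image[mask]
        return 100.0 * vals.std() / vals.mean()

    def intensity_entropy(image, mask, bins=256):
        """Shannon entropy (bits) of the intensity histogram inside the mask."""
        hist, _ = np.histogram(image[mask], bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    # Lower %CV and entropy after bias field correction are commonly read as
    # reduced intensity inhomogeneity within a homogeneous tissue region.
    ```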

  12. Performance evaluation of operational atmospheric correction algorithms over the East China Seas

    NASA Astrophysics Data System (ADS)

    He, Shuangyan; He, Mingxia; Fischer, Jürgen

    2016-04-01

    To acquire high-quality operational data products for Chinese in-orbit and scheduled ocean color sensors, the performances of two operational atmospheric correction (AC) algorithms (ESA MEGS 7.4.1 and NASA SeaDAS 6.1) were evaluated over the East China Seas (ECS) using MERIS data. The spectral remote sensing reflectance Rrs(λ), aerosol optical thickness (AOT), and Ångström exponent (α) retrieved using the two algorithms were validated using in situ measurements obtained between May 2002 and October 2009. Match-ups of Rrs, AOT, and α between the in situ and MERIS data were obtained through strict exclusion criteria. Statistical analysis of Rrs(λ) showed a mean percentage difference (MPD) of 9%-13% in the 490-560 nm spectral range, and significant overestimation was observed at 413 nm (MPD>72%). The AOTs were overestimated (MPD>32%), and although the ESA algorithm outperformed the NASA algorithm in the blue-green bands, the situation was reversed in the red-near-infrared bands. The value of α was clearly underestimated by the ESA algorithm (MPD=41%) but not by the NASA algorithm (MPD=35%). To clarify why the NASA algorithm performed better in the retrieval of α, α-single scattering albedo (SSA) density scatter plots were prepared. These α-SSA density scatter plots showed that the applicability of the aerosol models used by the NASA algorithm over the ECS is better than that used by the ESA algorithm, although neither aerosol model is suitable for the ECS region. The results of this study provide a reference to both data users and data agencies regarding the use of operational data products and the investigation into the improvement of current AC schemes over the ECS.
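
    The match-up statistic quoted above, mean percentage difference (MPD), is typically the mean of |satellite - in situ| / |in situ| over all match-up pairs (one common definition; the paper's exact formula may differ slightly). A minimal sketch:

    ```python
    import numpy as np

    def mean_percentage_difference(satellite, in_situ):
        """MPD (%) between matched satellite retrievals and in situ values."""
        satellite = np.asarray(satellite, dtype=float)
        in_situ = np.asarray(in_situ, dtype=float)
        return 100.0 * np.mean(np.abs(satellite - in_situ) / np.abs(in_situ))

    # Hypothetical Rrs(490) match-ups (sr^-1)
    print(mean_percentage_difference([0.0110, 0.0098, 0.0123],
                                     [0.0100, 0.0105, 0.0115]))
    ```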

  13. A simplified algorithm for correcting both errors and erasures of R-S codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1978-01-01

    Using the finite field transform and continued fractions, a simplified algorithm for decoding Reed-Solomon (R-S) codes is developed to correct erasures caused by other codes as well as errors over the finite field GF(q^m), where q is a prime and m is an integer. Such an R-S decoder can be faster and simpler than a decoder that uses more conventional methods.

  14. Simplified algorithm for correcting both errors and erasures of Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.; Miller, R. L.

    1979-01-01

    Using a finite-field transform, a simplified algorithm for decoding Reed-Solomon codes is developed to correct erasures as well as errors over the finite field GF(q^m), where q is a prime and m is an integer. If the finite-field transform is a fast transform, this decoder can be faster and simpler than a decoder that uses more conventional methods.

  15. Evaluation and Analysis of Seasat a Scanning Multichannel Microwave Radiometer (SMMR) Antenna Pattern Correction (APC) Algorithm

    NASA Technical Reports Server (NTRS)

    Kitzis, S. N.; Kitzis, J. L.

    1979-01-01

    The accuracy of the SEASAT-A SMMR antenna pattern correction (APC) algorithm was assessed. Interim APC brightness temperature measurements for the SMMR 6.6 GHz channels are compared with surface truth derived sea surface temperatures. Plots and associated statistics are presented for SEASAT-A SMMR data acquired for the Gulf of Alaska experiment. The cross-track gradients observed in the 6.6 GHz brightness temperature data are discussed.

  16. [A quick atmospheric correction method for HJ-1 CCD with the deep blue algorithm].

    PubMed

    Wang, Zhong-Ting; Wang, Hong-Mei; Li, Qing; Zhao, Shao-Hua; Li, Shen-Shen; Chen, Liang-Fu

    2014-03-01

    In this work, a new atmospheric correction method for the HJ-1 CCD camera was developed that can be used over vegetation, soil, and similar surfaces. The method first retrieves aerosol optical depth (AOD) with the deep blue algorithm developed by Hsu et al., assisted by a MODerate-resolution Imaging Spectroradiometer (MODIS) surface reflectance database, applies a bidirectional reflectance distribution function (BRDF) correction with a kernel-driven model, and computes the viewing geometry from auxiliary data. When the CCD data are processed to correct for atmospheric influence, the correction is completed quickly by means of a look-up table (LUT) with bilinear interpolation, gridded calculation of atmospheric parameters, and matrix operations in the Interactive Data Language (IDL). An experiment over the North China Plain on July 3rd, 2012 shows that the atmospheric influence was corrected well and quickly by this method (one 1 GB CCD image can be corrected in eight minutes), and the corrected reflectance over vegetation and soil was close to the spectra of vegetation and soil. A comparison with the MODIS reflectance product shows that, owing to its higher resolution, the corrected HJ-1 reflectance image is finer than that of MODIS, and the correlation coefficient of the reflectance over typical surfaces is greater than 0.9. Error analysis shows that misidentification of the aerosol type leads to an absolute error of 0.05 in surface reflectance in the near-infrared band, which is larger than that in the visible bands, and that a 0.02 error in the reflectance database leads to an absolute error of 0.01 in the atmospherically corrected surface reflectance in the green and red bands. PMID:25208402
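
    The speed of such a scheme comes from pre-tabulating atmospheric quantities and interpolating them per pixel. The sketch below is a hypothetical illustration of that step only: the LUT axes, the random table values, and the 6S-style inversion formula are illustrative assumptions, not the paper's actual LUT or code.

    ```python
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Hypothetical LUT axes: aerosol optical depth and solar zenith angle (deg).
    aod_axis = np.array([0.0, 0.2, 0.4, 0.8])
    sza_axis = np.array([0.0, 20.0, 40.0, 60.0])
    rng = np.random.default_rng(1)
    tables = {"rho_path": rng.uniform(0.01, 0.10, (4, 4)),   # path reflectance
              "trans": rng.uniform(0.60, 0.95, (4, 4)),      # total transmittance
              "s": rng.uniform(0.05, 0.20, (4, 4))}          # spherical albedo
    interp = {k: RegularGridInterpolator((aod_axis, sza_axis), v)
              for k, v in tables.items()}

    def surface_reflectance(rho_toa, aod, sza):
        """Invert TOA reflectance with a 6S-style formula (illustrative only)."""
        pt = (aod, sza)
        y = (rho_toa - interp["rho_path"](pt)) / interp["trans"](pt)
        return y / (1.0 + interp["s"](pt) * y)

    print(surface_reflectance(0.12, 0.3, 35.0))
    ```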

  17. Scene-based nonuniformity correction algorithm for MEMS-based un-cooled IR image system

    NASA Astrophysics Data System (ADS)

    Dong, Liquan; Liu, Xiaohua; Zhao, Yuejin; Hui, Mei; Zhou, Xiaoxiao

    2009-08-01

    Almost two years after the investors in Sarcon Microsystems pulled the plug, micro-cantilever-array-based uncooled IR detector technology is again attracting more and more attention because of its low cost and high reliability. An uncooled thermal detector array with low NETD is designed and fabricated using MEMS bimaterial microcantilever structures that bend in response to thermal change. The IR images of objects obtained by these FPAs are read out by an optical method. For these IR images, fixed-pattern noise (FPN) is a major problem, complicated by the fact that the response of each FPA detector changes due to a variety of factors, causing the nonuniformity pattern to drift slowly in time. The nonuniformity therefore has to be removed. A scene-based nonuniformity correction algorithm is discussed in this paper; compared with traditional calibration-based and other scene-based techniques, it achieves better correction performance and better MSE. Extensive computation and analysis were carried out by applying the algorithm to simulated data and to real infrared scene data. The experimental results demonstrate that images corrected by this algorithm not only yield the highest Peak Signal-to-Noise Ratio (PSNR) values but also achieve the best visual quality.
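
    As a rough illustration of scene-based NUC in general (a constant-statistics sketch, not the algorithm evaluated in the paper): if the scene moves enough over a sequence of frames, every detector sees roughly the same temporal statistics, so per-pixel gain and offset can be chosen to equalize each pixel's temporal mean and standard deviation.

    ```python
    import numpy as np

    def constant_statistics_nuc(frames):
        """Per-pixel gain/offset correction from a (T, H, W) stack of frames."""
        frames = np.asarray(frames, dtype=float)
        pix_mean = frames.mean(axis=0)
        pix_std = frames.std(axis=0) + 1e-12
        gain = pix_std.mean() / pix_std             # equalize responsivity
        offset = pix_mean.mean() - gain * pix_mean  # equalize dark level
        corrected = gain * frames + offset
        return corrected, gain, offset

    # After correction every pixel's temporal mean equals the global mean and
    # its temporal standard deviation equals the global average deviation.
    ```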

  18. A DSP-based neural network non-uniformity correction algorithm for IRFPA

    NASA Astrophysics Data System (ADS)

    Liu, Chong-liang; Jin, Wei-qi; Cao, Yang; Liu, Xiu

    2009-07-01

    An effective DSP-based neural network non-uniformity correction (NUC) algorithm is proposed in this paper. The non-uniform response of infrared focal plane array (IRFPA) detectors produces corrupted images with fixed-pattern noise (FPN). We introduce and analyze the artificial-neural-network scene-based non-uniformity correction (SBNUC) algorithm. The design of a DSP-based NUC development platform for IRFPA is described. The hardware platform has low power consumption, with the 32-bit fixed-point DSP TMS320DM643 as the core processor. The dependability and extensibility of the software are improved by the DSP/BIOS real-time operating system and Reference Framework 5. To achieve real-time performance, the calibration-parameter update is assigned a lower task priority than video input and output in DSP/BIOS, so that updating the calibration parameters does not affect the video stream. The workflow of the system and the strategy for real-time operation are introduced. Experiments on real infrared imaging sequences demonstrate that this algorithm requires only a few frames to obtain high-quality corrections. It is computationally efficient and suitable for all kinds of non-uniformity.

  19. A new algorithm and results of ionospheric delay correction for satellite-based augmentation system

    NASA Astrophysics Data System (ADS)

    Huang, Z.; Yuan, H.

    Ionospheric delay, resulting from radio signals traveling through the ionosphere, is the largest source of error for single-frequency users of the Global Positioning System (GPS). In order to improve users' position accuracy, satellite-based augmentation systems have been developed since the nineties to provide accurate calibration. A well-known one is the Wide Area Augmentation System (WAAS), which is aimed at improving navigation over the conterminous United States and has been operating successfully so far. The main idea of the ionospheric correction algorithm for WAAS is to establish an ionospheric grid model: the ionosphere is discretized into a set of regularly spaced intervals in latitude and longitude at an altitude of 350 km above the Earth's surface. Users correct their pseudoranges by interpolating estimates of the vertical ionospheric delay modeled at the ionospheric grid points. The Chinese crustal deformation monitoring network has been established since the eighties and is now in good operation with 25 permanent GPS stations, which makes it feasible to construct a similar satellite-based augmentation system (SBAS) in China. For the western region of China, however, the distribution of stations is relatively sparse and does not ensure sufficient data. If the ionospheric grid correction algorithm is followed directly, some grid points cannot obtain estimates and lose availability; consequently, the ionospheric correction for users situated in that region cannot be estimated, which constitutes a fatal threat to navigation users. In this paper, we present a new algorithm that
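
    In the WAAS-style grid scheme described above, the vertical delay at a user's ionospheric pierce point is bilinearly interpolated from the four surrounding grid points and then scaled to a slant delay by the standard thin-shell obliquity factor. A minimal sketch of that interpolation (grid spacing and delay values hypothetical; edge handling omitted):

    ```python
    import numpy as np

    RE_KM, H_ION_KM = 6378.0, 350.0

    def slant_delay(lat_ipp, lon_ipp, grid_lats, grid_lons, grid_delays, elev_deg):
        """Bilinear interpolation of vertical grid delays plus obliquity scaling.

        grid_lats, grid_lons: ascending 1-D arrays of grid node coordinates (deg).
        grid_delays: 2-D array of vertical delays (m), indexed [lat, lon].
        """
        i = np.searchsorted(grid_lats, lat_ipp) - 1
        j = np.searchsorted(grid_lons, lon_ipp) - 1
        x = (lon_ipp - grid_lons[j]) / (grid_lons[j + 1] - grid_lons[j])
        y = (lat_ipp - grid_lats[i]) / (grid_lats[i + 1] - grid_lats[i])
        w = np.array([(1 - x) * (1 - y), x * (1 - y), (1 - x) * y, x * y])
        d = np.array([grid_delays[i, j], grid_delays[i, j + 1],
                      grid_delays[i + 1, j], grid_delays[i + 1, j + 1]])
        vertical = float(w @ d)
        # Standard thin-shell obliquity (mapping) factor.
        cos_e = np.cos(np.radians(elev_deg))
        obliquity = 1.0 / np.sqrt(1.0 - (RE_KM * cos_e / (RE_KM + H_ION_KM)) ** 2)
        return vertical * obliquity
    ```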

  20. Reconstruction algorithm for polychromatic CT imaging: application to beam hardening correction

    NASA Technical Reports Server (NTRS)

    Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.

    2000-01-01

    This paper presents a new reconstruction algorithm for both single- and dual-energy computed tomography (CT) imaging. By incorporating the polychromatic characteristics of the X-ray beam into the reconstruction process, the algorithm is capable of eliminating beam hardening artifacts. The single energy version of the algorithm assumes that each voxel in the scan field can be expressed as a mixture of two known substances, for example, a mixture of trabecular bone and marrow, or a mixture of fat and flesh. These assumptions are easily satisfied in a quantitative computed tomography (QCT) setting. We have compared our algorithm to three commonly used single-energy correction techniques. Experimental results show that our algorithm is much more robust and accurate. We have also shown that QCT measurements obtained using our algorithm are five times more accurate than that from current QCT systems (using calibration). The dual-energy mode does not require any prior knowledge of the object in the scan field, and can be used to estimate the attenuation coefficient function of unknown materials. We have tested the dual-energy setup to obtain an accurate estimate for the attenuation coefficient function of K2HPO4 solution.

  1. Minimizing light absorption measurement artifacts of the Aethalometer: evaluation of five correction algorithms

    NASA Astrophysics Data System (ADS)

    Collaud Coen, M.; Weingartner, E.; Apituley, A.; Ceburnis, D.; Flentje, H.; Henzing, J. S.; Jennings, S. G.; Moerman, M.; Petzold, A.; Schmidhauser, R.; Schmid, O.; Baltensperger, U.

    2009-07-01

    The aerosol light absorption coefficient is an essential parameter involved in atmospheric radiation budget calculations. The Aethalometer (AE) has the great advantage of measuring the aerosol light absorption coefficient at several wavelengths, but the derived absorption coefficients are systematically too high when compared to reference methods. Up to now, four different correction algorithms of the AE absorption coefficients have been proposed by several authors. A new correction scheme based on these previously published methods has been developed, which accounts for the optical properties of the aerosol particles embedded in the filter. All the corrections have been tested on six datasets representing different aerosol types and loadings and include multi-wavelength AE and white-light AE. All the corrections have also been evaluated through comparison with a Multi-Angle Absorption Photometer (MAAP) for four datasets lasting between 6 months and five years. The modification of the wavelength dependence by the different corrections is analyzed in detail. The performances and the limits of all AE corrections are determined and recommendations are given.

  2. Minimizing light absorption measurement artifacts of the Aethalometer: evaluation of five correction algorithms

    NASA Astrophysics Data System (ADS)

    Collaud Coen, M.; Weingartner, E.; Apituley, A.; Ceburnis, D.; Fierz-Schmidhauser, R.; Flentje, H.; Henzing, J. S.; Jennings, S. G.; Moerman, M.; Petzold, A.; Schmid, O.; Baltensperger, U.

    2010-04-01

    The aerosol light absorption coefficient is an essential parameter involved in atmospheric radiation budget calculations. The Aethalometer (AE) has the great advantage of measuring the aerosol light absorption coefficient at several wavelengths, but the derived absorption coefficients are systematically too high when compared to reference methods. Up to now, four different correction algorithms of the AE absorption coefficients have been proposed by several authors. A new correction scheme based on these previously published methods has been developed, which accounts for the optical properties of the aerosol particles embedded in the filter. All the corrections have been tested on six datasets representing different aerosol types and loadings and include multi-wavelength AE and white-light AE. All the corrections have also been evaluated through comparison with a Multi-Angle Absorption Photometer (MAAP) for four datasets lasting between 6 months and five years. The modification of the wavelength dependence by the different corrections is analyzed in detail. The performances and the limits of all AE corrections are determined and recommendations are given.

  3. A baseline correction algorithm for Raman spectroscopy by adaptive knots B-spline

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Fan, Xian-guang; Xu, Ying-jie; Wang, Xiu-fen; He, Hao; Zuo, Yong

    2015-11-01

    Raman spectroscopy is a powerful and non-invasive technique for molecular fingerprint detection that has been widely used in many areas, such as food safety, drug safety, and environmental testing. However, Raman signals are easily corrupted by a fluorescent background, so in this paper we present a baseline correction algorithm to suppress it. In this algorithm, the background of the Raman signal is suppressed by fitting a curve, called the baseline, using a cyclic approximation method. Instead of traditional polynomial fitting, we use a B-spline as the fitting function because its low order and smoothness effectively avoid under-fitting and over-fitting. In addition, we present an automatic adaptive knot generation method to replace traditional uniform knots. The algorithm achieves the desired performance for most Raman spectra with varying baselines without any user input or preprocessing step. In the simulation, three kinds of fluorescent background curves were introduced to test the effectiveness of the proposed method. We also show that two real Raman spectra (parathion-methyl and colza oil) can be detected and their baselines corrected by the proposed method.
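
    The cyclic-approximation idea can be sketched independently of the adaptive-knot step (the knots below are simply spaced uniformly, a deliberate simplification): fit a cubic B-spline, clamp the working signal down to the fit wherever it exceeds it so that Raman peaks stop pulling the baseline upward, and repeat.

    ```python
    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline

    def bspline_baseline(x, y, n_knots=10, n_iter=20):
        """Iterative cubic B-spline baseline estimate for a spectrum y(x)."""
        # Uniform interior knots strictly inside the (increasing) x range.
        t = np.linspace(x[0], x[-1], n_knots + 2)[1:-1]
        work = np.asarray(y, dtype=float).copy()
        baseline = work
        for _ in range(n_iter):
            spline = LSQUnivariateSpline(x, work, t, k=3)
            baseline = spline(x)
            # Points above the current fit are clamped so that the next pass
            # follows the background rather than the peaks.
            work = np.minimum(work, baseline)
        return baseline

    # corrected_spectrum = y - bspline_baseline(x, y)
    ```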

  4. Correcting encoder interpolation error on the Green Bank Telescope using an iterative model based identification algorithm

    NASA Astrophysics Data System (ADS)

    Franke, Timothy; Weadon, Tim; Ford, John; Garcia-Sanz, Mario

    2015-10-01

    Various forms of measurement errors limit telescope tracking performance in practice. A new method for identifying the correcting coefficients for encoder interpolation error is developed. The algorithm corrects the encoder measurement by identifying a harmonic model of the system and using that model to compute the necessary correction parameters. The approach improves upon others by explicitly modeling the unknown dynamics of the structure and controller and by not requiring a separate system identification to be performed. Experience gained from pin-pointing the source of encoder error on the Green Bank Radio Telescope (GBT) is presented. Several tell-tale indicators of encoder error are discussed. Experimental data from the telescope, tested with two different encoders, are presented. Demonstration of the identification methodology on the GBT as well as details of its implementation are discussed. A root mean square tracking error reduction from 0.68 arc seconds to 0.21 arc sec was achieved by changing encoders and was further reduced to 0.10 arc sec with the calibration algorithm. In particular, the ubiquity of this error source is shown and how, by careful correction, it is possible to go beyond the advertised accuracy of an encoder.
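
    Encoder interpolation error is periodic in the reading itself, so it can be modeled as a short Fourier series in the measured angle and identified by least squares. The sketch below is a generic harmonic regression of that kind, not the iterative model-based identification developed in the paper (which also accounts for the unknown structure and controller dynamics).

    ```python
    import numpy as np

    def fit_harmonics(theta, error, harmonics=(1, 2, 4)):
        """Least-squares fit of error(theta) to sums of sin/cos harmonics."""
        cols = []
        for k in harmonics:
            cols.append(np.sin(k * theta))
            cols.append(np.cos(k * theta))
        A = np.column_stack(cols)
        coeffs, *_ = np.linalg.lstsq(A, error, rcond=None)
        return coeffs   # ordered [a1, b1, a2, b2, ...]

    def corrected_reading(theta, coeffs, harmonics=(1, 2, 4)):
        """Subtract the modeled interpolation error from the raw reading."""
        model = np.zeros_like(np.asarray(theta, dtype=float))
        for k, (a, b) in zip(harmonics, coeffs.reshape(-1, 2)):
            model += a * np.sin(k * theta) + b * np.cos(k * theta)
        return theta - model
    ```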

  5. Alignment algorithms and per-particle CTF correction for single particle cryo-electron tomography.

    PubMed

    Galaz-Montoya, Jesús G; Hecksel, Corey W; Baldwin, Philip R; Wang, Eryu; Weaver, Scott C; Schmid, Michael F; Ludtke, Steven J; Chiu, Wah

    2016-06-01

    Single particle cryo-electron tomography (cryoSPT) extracts features from cryo-electron tomograms, followed by 3D classification, alignment and averaging to generate improved 3D density maps of such features. Robust methods to correct for the contrast transfer function (CTF) of the electron microscope are necessary for cryoSPT to reach its resolution potential. Many factors can make CTF correction for cryoSPT challenging, such as lack of eucentricity of the specimen stage, inherent low dose per image, specimen charging, beam-induced specimen motions, and defocus gradients resulting both from specimen tilting and from unpredictable ice thickness variations. Current CTF correction methods for cryoET make at least one of the following assumptions: that the defocus at the center of the image is the same across the images of a tiltseries, that the particles all lie at the same Z-height in the embedding ice, and/or that the specimen, the cryo-electron microscopy (cryoEM) grid and/or the carbon support are flat. These experimental conditions are not always met. We have developed a CTF correction algorithm for cryoSPT without making any of the aforementioned assumptions. We also introduce speed and accuracy improvements and a higher degree of automation to the subtomogram averaging algorithms available in EMAN2. Using motion-corrected images of isolated virus particles as a benchmark specimen, recorded with a DE20 direct detection camera, we show that our CTF correction and subtomogram alignment routines can yield subtomogram averages close to 4/5 Nyquist frequency of the detector under our experimental conditions. PMID:27016284

  6. Alignment algorithms and per-particle CTF correction for single particle cryo-electron tomography.

    PubMed

    Galaz-Montoya, Jesús G; Hecksel, Corey W; Baldwin, Philip R; Wang, Eryu; Weaver, Scott C; Schmid, Michael F; Ludtke, Steven J; Chiu, Wah

    2016-06-01

    Single particle cryo-electron tomography (cryoSPT) extracts features from cryo-electron tomograms, followed by 3D classification, alignment and averaging to generate improved 3D density maps of such features. Robust methods to correct for the contrast transfer function (CTF) of the electron microscope are necessary for cryoSPT to reach its resolution potential. Many factors can make CTF correction for cryoSPT challenging, such as lack of eucentricity of the specimen stage, inherent low dose per image, specimen charging, beam-induced specimen motions, and defocus gradients resulting both from specimen tilting and from unpredictable ice thickness variations. Current CTF correction methods for cryoET make at least one of the following assumptions: that the defocus at the center of the image is the same across the images of a tiltseries, that the particles all lie at the same Z-height in the embedding ice, and/or that the specimen, the cryo-electron microscopy (cryoEM) grid and/or the carbon support are flat. These experimental conditions are not always met. We have developed a CTF correction algorithm for cryoSPT without making any of the aforementioned assumptions. We also introduce speed and accuracy improvements and a higher degree of automation to the subtomogram averaging algorithms available in EMAN2. Using motion-corrected images of isolated virus particles as a benchmark specimen, recorded with a DE20 direct detection camera, we show that our CTF correction and subtomogram alignment routines can yield subtomogram averages close to 4/5 Nyquist frequency of the detector under our experimental conditions.

  7. A multi-characteristic based algorithm for classifying vegetation in a plateau area: Qinghai Lake watershed, northwestern China

    NASA Astrophysics Data System (ADS)

    Ma, Weiwei; Gong, Cailan; Hu, Yong; Li, Long; Meng, Peng

    2015-10-01

    Remote sensing technology has been broadly recognized for its convenience and efficiency in mapping vegetation, particularly in high-altitude and inaccessible areas where in situ observations are lacking. In this study, Landsat Thematic Mapper (TM) images and images from the CCD sensor of the Chinese environmental mitigation satellite HJ-1 (HJ-1 CCD), both at 30 m spatial resolution, were employed to identify and monitor vegetation types in an area of western China, the Qinghai Lake Watershed (QHLW). A decision classification tree (DCT) algorithm using multiple characteristics, including seasonal TM/HJ-1 CCD time series data combined with a digital elevation model (DEM) dataset, and a supervised maximum likelihood classification (MLC) algorithm using a single-date TM image were applied to vegetation classification. The accuracy of the two algorithms was assessed using field observation data. Based on the resulting vegetation classification maps, the DCT using multi-season data and geomorphologic parameters was superior to the MLC algorithm using a single-date image, improving the overall accuracy by 11.86% at the second class level and significantly reducing the "salt and pepper" noise. The DCT algorithm applied to TM/HJ-1 CCD time series data and geomorphologic parameters appears to be a valuable and reliable tool for monitoring vegetation at the first class level (5 vegetation classes) and second class level (8 vegetation subclasses). The DCT algorithm using multiple characteristics may provide a theoretical basis and general approach to automatic extraction of vegetation types from remote sensing imagery over plateau areas.
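
    The multi-characteristic idea can be sketched with scikit-learn: stack per-pixel seasonal spectral features with terrain attributes from the DEM and let a decision tree learn the split rules. This is a generic stand-in with synthetic data and hypothetical feature names; the paper's DCT was built from expert-defined rules on TM/HJ-1 CCD time series and geomorphologic parameters.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical training table, one row per labeled pixel.
    # Columns: spring NDVI, summer NDVI, autumn NDVI, elevation (m), slope (deg)
    rng = np.random.default_rng(0)
    X = np.column_stack([rng.uniform(0.0, 0.9, 300),
                         rng.uniform(0.0, 0.9, 300),
                         rng.uniform(0.0, 0.9, 300),
                         rng.uniform(3200.0, 4800.0, 300),
                         rng.uniform(0.0, 35.0, 300)])
    y = rng.integers(0, 5, 300)   # 5 first-level vegetation classes

    tree = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X, y)
    print(tree.predict(X[:3]))
    ```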

  8. A real-time misalignment correction algorithm for stereoscopic 3D cameras

    NASA Astrophysics Data System (ADS)

    Pekkucuksen, Ibrahim E.; Batur, Aziz Umit; Zhang, Buyue

    2012-03-01

    Camera calibration is an important problem for stereo 3-D cameras since the misalignment between the two views can lead to vertical disparities that significantly degrade 3-D viewing quality. Offline calibration during manufacturing is not always an option especially for mass produced cameras due to cost. In addition, even if one-time calibration is performed during manufacturing, its accuracy cannot be maintained indefinitely because environmental factors can lead to changes in camera hardware. In this paper, we propose a real-time stereo calibration solution that runs inside a consumer camera and continuously estimates and corrects for the misalignment between the stereo cameras. Our algorithm works by processing images of natural scenes and does not require the use of special calibration charts. The algorithm first estimates the disparity in horizontal and vertical directions between the corresponding blocks from stereo images. Then, this initial estimate is refined with two dimensional search using smaller sub-blocks. The displacement data and block coordinates are fed to a modified affine transformation model and outliers are discarded to keep the modeling error low. Finally, the estimated affine parameters are split by half and misalignment correction is applied to each view accordingly. The proposed algorithm significantly reduces the misalignment between stereo frames and enables a more comfortable 3-D viewing experience.
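
    The fitting step amounts to an ordinary least-squares estimate of an affine map from block coordinates to the measured inter-view displacements, after which each view is warped by half of the correction. A minimal, hypothetical sketch (the block matching, sub-block refinement, and outlier rejection are omitted):

    ```python
    import numpy as np

    def fit_affine(block_xy, displacements):
        """Least-squares affine model mapping block centers to measured shifts.

        block_xy: (N, 2) block-center coordinates.
        displacements: (N, 2) horizontal/vertical shifts between the two views.
        Returns a 2x3 matrix M such that [dx, dy]^T = M @ [x, y, 1]^T.
        """
        A = np.column_stack([block_xy, np.ones(len(block_xy))])   # (N, 3)
        M, *_ = np.linalg.lstsq(A, displacements, rcond=None)     # (3, 2)
        return M.T

    def split_correction(M):
        """Apply half of the estimated misalignment to each view."""
        return 0.5 * M, -0.5 * M
    ```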

  9. A robust background correction algorithm for forensic bloodstain imaging using mean-based contrast adjustment.

    PubMed

    Lee, Wee Chuen; Khoo, Bee Ee; Abdullah, Ahmad Fahmi Lim

    2016-05-01

    Background correction algorithm (BCA) is useful in enhancing the visibility of images captured in crime scenes especially those of untreated bloodstains. Successful implementation of BCA requires all the images to have similar brightness which often proves a problem when using automatic exposure setting in a camera. This paper presents an improved background correction algorithm (BCA) that applies mean-based contrast adjustment as a pre-correction step to adjust the mean brightness of images to be similar before implementing BCA. The proposed modification, namely mean-based adaptive BCA (mABCA) was tested on various image samples captured under different illuminations such as 385 nm, 415 nm and 458 nm. We also evaluated mABCA of two wavelengths (415 nm and 458 nm) and three wavelengths (415 nm, 380 nm and 458 nm) in enhancing untreated bloodstains on different surfaces. The proposed mABCA is found to be more robust in processing images captured in different brightness and thus overcomes the main issue faced in the original BCA. PMID:27162018
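
    The pre-correction step amounts to rescaling every frame so that its mean brightness matches a common reference before the usual background correction is applied. A minimal sketch of that adjustment only (illustrative; the full mABCA pipeline is not reproduced here):

    ```python
    import numpy as np

    def match_mean_brightness(images, reference_mean=None):
        """Scale each image so that all images share the same mean intensity."""
        images = [np.asarray(im, dtype=float) for im in images]
        if reference_mean is None:
            reference_mean = float(np.mean([im.mean() for im in images]))
        return [im * (reference_mean / im.mean()) for im in images]

    # adjusted = match_mean_brightness([img_415nm, img_458nm])
    # ...followed by the background correction step on the adjusted images.
    ```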

  10. A robust background correction algorithm for forensic bloodstain imaging using mean-based contrast adjustment.

    PubMed

    Lee, Wee Chuen; Khoo, Bee Ee; Abdullah, Ahmad Fahmi Lim

    2016-05-01

    Background correction algorithm (BCA) is useful in enhancing the visibility of images captured in crime scenes especially those of untreated bloodstains. Successful implementation of BCA requires all the images to have similar brightness which often proves a problem when using automatic exposure setting in a camera. This paper presents an improved background correction algorithm (BCA) that applies mean-based contrast adjustment as a pre-correction step to adjust the mean brightness of images to be similar before implementing BCA. The proposed modification, namely mean-based adaptive BCA (mABCA) was tested on various image samples captured under different illuminations such as 385 nm, 415 nm and 458 nm. We also evaluated mABCA of two wavelengths (415 nm and 458 nm) and three wavelengths (415 nm, 380 nm and 458 nm) in enhancing untreated bloodstains on different surfaces. The proposed mABCA is found to be more robust in processing images captured in different brightness and thus overcomes the main issue faced in the original BCA.

  11. Multimedia Classifier

    NASA Astrophysics Data System (ADS)

    Costache, G. N.; Gavat, I.

    2004-09-01

    Along with the rapid growth in the amount of digital data available (text, audio samples, digital photos and digital movies, all joined in the multimedia domain), the need for classification, recognition, and retrieval of this kind of data has become very important. This paper presents a system structure for handling multimedia data from a recognition perspective. The main processing steps applied to the multimedia objects of interest are: first, parameterization by analysis, to obtain a feature-based description forming the parameter vector; second, classification, generally with a hierarchical structure, to make the necessary decisions. For audio signals, both speech and music, the derived perceptual features are the mel-cepstral (MFCC) and perceptual linear predictive (PLP) coefficients. For images, the derived features are the geometric parameters of the speaker's mouth. The hierarchical classifier generally consists of a clustering stage based on Kohonen Self-Organizing Maps (SOM) and a final stage based on a powerful classification algorithm, the Support Vector Machine (SVM). In specific variants, the system is applied with good results to two tasks: the first is bimodal speech recognition, which fuses features obtained from the speech signal with features obtained from the speaker's image, and the second is music retrieval from a large music database.
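
    The final decision stage can be sketched with scikit-learn: an RBF-kernel SVM trained on fixed-length feature vectors (for audio, e.g., per-clip mean MFCC coefficients, assumed to have been extracted already). The SOM clustering stage and the feature extraction itself are omitted, and the data below are synthetic placeholders.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Hypothetical data: one 13-dimensional mean-MFCC vector per audio clip,
    # label 0 = speech, 1 = music.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 13))
    y = rng.integers(0, 2, 200)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(X, y)
    print(clf.predict(X[:5]))
    ```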

  12. Optimized Seizure Detection Algorithm: A Fast Approach for Onset of Epileptic in EEG Signals Using GT Discriminant Analysis and K-NN Classifier

    PubMed Central

    Rezaee, Kh.; Azizi, E.; Haddadnia, J.

    2016-01-01

    Background Epilepsy is a severe disorder of the central nervous system that predisposes the person to recurrent seizures. Fifty million people worldwide suffer from epilepsy; after Alzheimer's and stroke, it is the third most widespread nervous disorder. Objective In this paper, an algorithm to detect the onset of epileptic seizures based on the analysis of brain electrical signals (EEG) is proposed. 844 hours of EEG were recorded from 23 pediatric patients consecutively, with 163 occurrences of seizures. Signals had been collected at Children's Hospital Boston with a sampling frequency of 256 Hz through 18 channels in order to assess epilepsy surgery. By selecting effective features from the seizure and non-seizure signals of each individual and putting them into two categories, the proposed algorithm detects the onset of seizures quickly and with high sensitivity. Method In this algorithm, L-second epochs of the signals are represented as third-order tensors in spatial, spectral, and temporal spaces by applying the wavelet transform. Then, after applying general tensor discriminant analysis (GTDA) to the tensors and calculating the mapping matrix, feature vectors are extracted. GTDA increases the sensitivity of the algorithm by retaining data rather than discarding them. Finally, K-nearest neighbors (KNN) is used to classify the selected features. Results The simulation results on a standard dataset show that the algorithm is capable of detecting 98 percent of seizures with an average delay of 4.7 seconds and an average false detection rate of three errors per 24 hours. Conclusion Today, the lack of an automated system to detect or predict seizure onset is strongly felt.
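
    The last stage of the pipeline is a standard K-nearest-neighbors decision on the extracted feature vectors. A minimal sketch of that stage alone (the wavelet decomposition and GTDA steps are omitted; the per-epoch features and labels below are synthetic placeholders):

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical per-epoch feature vectors; label 1 = seizure, 0 = non-seizure.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 40))
    y = (X[:, 0] > 1.28).astype(int)   # roughly 10 % "seizure" epochs

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
    print("held-out accuracy:", knn.score(X_te, y_te))
    ```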

  13. Optimized Seizure Detection Algorithm: A Fast Approach for Onset of Epileptic in EEG Signals Using GT Discriminant Analysis and K-NN Classifier

    PubMed Central

    Rezaee, Kh.; Azizi, E.; Haddadnia, J.

    2016-01-01

    Background Epilepsy is a severe disorder of the central nervous system that predisposes the person to recurrent seizures. Fifty million people worldwide suffer from epilepsy; after Alzheimer's and stroke, it is the third most widespread nervous disorder. Objective In this paper, an algorithm to detect the onset of epileptic seizures based on the analysis of brain electrical signals (EEG) is proposed. 844 hours of EEG were recorded from 23 pediatric patients consecutively, with 163 occurrences of seizures. Signals had been collected at Children's Hospital Boston with a sampling frequency of 256 Hz through 18 channels in order to assess epilepsy surgery. By selecting effective features from the seizure and non-seizure signals of each individual and putting them into two categories, the proposed algorithm detects the onset of seizures quickly and with high sensitivity. Method In this algorithm, L-second epochs of the signals are represented as third-order tensors in spatial, spectral, and temporal spaces by applying the wavelet transform. Then, after applying general tensor discriminant analysis (GTDA) to the tensors and calculating the mapping matrix, feature vectors are extracted. GTDA increases the sensitivity of the algorithm by retaining data rather than discarding them. Finally, K-nearest neighbors (KNN) is used to classify the selected features. Results The simulation results on a standard dataset show that the algorithm is capable of detecting 98 percent of seizures with an average delay of 4.7 seconds and an average false detection rate of three errors per 24 hours. Conclusion Today, the lack of an automated system to detect or predict seizure onset is strongly felt. PMID:27672628

  14. Algorithm for Atmospheric and Glint Corrections of Satellite Measurements of Ocean Pigment

    NASA Technical Reports Server (NTRS)

    Fraser, Robert S.; Mattoo, Shana; Yeh, Eueng-Nan; McClain, C. R.

    1997-01-01

    An algorithm is developed to correct satellite measurements of ocean color for atmospheric and surface reflection effects. The algorithm depends on taking the difference between measured and tabulated radiances to derive water-leaving radiances. The tabulated radiances are related to the measured radiance where the water-leaving radiance is negligible (670 nm). The tabulated radiances are calculated for rough surface reflection, polarization of the scattered light, and multiple scattering. The accuracy of the tables is discussed. The method is validated by simulating the effect of wind speeds different from that for which the lookup table is calculated, and of aerosol models different from the maritime model for which the table is computed. The derived water-leaving radiances are accurate enough to compute the pigment concentration with an error of less than 15% for wind speeds of 6 and 10 m/s and an urban atmosphere with aerosol optical thickness of 0.20 at λ = 443 nm decreasing to 0.10 at λ = 670 nm. The pigment accuracy is lower for wind speeds less than 6 m/s and is about 30% for a model with aeolian dust. On the other hand, in a preliminary comparison with Coastal Zone Color Scanner (CZCS) measurements, this algorithm and the CZCS operational algorithm produced values of pigment concentration in one image that agreed closely.

  15. An improved atmospheric correction algorithm for applying MERIS data to very turbid inland waters

    NASA Astrophysics Data System (ADS)

    Jaelani, Lalu Muhamad; Matsushita, Bunkei; Yang, Wei; Fukushima, Takehiko

    2015-07-01

    Atmospheric correction (AC) is a necessary process when quantitatively monitoring water quality parameters from satellite data. However, it is still a major challenge to carry out AC for turbid coastal and inland waters. In this study, we propose an improved AC algorithm named N-GWI (new standard Gordon and Wang's algorithms with an iterative process and a bio-optical model) for applying MERIS data to very turbid inland waters (i.e., waters with a water-leaving reflectance at 864.8 nm between 0.001 and 0.01). The N-GWI algorithm incorporates three improvements to avoid certain invalid assumptions that limit the applicability of the existing algorithms in very turbid inland waters. First, the N-GWI uses a fixed aerosol type (coastal aerosol) but permits aerosol concentration to vary at each pixel; this improvement omits a complicated requirement for aerosol model selection based only on satellite data. Second, it shifts the reference band from 670 nm to 754 nm to validate the assumption that the total absorption coefficient at the reference band can be replaced by that of pure water, and thus can avoid the uncorrected estimation of the total absorption coefficient at the reference band in very turbid waters. Third, the N-GWI generates a semi-analytical relationship instead of an empirical one for estimation of the spectral slope of particle backscattering. Our analysis showed that the N-GWI improved the accuracy of atmospheric correction in two very turbid Asian lakes (Lake Kasumigaura, Japan and Lake Dianchi, China), with a normalized mean absolute error (NMAE) of less than 22% for wavelengths longer than 620 nm. However, the N-GWI exhibited poor performance in moderately turbid waters (the NMAE values were larger than 83.6% in the four American coastal waters). The applicability of the N-GWI, which includes both advantages and limitations, was discussed.

  16. Retrieval of atmospheric properties from hyper and multispectral imagery with the FLAASH atmospheric correction algorithm

    NASA Astrophysics Data System (ADS)

    Perkins, Timothy; Adler-Golden, Steven; Matthew, Michael; Berk, Alexander; Anderson, Gail; Gardner, James; Felde, Gerald

    2005-10-01

    Atmospheric Correction Algorithms (ACAs) are used in applications of remotely sensed Hyperspectral and Multispectral Imagery (HSI/MSI) to correct for atmospheric effects on measurements acquired by air and space-borne systems. The Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) algorithm is a forward-model based ACA created for HSI and MSI instruments which operate in the visible through shortwave infrared (Vis-SWIR) spectral regime. Designed as a general-purpose, physics-based code for inverting at-sensor radiance measurements into surface reflectance, FLAASH provides a collection of spectral analysis and atmospheric retrieval methods including: a per-pixel vertical water vapor column estimate, determination of aerosol optical depth, estimation of scattering for compensation of adjacency effects, detection/characterization of clouds, and smoothing of spectral structure resulting from an imperfect atmospheric correction. To further improve the accuracy of the atmospheric correction process, FLAASH will also detect and compensate for sensor-introduced artifacts such as optical smile and wavelength mis-calibration. FLAASH relies on the MODTRANTM radiative transfer (RT) code as the physical basis behind its mathematical formulation, and has been developed in parallel with upgrades to MODTRAN in order to take advantage of the latest improvements in speed and accuracy. For example, the rapid, high fidelity multiple scattering (MS) option available in MODTRAN4 can greatly improve the accuracy of atmospheric retrievals over the 2-stream approximation. In this paper, advanced features available in FLAASH are described, including the principles and methods used to derive atmospheric parameters from HSI and MSI data. Results are presented from processing of Hyperion, AVIRIS, and LANDSAT data.

  17. Parallel algorithms of relative radiometric correction for images of TH-1 satellite

    NASA Astrophysics Data System (ADS)

    Wang, Xiang; Zhang, Tingtao; Cheng, Jiasheng; Yang, Tao

    2014-05-01

    The first generation of transmission-type stereo-mapping satellites in China, the TH-1 satellite, acquires three-line-array stereo images with a resolution of 5 meters, multispectral images at 10 meters, and panchromatic high-resolution images at 2 meters. The processing step between level 0 and level 1A of the high-resolution images is called relative radiometric correction (RRC). Processing the high-resolution images, with their large data volumes, is complicated and time consuming, so parallel processing techniques based on CPUs or GPUs are commonly applied in industry to increase processing speed. This article first introduces the whole process and each step of the RRC algorithm currently in use for level-0 high-resolution images; second, it briefly describes the theory and characteristics of the MPI (Message Passing Interface) and OpenMP (Open Multi-Processing) parallel programming techniques, as well as their advantages for image processing; third, for each step of the algorithm in use and based on an MPI+OpenMP hybrid paradigm, the parallelizability of and parallelization strategies for three processing steps are discussed in depth: Radiometric Correction, Splicing Pieces of TDICCD (Time Delay Integration Charge-Coupled Device), and Gray-Level Adjustment among TDICCD pieces; the theoretical speedup of each step and of the whole procedure is then derived according to the processing style and the independence of the calculations. For the Splicing Pieces of TDICCD step, two different parallelization strategies are proposed, to be chosen according to hardware capabilities. Finally, a series of experiments is carried out to verify the parallel algorithms using 2-meter panchromatic high-resolution images from the TH-1 satellite, and the experimental results are analyzed. Strictly on the basis of the former parallel algorithms, the programs in the experiments

  18. Direct cone-beam cardiac reconstruction algorithm with cardiac banding artifact correction

    SciTech Connect

    Taguchi, Katsuyuki; Chiang, Beshan S.; Hein, Ilmar A.

    2006-02-15

    Multislice helical computed tomography (CT) is a promising noninvasive technique for coronary artery imaging. Various factors can cause inconsistencies in cardiac CT data, which can result in degraded image quality. These inconsistencies may be the result of the patient physiology (e.g., heart rate variations), the nature of the data (e.g., cone-angle), or the reconstruction algorithm itself. An algorithm which provides the best temporal resolution for each slice, for example, often provides suboptimal image quality for the entire volume since the cardiac temporal resolution (TRc) changes from slice to slice. Such variations in TRc can generate strong banding artifacts in multi-planar reconstruction images or three-dimensional images. Discontinuous heart walls and coronary arteries may compromise the accuracy of the diagnosis. A β-blocker is often used to reduce and stabilize patients' heart rate but cannot eliminate the variation. In order to obtain robust and optimal image quality, a software solution that increases the temporal resolution and decreases the effect of heart rate is highly desirable. This paper proposes an ECG-correlated direct cone-beam reconstruction algorithm (TCOT-EGR) with cardiac banding artifact correction (CBC) and disconnected projections redundancy compensation technique (DIRECT). First the theory and analytical model of the cardiac temporal resolution is outlined. Next, the performance of the proposed algorithms is evaluated by using computer simulations as well as patient data. It will be shown that the proposed algorithms enhance the robustness of the image quality against inconsistencies by guaranteeing smooth transition of heart cycles used in reconstruction.

  19. A background correction algorithm for Van Allen Probes MagEIS electron flux measurements

    SciTech Connect

    Claudepierre, S. G.; O'Brien, T. P.; Blake, J. B.; Fennell, J. F.; Roeder, J. L.; Clemmons, J. H.; Looper, M. D.; Mazur, J. E.; Mulligan, T. M.; Spence, H. E.; Reeves, G. D.; Friedel, R. H. W.; Henderson, M. G.; Larsen, B. A.

    2015-07-14

    We describe an automated computer algorithm designed to remove background contamination from the Van Allen Probes Magnetic Electron Ion Spectrometer (MagEIS) electron flux measurements. We provide a detailed description of the algorithm with illustrative examples from on-orbit data. We find two primary sources of background contamination in the MagEIS electron data: inner zone protons and bremsstrahlung X-rays generated by energetic electrons interacting with the spacecraft material. Bremsstrahlung X-rays primarily produce contamination in the lower energy MagEIS electron channels (~30–500 keV) and in regions of geospace where multi-MeV electrons are present. Inner zone protons produce contamination in all MagEIS energy channels at roughly L < 2.5. The background-corrected MagEIS electron data produce a more accurate measurement of the electron radiation belts, as most earlier measurements suffer from unquantifiable and uncorrectable contamination in this harsh region of the near-Earth space environment. These background-corrected data will also be useful for spacecraft engineering purposes, providing ground truth for the near-Earth electron environment and informing the next generation of spacecraft design models (e.g., AE9).

  20. A background correction algorithm for Van Allen Probes MagEIS electron flux measurements

    DOE PAGES

    Claudepierre, S. G.; O'Brien, T. P.; Blake, J. B.; Fennell, J. F.; Roeder, J. L.; Clemmons, J. H.; Looper, M. D.; Mazur, J. E.; Mulligan, T. M.; Spence, H. E.; et al

    2015-07-14

    We describe an automated computer algorithm designed to remove background contamination from the Van Allen Probes Magnetic Electron Ion Spectrometer (MagEIS) electron flux measurements. We provide a detailed description of the algorithm with illustrative examples from on-orbit data. We find two primary sources of background contamination in the MagEIS electron data: inner zone protons and bremsstrahlung X-rays generated by energetic electrons interacting with the spacecraft material. Bremsstrahlung X-rays primarily produce contamination in the lower energy MagEIS electron channels (~30–500 keV) and in regions of geospace where multi-MeV electrons are present. Inner zone protons produce contamination in all MagEIS energy channels at roughly L < 2.5. The background-corrected MagEIS electron data produce a more accurate measurement of the electron radiation belts, as most earlier measurements suffer from unquantifiable and uncorrectable contamination in this harsh region of the near-Earth space environment. These background-corrected data will also be useful for spacecraft engineering purposes, providing ground truth for the near-Earth electron environment and informing the next generation of spacecraft design models (e.g., AE9).

  1. Correction.

    PubMed

    2015-11-01

    In the article by Heuslein et al, which published online ahead of print on September 3, 2015 (DOI: 10.1161/ATVBAHA.115.305775), a correction was needed. Brett R. Blackman was added as the penultimate author of the article. The article has been corrected for publication in the November 2015 issue. PMID:26490278

  2. Algorithms for calculating mass-velocity and Darwin relativistic corrections with n-electron explicitly correlated Gaussians with shifted centers

    NASA Astrophysics Data System (ADS)

    Stanke, Monika; Palikot, Ewa; Adamowicz, Ludwik

    2016-05-01

    Algorithms for calculating the leading mass-velocity (MV) and Darwin (D) relativistic corrections are derived for electronic wave functions expanded in terms of n-electron explicitly correlated Gaussian functions with shifted centers and without pre-exponential angular factors. The algorithms are implemented and tested in calculations of MV and D corrections for several points on the ground-state potential energy curves of the H2 and LiH molecules. The algorithms are general and can be applied in calculations of systems with an arbitrary number of electrons.

  3. a New Control Points Based Geometric Correction Algorithm for Airborne Push Broom Scanner Images Without On-Board Data

    NASA Astrophysics Data System (ADS)

    Strakhov, P.; Badasen, E.; Shurygin, B.; Kondranin, T.

    2016-06-01

    Push broom scanners, such as video spectrometers (also called hyperspectral sensors), are widely used at present. Using the scanned images requires accurate geometric correction, which becomes complicated when the imaging platform is airborne. This work contains a detailed description of a new algorithm developed for processing such images. The algorithm requires only user-provided control points and is able to correct distortions caused by yaw and by changes in flight speed and height. It was tested on two series of airborne images and yielded RMS error values on the order of 7 meters (3-6 source image pixels), compared to 13 meters for polynomial-based correction.

  4. Algorithms Based on CWT and Classifiers to Control Cardiac Alterations and Stress Using an ECG and a SCR

    PubMed Central

    Villarejo, María Viqueira; Zapirain, Begoña García; Zorrilla, Amaia Méndez

    2013-01-01

    This paper presents the results of using a commercial pulsimeter as an electrocardiogram (ECG) sensor for wireless detection of cardiac alterations and stress levels in home monitoring. For these purposes, signal processing techniques (Continuous Wavelet Transform (CWT) and J48) have been used, respectively. The designed algorithm analyses the ECG signal and is able to detect the heart rate (99.42%), arrhythmia (93.48%) and extrasystoles (99.29%). Stress detection is complemented with the Skin Conductance Response (SCR), whose success rate is 94.02%. Heart rate variability did not add value to stress detection in this case. With this pulsimeter it is possible to prevent and detect anomalies in a non-intrusive way as part of a telemedicine system. It can also be used during physical activity because the CWT minimizes motion artifacts. PMID:23666135

  5. Algorithms based on CWT and classifiers to control cardiac alterations and stress using an ECG and a SCR.

    PubMed

    Villarejo, María Viqueira; Zapirain, Begoña García; Zorrilla, Amaia Méndez

    2013-05-10

    This paper presents the results of using a commercial pulsimeter as an electrocardiogram (ECG) sensor for wireless detection of cardiac alterations and stress levels in home monitoring. For these purposes, signal processing techniques (Continuous Wavelet Transform (CWT) and J48) have been used, respectively. The designed algorithm analyses the ECG signal and is able to detect the heart rate (99.42%), arrhythmia (93.48%) and extrasystoles (99.29%). Stress detection is complemented with the Skin Conductance Response (SCR), whose success rate is 94.02%. Heart rate variability did not add value to stress detection in this case. With this pulsimeter it is possible to prevent and detect anomalies in a non-intrusive way as part of a telemedicine system. It can also be used during physical activity because the CWT minimizes motion artifacts.

  6. Image-based EPI ghost correction using an algorithm based on projection onto convex sets (POCS).

    PubMed

    Lee, K J; Barber, D C; Paley, M N; Wilkinson, I D; Papadakis, N G; Griffiths, P D

    2002-04-01

    This work describes the use of a method, based on the projection onto convex sets (POCS) algorithm, for reduction of the N/2 ghost in echo-planar imaging (EPI). In this method, ghosts outside the parent image are set to zero and a model k-space is obtained from the Fourier transform (FT) of the resulting image. The zeroth- and first-order phase corrections for each line of the original k-space are estimated by comparison with the corresponding line in the model k-space. To overcome problems of phase wrapping, the first-order phase corrections for the lines of the original k-space are estimated by registration with the corresponding lines in the model k-space. It is shown that applying these corrections will result in a reduction of the ghost, and that iterating the process will result in a convergence towards an image in which the ghost is minimized. The method is tested on spin-echo EPI data. The results show that the method is robust and remarkably effective, reducing the N/2 ghost to a level nearly comparable to that achieved with reference scans.
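
    A minimal numerical sketch of a POCS-style N/2 ghost reduction of the kind described above, under two stated assumptions: the parent image is taken to occupy the central half of the field along the phase-encode axis, and the first-order phase term is obtained with a simple weighted linear fit rather than the registration step used in the paper. The function name and parameters are illustrative, not the authors' implementation.

        import numpy as np

        def pocs_ghost_correction(kspace, n_iter=5):
            # kspace: 2-D complex array, phase-encode along axis 0, readout along axis 1
            ny, nx = kspace.shape
            x = np.arange(nx) - nx / 2
            k = kspace.copy()
            for _ in range(n_iter):
                img = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(k)))
                # Projection: zero the region where only the N/2 ghost can live
                # (assumed here to be the outer quarters along the phase-encode axis).
                model = img.copy()
                model[:ny // 4, :] = 0
                model[-(ny // 4):, :] = 0
                model_k = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(model)))
                for line in range(ny):
                    # 0th/1st-order phase difference between the measured and model lines
                    dphi = np.angle(k[line] * np.conj(model_k[line]))
                    w = np.abs(model_k[line]) + 1e-12
                    phi1, phi0 = np.polyfit(x, dphi, 1, w=w)
                    k[line] = k[line] * np.exp(-1j * (phi0 + phi1 * x))
            return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(k)))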

  7. An automatic stain removal algorithm of series aerial photograph based on flat-field correction

    NASA Astrophysics Data System (ADS)

    Wang, Gang; Yan, Dongmei; Yang, Yang

    2010-10-01

    Dust on the camera's lens leaves dark stains on the image, so calibrating and compensating the intensity of the stained pixels plays an important role in airborne image processing. This article introduces an automatic compensation algorithm for the dark stains based on the theory of flat-field correction. We produced a whiteboard reference image by aggregating hundreds of images recorded in one flight, using their average pixel values to simulate uniform white-light irradiation. We then constructed a look-up-table function based on this whiteboard image to calibrate the stained images. The experimental results show that the proposed procedure can remove lens stains effectively and automatically.
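
    To illustrate the flat-field idea described above (averaging many frames from one flight into a "whiteboard" reference and deriving a per-pixel correction from it), here is a hedged sketch; the function names and the simple gain-style look-up are assumptions for illustration, not the authors' exact procedure.

        import numpy as np

        def build_gain_map(frames):
            # frames: (N, H, W) stack of images recorded in one flight
            reference = frames.mean(axis=0)                        # simulated uniform "whiteboard"
            gain = reference.mean() / np.clip(reference, 1e-6, None)
            return gain                                            # per-pixel correction factors

        def remove_stains(image, gain, white_level=255.0):
            return np.clip(image.astype(float) * gain, 0.0, white_level)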

  8. A Local Corrections Algorithm for Solving Poisson's Equation inThree Dimensions

    SciTech Connect

    McCorquodale, Peter; Colella, Phillip; Balls, Gregory T.; Baden, Scott B.

    2006-10-30

    We present a second-order accurate algorithm for solving the free-space Poisson's equation on a locally-refined nested grid hierarchy in three dimensions. Our approach is based on linear superposition of local convolutions of localized charge distributions, with the nonlocal coupling represented on coarser grids. The representation of the nonlocal coupling on the local solutions is based on Anderson's Method of Local Corrections and does not require iteration between different resolutions. A distributed-memory parallel implementation of this method is observed to have a computational cost per grid point less than three times that of a standard FFT-based method on a uniform grid of the same resolution, and scales well up to 1024 processors.

  9. A procedure for testing the quality of LANDSAT atmospheric correction algorithms

    NASA Technical Reports Server (NTRS)

    Dias, L. A. V. (Principal Investigator); Vijaykumar, N. L.; Neto, G. C.

    1982-01-01

    There are two basic methods for testing the quality of an algorithm that minimizes atmospheric effects on LANDSAT imagery: (1) test the results a posteriori, using ground truth or control points; (2) use a method based on image data plus estimation of additional ground and/or atmospheric parameters. A procedure based on the second method is described. In order to select the parameters, the image contrast is first examined for a series of parameter combinations; the contrast improves for better corrections. In addition, the correlation coefficient between two subimages of the same scene, taken at different times, is used for parameter selection. The regions to be correlated should not have changed considerably over time. A few examples using the proposed procedure are presented.

  10. Accuracy of inhomogeneity correction algorithm in intensity-modulated radiotherapy of head-and-neck tumors

    SciTech Connect

    Yoon, Myonggeun; Lee, Doo-Hyun; Shin, Dongho; Lee, Se Byeong; Park, Sung Yong . E-mail: cool_park@ncc.re.kr; Cho, Kwan Ho

    2007-04-01

    We examined the degree of calculated-to-measured dose difference for a nasopharyngeal target volume in intensity-modulated radiotherapy (IMRT), based on the observed/expected ratio, using patient anatomy represented by a humanoid head-and-neck phantom. The plans were designed with a clinical treatment planning system that uses a measurement-based pencil-beam dose-calculation algorithm. Two kinds of IMRT plans, which give a direct indication of the error introduced in routine treatment planning, were categorized and evaluated. The experimental results show that when the beams pass through the oral cavity of the anthropomorphic head-and-neck phantom, the average dose difference becomes significant, revealing about a 10% difference relative to the prescribed dose at the isocenter. To investigate the physical reasons for the dose discrepancy and the inhomogeneity effect, we performed 10 cases of IMRT quality assurance (QA) with plastic and humanoid phantoms. Our results suggest that transient electronic disequilibrium with an increased lateral electron range may cause the inaccuracy of the dose-calculation algorithm, and that the effectiveness of the inhomogeneity corrections used in IMRT plans should be evaluated to ensure meaningful quality assurance and delivery.

  11. A novel image-based motion correction algorithm on ultrasonic image

    NASA Astrophysics Data System (ADS)

    Wang, Xuan; Li, Yaqin; Li, Shigao

    2015-12-01

    Lung respiratory movement causes errors during image-guided navigation surgery and is the main source of error in the navigation system. To address this problem, an image-based motion correction strategy is needed that can quickly correct for respiratory motion in the image sequence; commercial ultrasound machines can display contrast and tissue images simultaneously, which makes such a strategy practical. In this paper, a convenient, simple and easy-to-use breathing model with precision close to the sub-voxel level is proposed. First, exploiting the low gray-level variation of the tissue images in the clinical case, motion parameters are calculated from the actual lung movement information at each point, and the tissue images are registered using template matching with a sum-of-absolute-differences metric. Then, similar images are selected by a double-selection method that requires setting global and local thresholds. A generic breathing model is constructed from all of the sample data. Experimental results show that the algorithm can greatly reduce the original errors caused by breathing movement.

  12. Intensity Inhomogeneity Correction of Structural MR Images: A Data-Driven Approach to Define Input Algorithm Parameters

    PubMed Central

    Ganzetti, Marco; Wenderoth, Nicole; Mantini, Dante

    2016-01-01

    Intensity non-uniformity (INU) in magnetic resonance (MR) imaging is a major issue when conducting analyses of brain structural properties. An inaccurate INU correction may result in qualitative and quantitative misinterpretations. Several INU correction methods exist, whose performance largely depends on the specific parameter settings that need to be chosen by the user. Here we addressed the question of how to select the best input parameters for a specific INU correction algorithm. Our investigation was based on the INU correction algorithm implemented in SPM, but this can in principle be extended to any other algorithm requiring the selection of input parameters. We conducted a comprehensive comparison of indirect metrics for the assessment of INU correction performance, namely the coefficient of variation of white matter (CVWM), the coefficient of variation of gray matter (CVGM), and the coefficient of joint variation between white matter and gray matter (CJV). Using simulated MR data, we observed the CJV to be more accurate than CVWM and CVGM, provided that the noise level in the INU-corrected image was controlled by means of spatial smoothing. Based on the CJV, we developed a data-driven approach for selecting INU correction parameters, which could effectively work on actual MR images. To this end, we implemented an enhanced procedure for the definition of white and gray matter masks, based on which the CJV was calculated. Our approach was validated using actual T1-weighted images collected with 1.5 T, 3 T, and 7 T MR scanners. We found that our procedure can reliably assist the selection of valid INU correction algorithm parameters, thereby contributing to an enhanced inhomogeneity correction in MR images. PMID:27014050
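
    For reference, the indirect metrics compared above are commonly defined as in the short sketch below; this assumes the usual textbook definitions (coefficient of variation per tissue class, coefficient of joint variation between white and gray matter) rather than any SPM-specific implementation detail.

        import numpy as np

        def cv(tissue):
            # coefficient of variation of one tissue class (e.g., CVWM or CVGM)
            return np.std(tissue) / np.mean(tissue)

        def cjv(wm, gm):
            # coefficient of joint variation between white matter and gray matter
            return (np.std(wm) + np.std(gm)) / abs(np.mean(wm) - np.mean(gm))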

  13. A simplified procedure for correcting both errors and erasures of a Reed-Solomon code using the Euclidean algorithm

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Hsu, I. S.; Eastman, W. L.; Reed, I. S.

    1987-01-01

    It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial and the error evaluator polynomial in Berlekamp's key equation needed to decode a Reed-Solomon (RS) code. A simplified procedure is developed and proved to correct erasures as well as errors by replacing the initial condition of the Euclidean algorithm by the erasure locator polynomial and the Forney syndrome polynomial. By this means, the errata locator polynomial and the errata evaluator polynomial can be obtained, simultaneously and simply, by the Euclidean algorithm only. With this improved technique the complexity of time domain RS decoders for correcting both errors and erasures is reduced substantially from previous approaches. As a consequence, decoders for correcting both errors and erasures of RS codes can be made more modular, regular, simple, and naturally suitable for both VLSI and software implementation. An example illustrating this modified decoding procedure is given for a (15, 9) RS code.

  14. Dosimetric Correction for a 4D-Computed Tomography Dataset using the Free-Form Deformation Algorithm

    NASA Astrophysics Data System (ADS)

    Markel, Daniel; Alasti, Hamideh; Chow, James C. L.

    2012-10-01

    A Free-Form Deformable (FFD) image registration algorithm in conjunction with 4D Computed Tomography (CT) images was implemented within a graphical user interface, FFD4D, for dosimetric calculations. The algorithm was developed using the cubic-B-spline method with smoothness corrections and registration point assistance to mark fiducials. Validation of the algorithm was performed with manually measured geometric differences using a QUASAR Respiratory Motion Phantom. In this work, we used the FFD algorithm to demonstrate dosimetric corrections amongst 10 breathing phases of a lung cancer patient using the 4D-CT image datasets. Different methods to enhance the image processing speed for high-performance computing were also discussed.

  15. Feature Selection and Effective Classifiers.

    ERIC Educational Resources Information Center

    Deogun, Jitender S.; Choubey, Suresh K.; Raghavan, Vijay V.; Sever, Hayri

    1998-01-01

    Develops and analyzes four algorithms for feature selection in the context of rough set methodology. Experimental results confirm the expected relationship between the time complexity of these algorithms and the classification accuracy of the resulting upper classifiers. When compared, results of upper classifiers perform better than lower…

  16. Phase-distortion correction based on stochastic parallel proportional-integral-derivative algorithm for high-resolution adaptive optics

    NASA Astrophysics Data System (ADS)

    Sun, Yang; Wu, Ke-nan; Gao, Hong; Jin, Yu-qi

    2015-02-01

    A novel optimization method, the stochastic parallel proportional-integral-derivative (SPPID) algorithm, is proposed for high-resolution phase-distortion correction in wave-front-sensorless adaptive optics (WSAO). To enhance the global search and self-adaptation of the stochastic parallel gradient descent (SPGD) algorithm, the residual error of the performance metric and its temporal integration are added into the calculation of the incremental control signal. Based on the maximum fitting rate between the real wave-front and the corrector, a goal value of the metric is set as the reference. The residual error of the metric relative to this reference is transformed into proportional and integral terms to produce an adaptive step-size update law for the SPGD algorithm. The adaptation of the step size leads the blind optimization toward the desired goal and helps it escape from local extrema. Unlike a conventional proportional-integral-derivative (PID) algorithm, the SPPID algorithm designs the incremental control signal as PI-by-D for adaptive adjustment of the control law in the SPGD algorithm. Experiments on high-resolution phase-distortion correction in "frozen" turbulence, based on optimization of influence-function coefficients, were carried out using 128-by-128 spatial light modulators, a photodetector and a control computer. The results revealed that the presented algorithm offered better performance in both cases. The step-size update based on the residual error and its temporal integration was shown to resolve the severe local lock-in problem of the SPGD algorithm when used in high-resolution adaptive optics.
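
    A hedged sketch of the idea: a standard SPGD iteration whose gain is adapted from the residual between a goal metric value and the current metric (proportional plus integral terms). The callables measure_metric and apply_control, and all gain constants, are hypothetical placeholders rather than the authors' settings.

        import numpy as np

        def sppid_like_optimize(measure_metric, apply_control, n_act,
                                goal=1.0, kp=0.5, ki=0.05, sigma=0.02, n_iter=500):
            u = np.zeros(n_act)              # control vector (e.g., influence-function coefficients)
            integral = 0.0
            rng = np.random.default_rng()
            for _ in range(n_iter):
                err = goal - measure_metric(u)           # residual of the performance metric
                integral += err
                gain = kp * err + ki * integral          # PI-adapted step size
                delta = sigma * rng.choice([-1.0, 1.0], size=n_act)
                dJ = measure_metric(u + delta) - measure_metric(u - delta)
                u = u + gain * dJ * delta                # SPGD update with adaptive gain
                apply_control(u)
            return u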

  17. Evaluation and analysis of SEASAT-A Scanning Multichannel Microwave Radiometer (SMMR) Antenna Pattern Correction (APC) algorithm

    NASA Technical Reports Server (NTRS)

    Kitzis, J. L.; Kitzis, S. N.

    1979-01-01

    An evaluation of the versions of the SEASAT-A SMMR antenna pattern correction (APC) algorithm is presented. Two efforts are focused upon in the APC evaluation: the intercomparison of the interim, box, cross, and nominal APC modes; and the development of software to facilitate the creation of matched spacecraft and surface truth data sets which are located together in time and space. The problems discovered in earlier versions of the APC, now corrected, are discussed.

  18. Corrections

    NASA Astrophysics Data System (ADS)

    2012-09-01

    The feature article "Material advantage?" on the effects of technology and rule changes on sporting performance (July pp28-30) stated that sprinters are less affected by lower oxygen levels at high altitudes because they run "aerobically". They run anaerobically. The feature about the search for the Higgs boson (August pp22-26) incorrectly gave the boson's mass as roughly 125 MeV; it is 125 GeV, as correctly stated elsewhere in the issue. The article also gave a wrong value for the intended collision energy of the Superconducting Super Collider, which was designed to collide protons with a total energy of 40 TeV.

  19. Correction.

    PubMed

    2015-05-22

    The Circulation Research article by Keith and Bolli (“String Theory” of c-kitpos Cardiac Cells: A New Paradigm Regarding the Nature of These Cells That May Reconcile Apparently Discrepant Results. Circ Res. 2015:116:1216-1230. doi: 10.1161/CIRCRESAHA.116.305557) states that van Berlo et al (2014) observed that large numbers of fibroblasts and adventitial cells, some smooth muscle and endothelial cells, and rare cardiomyocytes originated from c-kit positive progenitors. However, van Berlo et al reported that only occasional fibroblasts and adventitial cells derived from c-kit positive progenitors in their studies. Accordingly, the review has been corrected to indicate that van Berlo et al (2014) observed that large numbers of endothelial cells, with some smooth muscle cells and fibroblasts, and more rarely cardiomyocytes, originated from c-kit positive progenitors in their murine model. The authors apologize for this error, and the error has been noted and corrected in the online version of the article, which is available at http://circres.ahajournals.org/content/116/7/1216.full. PMID:25999426

  20. Classifying Microorganisms.

    ERIC Educational Resources Information Center

    Baker, William P.; Leyva, Kathryn J.; Lang, Michael; Goodmanis, Ben

    2002-01-01

    Focuses on an activity in which students sample air at school and generate ideas about how to classify the microorganisms they observe. The results are used to compare air quality among schools via the Internet. Supports the development of scientific inquiry and technology skills. (DDR)

  1. An Efficient Correction Algorithm for Eliminating Image Misalignment Effects on Co-Phasing Measurement Accuracy for Segmented Active Optics Systems.

    PubMed

    Yue, Dan; Xu, Shuyan; Nie, Haitao; Wang, Zongyang

    2016-01-01

    The misalignment between recorded in-focus and out-of-focus images used by the Phase Diversity (PD) algorithm leads to a dramatic decline in wavefront detection accuracy and image recovery quality for segmented active optics systems. This paper demonstrates the theoretical relationship between the image misalignment and the tip-tilt terms in the Zernike polynomials of the wavefront phase for the first time, and an efficient two-step alignment correction algorithm is proposed to eliminate these misalignment effects. The algorithm first applies a spatial 2-D cross-correlation to the misaligned images, reducing the offset to within 1 or 2 pixels and narrowing the search range for alignment. It then eliminates the need for subpixel fine alignment, achieving adaptive correction by adding additional tip-tilt terms to the Optical Transfer Function (OTF) of the out-of-focus channel. The experimental results demonstrate the feasibility and validity of the proposed correction algorithm to improve the measurement accuracy during the co-phasing of segmented mirrors. With this alignment correction, the reconstructed wavefront is more accurate, and the recovered image is of higher quality.

  2. An Efficient Correction Algorithm for Eliminating Image Misalignment Effects on Co-Phasing Measurement Accuracy for Segmented Active Optics Systems

    PubMed Central

    Yue, Dan; Xu, Shuyan; Nie, Haitao; Wang, Zongyang

    2016-01-01

    The misalignment between recorded in-focus and out-of-focus images used by the Phase Diversity (PD) algorithm leads to a dramatic decline in wavefront detection accuracy and image recovery quality for segmented active optics systems. This paper demonstrates the theoretical relationship between the image misalignment and the tip-tilt terms in the Zernike polynomials of the wavefront phase for the first time, and an efficient two-step alignment correction algorithm is proposed to eliminate these misalignment effects. The algorithm first applies a spatial 2-D cross-correlation to the misaligned images, reducing the offset to within 1 or 2 pixels and narrowing the search range for alignment. It then eliminates the need for subpixel fine alignment, achieving adaptive correction by adding additional tip-tilt terms to the Optical Transfer Function (OTF) of the out-of-focus channel. The experimental results demonstrate the feasibility and validity of the proposed correction algorithm to improve the measurement accuracy during the co-phasing of segmented mirrors. With this alignment correction, the reconstructed wavefront is more accurate, and the recovered image is of higher quality. PMID:26934045
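
    A minimal sketch of the coarse-alignment step described above: an FFT-based 2-D cross-correlation gives the integer-pixel offset between the in-focus and out-of-focus images, leaving only a small residual that can be absorbed as tip-tilt terms in the out-of-focus OTF. Function and variable names are illustrative.

        import numpy as np

        def coarse_offset(reference, moving):
            # FFT-based 2-D cross-correlation (periodic boundaries assumed)
            xcorr = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(moving)))
            peak = np.array(np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape))
            shape = np.array(xcorr.shape)
            peak[peak > shape // 2] -= shape[peak > shape // 2]   # wrap to signed shifts
            return peak     # (dy, dx) shift of `moving` relative to `reference`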

  3. An analytical algorithm for skew-slit imaging geometry with nonuniform attenuation correction

    SciTech Connect

    Huang Qiu; Zeng, Gengsheng L.

    2006-04-15

    The pinhole collimator is currently the collimator of choice in small animal single photon emission computed tomography (SPECT) imaging because it can provide high spatial resolution and reasonable sensitivity when the animal is placed very close to the pinhole. It is well known that if the collimator rotates around the object (e.g., a small animal) in a circular orbit to form a cone-beam imaging geometry with a planar trajectory, the acquired data are not sufficient for an exact artifact-free image reconstruction. In this paper a novel skew-slit collimator is mounted instead of the pinhole collimator in order to significantly reduce the image artifacts caused by the geometry. The skew-slit imaging geometry is a more generalized version of the pinhole imaging geometry. The multiple pinhole geometry can also be extended to the multiple-skew-slit geometry. An analytical algorithm for image reconstruction based on the tilted fan-beam inversion is developed with nonuniform attenuation compensation. Numerical simulation shows that the axial artifacts are evidently suppressed in the skew-slit images compared to the pinhole images and the attenuation correction is effective.

  4. Algorithm of geometry correction for airborne 3D scanning laser radar

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Chen, Siying; Zhang, Yinchao; Ni, Guoqiang

    2009-11-01

    Airborne three-dimensional scanning laser radar is used for wholesale scanning exploration of a target area; a three-dimensional model can then be established and target features identified from the characteristics of the echo signals. It is therefore widely used and has bright prospects in modern military, scientific research, agricultural and industrial applications. At present, most researchers focus on higher-precision, more reliable scanning systems. Because the scanning platform is fixed to the aircraft, the aircraft cannot stay level for long, nor can it fly its route for long without deviation. Data acquisition and the subsequent calibration rely on different pieces of equipment, which introduce errors in both time and space. Accurate geometric correction can remove the errors created during assembly, but for the errors caused by the aircraft during flight, the whole imaging process must be analyzed. Taking roll as an example, the scanning direction is tilted so that the scanning point deviates from its original position; the corrected direction and coordinates are what we seek. In this paper, the errors caused by roll, pitch, yaw and assembly are analyzed and the algorithm routine is designed.

  5. Sound Field Directivity Correction in Synthetic Aperture Algorithm for Medical Ultrasound

    NASA Astrophysics Data System (ADS)

    Tasinkevych, Yuriy; Klimonda, Ziemowit; Lewandowski, Marcin; Nowicki, Andrzej

    The paper presents a modified multi-element synthetic transmit aperture (MSTA) method for ultrasound imaging with RF echo correction that takes into account the influence of element directivity, a property that becomes significant as the element width becomes comparable to the wavelength corresponding to the nominal frequency of the transmit signal. The angular dependence of the radiation efficiency of the transmit-receive aperture is approximated by a far-field radiation pattern resulting from the exact solution of the corresponding mixed boundary-value problem for a periodic baffle system. The directivity is calculated at the nominal frequency of the excitation signal and is incorporated into the conventional MSTA algorithm. Numerical experiments performed in the MATLAB® environment using data simulated by the FIELD II program, as well as measurement data acquired with the Ultrasonix SonixTOUCH Research system, are shown. A comparison of the results obtained by the modified and conventional MSTA methods reveals a significant improvement in image quality, especially in the area neighboring the transducer's aperture, together with an increase in visualization depth.

  6. Enhancement of seminal stains using background correction algorithm with colour filters.

    PubMed

    Lee, Wee Chuen; Khoo, Bee Ee; Abdullah, Ahmad Fahmi Lim

    2016-06-01

    Evidence at crime scenes in the form of biological stains that cannot be visualized during naked-eye examination can be detected by imaging their fluorescence using a combination of excitation lights and suitable filters. These combinations selectively allow the passage of fluorescence light emitted from the targeted stains. However, interference from the fluorescence generated by many of the surface materials bearing the stains often makes it difficult to visualize the stains during forensic photography. This report describes the use of a background correction algorithm (BCA) to enhance the visibility of seminal stains, a biological evidence type that fluoresces. While earlier reports described the use of narrow band-pass filters for other fluorescing evidence, here we utilize the BCA to enhance images captured using commonly available colour filters: yellow, orange and red. Mean-based contrast adjustment was incorporated into the BCA to adjust the background brightness so that the images' backgrounds appear similar, a crucial step for ensuring success when implementing the BCA. Experimental results demonstrate the effectiveness of the proposed colour-filter approach using the improved BCA in enhancing the visibility of seminal stains at varying dilutions on selected surfaces.

  7. A Robust In-Situ Warp-Correction Algorithm For VISAR Streak Camera Data at the National Ignition Facility

    SciTech Connect

    Labaria, George R.; Warrick, Abbie L.; Celliers, Peter M.; Kalantar, Daniel H.

    2015-01-12

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high-energy-density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for the use in a production environment.
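
    A hedged sketch of the thin-plate-spline warp idea using off-the-shelf SciPy routines: a TPS is fit between the ideal comb positions and the positions measured on the calibration image, and the data image is then resampled onto the ideal grid. This is a generic TPS resampler under those assumptions, not the NIF production code; names are illustrative.

        import numpy as np
        from scipy.interpolate import RBFInterpolator
        from scipy.ndimage import map_coordinates

        def tps_warp_correct(image, ideal_pts, measured_pts):
            # Fit a thin-plate spline mapping ideal (row, col) comb positions to the
            # distorted positions observed on the streak-camera calibration image.
            tps = RBFInterpolator(ideal_pts, measured_pts, kernel='thin_plate_spline')
            h, w = image.shape
            grid = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing='ij'), axis=-1)
            src = tps(grid.reshape(-1, 2).astype(float)).reshape(h, w, 2)
            # Pull each output pixel from its distorted source location.
            return map_coordinates(image, [src[..., 0], src[..., 1]], order=1, mode='nearest')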

  8. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the ‘Extreme Learning Machine’ Algorithm

    PubMed Central

    McDonnell, Mark D.; Tissera, Migel D.; Vladusich, Tony; van Schaik, André; Tapson, Jonathan

    2015-01-01

    Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the ‘Extreme Learning Machine’ (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden-unit operates only on a randomly sized and positioned patch of each image. This form of random ‘receptive field’ sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden-units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems. PMID:26262687
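
    A hedged sketch of the core ELM recipe described above: sparse random input weights in which each hidden unit sees only a randomly positioned image patch, a fixed nonlinearity, and output weights obtained by ridge-regularized least squares. Patch size, hidden-unit count and the regularization constant are illustrative, not the paper's settings.

        import numpy as np

        def train_elm(X, y, n_hidden=1000, img_shape=(28, 28), patch=9, seed=0):
            rng = np.random.default_rng(seed)
            h, w = img_shape
            W = np.zeros((h * w, n_hidden))
            for j in range(n_hidden):                      # random "receptive field" per hidden unit
                r = rng.integers(0, h - patch + 1)
                c = rng.integers(0, w - patch + 1)
                mask = np.zeros((h, w), dtype=bool)
                mask[r:r + patch, c:c + patch] = True
                W[mask.ravel(), j] = rng.standard_normal(patch * patch)
            H = np.tanh(X @ W)                             # hidden-layer activations
            T = np.eye(int(y.max()) + 1)[y]                # one-hot targets
            beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(n_hidden), H.T @ T)
            return W, beta

        def elm_predict(X, W, beta):
            return np.argmax(np.tanh(X @ W) @ beta, axis=1)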

  9. Using classifier fusion to improve the performance of multiclass classification problems

    NASA Astrophysics Data System (ADS)

    Lynch, Robert; Willett, Peter

    2013-05-01

    The problem of multiclass classification is often modeled by breaking it down into a collection of binary classifiers, as opposed to jointly modeling all classes with a single primary classifier. Various methods can be found in the literature for decomposing the multiclass problem into a collection of binary classifiers. Typical algorithms that are studied here include each versus all remaining (EVAR), each versus all individually (EVAI), and output correction coding (OCC). With each of these methods, a classifier-fusion-based decision rule is formulated that utilizes the various binary classifiers to determine the correct classification of an unknown data point. For example, with EVAR the binary classifier with maximum output is chosen. For EVAI, the correct class is chosen using a majority voting rule, and with OCC a minimum Hamming distance metric is used. In this paper, it is demonstrated how these various methods perform when utilizing the Bayesian Reduction Algorithm (BDRA) as the primary classifier. BDRA is a discrete data classification method that quantizes and reduces the dimensionality of feature data for best classification performance. In this case, BDRA is used not only to train the appropriate binary classifier pairs, but also to train on the discrete classifier outputs to formulate the correct classification decision for unknown data points. In this way, it is demonstrated how to predict which binary-classification-based method (i.e., EVAR, EVAI, or OCC) performs best with BDRA. Experimental results are shown with real data sets taken from the Knowledge Extraction based on Evolutionary Learning (KEEL) and University of California at Irvine (UCI) repositories of classifier databases. In general, and for the data sets considered, it is shown that the best classification method, based on performance with unlabeled test observations, can be predicted from performance on labeled training data. Specifically, the best
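
    For concreteness, a hedged sketch of the three fusion rules named above, independent of the underlying binary classifier (BDRA in the paper): argmax over one-vs-rest scores (EVAR), majority voting over pairwise winners (EVAI), and minimum Hamming distance to a codeword (OCC). Function and argument names are illustrative.

        import numpy as np

        def evar_decide(scores):
            # EVAR (one-vs-rest): scores[k] = output of the "class k vs all remaining" classifier
            return int(np.argmax(scores))

        def evai_decide(pair_votes, n_classes):
            # EVAI (one-vs-one): pair_votes is a list of (i, j, winner) tuples; majority vote
            counts = np.zeros(n_classes)
            for _, _, winner in pair_votes:
                counts[winner] += 1
            return int(np.argmax(counts))

        def occ_decide(bits, codebook):
            # OCC (output coding): choose the class whose codeword is at minimum Hamming distance
            distances = np.sum(codebook != np.asarray(bits), axis=1)
            return int(np.argmin(distances))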

  10. Ground based measurements on reflectance towards validating atmospheric correction algorithms on IRS-P6 AWiFS data

    NASA Astrophysics Data System (ADS)

    Rani Sharma, Anu; Kharol, Shailesh Kumar; Badarinath, K. V. S.; Roy, P. S.

    In Earth observation, the atmosphere has a non-negligible influence on visible and infrared radiation, strong enough to modify the reflected electromagnetic signal and the at-target reflectance. Scattering of solar irradiance by atmospheric molecules and aerosol generates path radiance, which increases the apparent surface reflectance over dark surfaces, while absorption by aerosols and other molecules in the atmosphere causes a loss of brightness in the scene as recorded by the satellite sensor. In order to derive precise surface reflectance from satellite image data, it is indispensable to apply an atmospheric correction that removes the effects of molecular and aerosol scattering. In the present study, we implemented a fast atmospheric correction algorithm for IRS-P6 AWiFS satellite data that can effectively retrieve surface reflectance under different atmospheric and surface conditions. The algorithm is based on MODIS climatology products and a simplified use of the Second Simulation of Satellite Signal in the Solar Spectrum (6S) radiative transfer code, which is used to generate look-up tables (LUTs). The algorithm requires information on aerosol optical depth for correcting the satellite dataset. The proposed method is simple and easy to implement for estimating surface reflectance from the at-sensor recorded signal on a per-pixel basis. The atmospheric correction algorithm was tested on different IRS-P6 AWiFS false colour composites (FCC) covering the ICRISAT Farm, Patancheru, Hyderabad, India, under varying atmospheric conditions. Ground measurements of surface reflectance representing different land use/land cover types, i.e., red soil, chickpea crop, groundnut crop and pigeon pea crop, were conducted to validate the algorithm, and a very good match was found between measured surface reflectance and atmospherically corrected reflectance for all spectral bands. Further, we aggregated all datasets together and compared the retrieved AWiFS reflectance with

  11. Guided filter and adaptive learning rate based non-uniformity correction algorithm for infrared focal plane array

    NASA Astrophysics Data System (ADS)

    Sheng-Hui, Rong; Hui-Xin, Zhou; Han-Lin, Qin; Rui, Lai; Kun, Qian

    2016-05-01

    Imaging non-uniformity of an infrared focal plane array (IRFPA) behaves as fixed-pattern noise superimposed on the image, which seriously affects the imaging quality of the infrared system. In scene-based non-uniformity correction methods, the drawbacks of ghosting artifacts and image blurring seriously affect the sensitivity of the IRFPA imaging system and visibly decrease image quality. This paper proposes an improved neural-network non-uniformity correction method with an adaptive learning rate. On the one hand, using a guided filter, the proposed algorithm decreases the effect of ghosting artifacts. On the other hand, because an inappropriate learning rate is the main cause of image blurring, the proposed algorithm utilizes an adaptive learning rate with a temporal-domain factor to eliminate the effect of image blurring. In short, the proposed algorithm combines the merits of the guided filter and the adaptive learning rate. Several real and simulated infrared image sequences are utilized to verify the performance of the proposed algorithm. The experimental results indicate that the proposed algorithm can not only reduce the non-uniformity with fewer ghosting artifacts but also overcome the problem of image blurring in static areas.
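
    A hedged sketch of a scene-based neural-network NUC update of the general kind discussed above, with a plain box filter standing in for the guided filter and a crude motion-based factor standing in for the temporal-domain learning-rate adaptation; it illustrates the LMS-style gain/offset update, not the authors' exact algorithm.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def nuc_update(frame, prev_frame, gain, offset, base_lr=0.05):
            corrected = gain * frame + offset
            desired = uniform_filter(corrected, size=5)          # spatially smoothed estimate
            error = corrected - desired
            motion = np.abs(frame - prev_frame)
            lr = base_lr * motion / (motion.max() + 1e-6)        # learn less where the scene is static
            gain = gain - lr * error * frame                     # LMS gradient step on the gain
            offset = offset - lr * error                         # LMS gradient step on the offset
            return gain, offset, corrected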

  12. Fast and precise algorithms for calculating offset correction in single photon counting ASICs built in deep sub-micron technologies

    NASA Astrophysics Data System (ADS)

    Maj, P.

    2014-07-01

    An important trend in the design of readout electronics working in the single-photon-counting mode for hybrid pixel detectors is to minimize the single-pixel area without sacrificing functionality. This is why many digital and analog blocks are made with the smallest, or next-to-smallest, transistors possible. This causes a matching problem across the whole pixel matrix, which designers accept and which, of course, should be corrected with dedicated circuitry; that circuitry, by the same rule of minimizing device sizes, itself suffers from mismatch. Therefore, the output of such a correction circuit, controlled by an ultra-small-area DAC, is not only a non-linear function but is also often non-monotonic. As long as it can be used for proper correction of the DC operating points inside each pixel this is acceptable, but the time required for the correction plays an important role both in chip verification and in the design of large, multi-chip systems. Therefore, we present two algorithms: a precise one and a fast one. The first algorithm is based on the noise-hit profiles obtained during so-called threshold-scan procedures. The fast correction procedure is based on a scan of the trim DACs and takes less than a minute in an SPC detector system consisting of several thousand pixels.

  13. A parallel algorithm for error correction in high-throughput short-read data on CUDA-enabled graphics hardware.

    PubMed

    Shi, Haixiang; Schmidt, Bertil; Liu, Weiguo; Müller-Wittig, Wolfgang

    2010-04-01

    Emerging DNA sequencing technologies open up exciting new opportunities for genome sequencing by generating read data with a massive throughput. However, produced reads are significantly shorter and more error-prone compared to the traditional Sanger shotgun sequencing method. This poses challenges for de novo DNA fragment assembly algorithms in terms of both accuracy (to deal with short, error-prone reads) and scalability (to deal with very large input data sets). In this article, we present a scalable parallel algorithm for correcting sequencing errors in high-throughput short-read data so that error-free reads can be available before DNA fragment assembly, which is of high importance to many graph-based short-read assembly tools. The algorithm is based on spectral alignment and uses the Compute Unified Device Architecture (CUDA) programming model. To gain efficiency we are taking advantage of the CUDA texture memory using a space-efficient Bloom filter data structure for spectrum membership queries. We have tested the runtime and accuracy of our algorithm using real and simulated Illumina data for different read lengths, error rates, input sizes, and algorithmic parameters. Using a CUDA-enabled mass-produced GPU (available for less than US$400 at any local computer outlet), this results in speedups of 12-84 times for the parallelized error correction, and speedups of 3-63 times for both sequential preprocessing and parallelized error correction compared to the publicly available Euler-SR program. Our implementation is freely available for download from http://cuda-ec.sourceforge.net .
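
    To illustrate the space-efficient spectrum (k-mer set) membership test mentioned above, here is a hedged, host-side Python sketch of a Bloom filter; the GPU version in the paper stores the bit array in CUDA texture memory, which is not reproduced here, and the sizes and hash scheme below are illustrative.

        import hashlib
        import numpy as np

        class BloomFilter:
            def __init__(self, n_bits=1 << 24, n_hashes=4):
                self.bits = np.zeros(n_bits, dtype=bool)
                self.n_bits, self.n_hashes = n_bits, n_hashes

            def _positions(self, kmer):
                for i in range(self.n_hashes):
                    digest = hashlib.sha256(f"{i}:{kmer}".encode()).digest()
                    yield int.from_bytes(digest[:8], "little") % self.n_bits

            def add(self, kmer):                 # insert a k-mer of the spectrum
                for p in self._positions(kmer):
                    self.bits[p] = True

            def __contains__(self, kmer):        # may give false positives, never false negatives
                return all(self.bits[p] for p in self._positions(kmer))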

  14. A correction scheme for a simplified analytical random walk model algorithm of proton dose calculation in distal Bragg peak regions

    NASA Astrophysics Data System (ADS)

    Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.

    2016-10-01

    The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as the pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed primary proton fluence near heterogeneous media and propose to calculate the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all the lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water and by evaluating distributions in example patients with a head-and-neck tumor and metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm but is dependent upon patient geometry.

  15. A correction algorithm to simultaneously control dual deformable mirrors in a woofer-tweeter adaptive optics system

    PubMed Central

    Li, Chaohong; Sredar, Nripun; Ivers, Kevin M.; Queener, Hope; Porter, Jason

    2010-01-01

    We present a direct slope-based correction algorithm to simultaneously control two deformable mirrors (DMs) in a woofer-tweeter adaptive optics system. A global response matrix was derived from the response matrices of each deformable mirror and the voltages for both deformable mirrors were calculated simultaneously. This control algorithm was tested and compared with a 2-step sequential control method in five normal human eyes using an adaptive optics scanning laser ophthalmoscope. The mean residual total root-mean-square (RMS) wavefront errors across subjects after adaptive optics (AO) correction were 0.128 ± 0.025 μm and 0.107 ± 0.033 μm for simultaneous and 2-step control, respectively (7.75-mm pupil). The mean intensity of reflectance images acquired after AO convergence was slightly higher for 2-step control. Radially-averaged power spectra calculated from registered reflectance images were nearly identical for all subjects using simultaneous or 2-step control. The correction performance of our new simultaneous dual DM control algorithm is comparable to 2-step control, but is more efficient. This method can be applied to any woofer-tweeter AO system. PMID:20721058
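
    A hedged sketch of the simultaneous slope-based control step described above: the woofer and tweeter response (influence) matrices are stacked into one global matrix and a single least-squares solve yields both sets of voltage increments. Matrix shapes and the loop gain are assumptions for illustration, not the authors' calibration.

        import numpy as np

        def simultaneous_dm_control(R_woofer, R_tweeter, slopes, gain=0.5):
            # R_woofer: (n_slopes, n_w), R_tweeter: (n_slopes, n_t), slopes: measured (n_slopes,)
            R_global = np.hstack([R_woofer, R_tweeter])          # global response matrix
            dv, *_ = np.linalg.lstsq(R_global, -slopes, rcond=None)
            dv_woofer, dv_tweeter = dv[:R_woofer.shape[1]], dv[R_woofer.shape[1]:]
            return gain * dv_woofer, gain * dv_tweeter           # voltage increments for both DMs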

  16. Spectrum correction algorithm for detectors in airborne radioactivity monitoring equipment NH-UAV based on a ratio processing method

    NASA Astrophysics Data System (ADS)

    Cao, Ye; Tang, Xiao-Bin; Wang, Peng; Meng, Jia; Huang, Xi; Wen, Liang-Sheng; Chen, Da

    2015-10-01

    The unmanned aerial vehicle (UAV) radiation monitoring method plays an important role in nuclear accident emergencies. In this research, a spectrum correction algorithm for the UAV airborne radioactivity monitoring equipment NH-UAV was studied in order to measure radioactive nuclides within a small area in real time and at a fixed location. The simulated spectra of the high-purity germanium (HPGe) detector and the lanthanum bromide (LaBr3) detector in the equipment were obtained using the Monte Carlo technique. Spectrum correction coefficients were calculated by applying ratio processing to the net peak areas measured by the two detectors, taking the HPGe detection spectrum as the accuracy reference for the LaBr3 detection spectrum. The relationship between the spectrum correction coefficient and the size of the source term was also investigated. A good linear relation exists between the spectrum correction coefficient and the corresponding energy (R2 = 0.9765). The maximum relative deviation from the real condition was reduced from 1.65 to 0.035. The spectrum correction method was thus verified as feasible.
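
    A minimal sketch of the ratio-processing step as described: per-peak correction coefficients are formed from the ratio of net peak areas, with the HPGe spectrum as the reference, and their dependence on energy is summarized with a linear fit. Names and inputs are illustrative placeholders.

        import numpy as np

        def correction_coefficients(net_area_hpge, net_area_labr3, energies):
            coeff = np.asarray(net_area_hpge, float) / np.asarray(net_area_labr3, float)
            slope, intercept = np.polyfit(energies, coeff, 1)     # linear trend vs. energy
            return coeff, slope, intercept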

  17. Dose-calculation algorithms in the context of inhomogeneity corrections for high energy photon beams

    SciTech Connect

    Papanikolaou, Niko; Stathakis, Sotirios

    2009-10-15

    Radiation therapy has witnessed a plethora of innovations and developments in the past 15 years. Since the introduction of computed tomography for treatment planning there has been a steady introduction of new methods to refine treatment delivery. Imaging continues to be an integral part of the planning, but also the delivery, of modern radiotherapy. However, all the efforts of image guided radiotherapy, intensity-modulated planning and delivery, adaptive radiotherapy, and everything else that we pride ourselves in having in the armamentarium can fall short, unless there is an accurate dose-calculation algorithm. The agreement between the calculated and delivered doses is of great significance in radiation therapy since the accuracy of the absorbed dose as prescribed determines the clinical outcome. Dose-calculation algorithms have evolved greatly over the years in an effort to be more inclusive of the effects that govern the true radiation transport through the human body. In this Vision 20/20 paper, we look back to see how it all started and where things are now in terms of dose algorithms for photon beams and the inclusion of tissue heterogeneities. Convolution-superposition algorithms have dominated the treatment planning industry for the past few years. Monte Carlo techniques have an inherent accuracy that is superior to any other algorithm and as such will continue to be the gold standard, along with measurements, and maybe one day will be the algorithm of choice for all particle treatment planning in radiation therapy.

  18. Classifying Human Leg Motions with Uniaxial Piezoelectric Gyroscopes

    PubMed Central

    Tunçel, Orkun; Altun, Kerem; Barshan, Billur

    2009-01-01

    This paper provides a comparative study on the different techniques of classifying human leg motions that are performed using two low-cost uniaxial piezoelectric gyroscopes worn on the leg. A number of feature sets, extracted from the raw inertial sensor data in different ways, are used in the classification process. The classification techniques implemented and compared in this study are: Bayesian decision making (BDM), a rule-based algorithm (RBA) or decision tree, least-squares method (LSM), k-nearest neighbor algorithm (k-NN), dynamic time warping (DTW), support vector machines (SVM), and artificial neural networks (ANN). A performance comparison of these classification techniques is provided in terms of their correct differentiation rates, confusion matrices, computational cost, and training and storage requirements. Three different cross-validation techniques are employed to validate the classifiers. The results indicate that BDM, in general, results in the highest correct classification rate with relatively small computational cost. PMID:22291521

  19. Evaluation of Residual Static Corrections by Hybrid Genetic Algorithm Steepest Ascent Autostatics Inversion. Application to southern Algerian fields

    NASA Astrophysics Data System (ADS)

    Eladj, Said; Bansir, Fateh; Ouadfeul, Sid Ali

    2016-04-01

    The application of a genetic algorithm starts with an initial population of chromosomes representing a "model space". Chromosome chains are preferentially reproduced based on their fitness relative to the total population, so a good chromosome has a greater opportunity to produce offspring than other chromosomes in the population. The advantage of the HGA/SAA combination is the use of a global search approach over a large population of local maxima, which significantly improves the performance of the method. To define the parameters of the Hybrid Genetic Algorithm Steepest Ascent Autostatics (HGA/SAA) job, we first evaluated, by testing the "Steepest Ascent" stage, the optimal parameters related to the data used: (1) the number of hill-climbing iterations, set to 40, which defines the contribution of the "SA" algorithm to this hybrid approach; and (2) the minimum eigenvalue for SA, set to 0.8, which is linked to the data quality and the S/N ratio. To assess the performance of hybrid genetic algorithms in the inversion for estimating residual static corrections, tests were performed to determine the number of generations of HGA/SAA. Using the values of residual static corrections already calculated by the "SAA" and "CSAA" approaches, learning proved very effective in building the cross-correlation table. To determine the optimal number of generations, we conducted a series of tests ranging from 10 to 200 generations. The application to real seismic data from southern Algeria allowed us to judge the performance and capacity of the inversion with this hybrid "HGA/SAA" method. This experience clarified the influence of the quality of the corrections estimated from "SAA/CSAA" and the optimum number of generations of the hybrid genetic algorithm "HGA" required for satisfactory performance. Twenty (20) generations were enough to improve the continuity and resolution of seismic horizons. This will allow
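
    A minimal schematic of the hybrid idea, fitness-proportional (roulette-wheel) reproduction combined with a steepest-ascent (hill-climbing) refinement, is sketched below. The fitness function is a placeholder; in residual-statics inversion it would be a stack-power or cross-correlation measure, and the published HGA/SAA parameterization is not reproduced.

      import numpy as np

      rng = np.random.default_rng(1)

      def fitness(model):
          # Placeholder objective; for residual statics this would be the
          # stack power / cross-correlation measure of the shifted traces.
          return -np.sum((model - 3.0) ** 2)

      def hill_climb(model, step=0.5, n_iter=40):
          """Steepest-ascent refinement (the 'SA' part of the hybrid)."""
          for _ in range(n_iter):
              trials = model + step * rng.normal(size=(20, model.size))
              best = max(trials, key=fitness)
              if fitness(best) > fitness(model):
                  model = best
          return model

      def hybrid_ga(pop_size=30, n_params=10, generations=20):
          pop = rng.normal(size=(pop_size, n_params))
          for _ in range(generations):
              fit = np.array([fitness(m) for m in pop])
              # Fitness-proportional (roulette-wheel) reproduction.
              p = fit - fit.min() + 1e-9
              parents = pop[rng.choice(pop_size, size=pop_size, p=p / p.sum())]
              # One-point crossover plus a small mutation.
              cut = rng.integers(1, n_params)
              children = np.concatenate([parents[::2, :cut], parents[1::2, cut:]], axis=1)
              pop = np.concatenate([parents[:pop_size - len(children)], children])
              pop += 0.05 * rng.normal(size=pop.shape)
              # Local steepest-ascent refinement of the current best individual.
              best_i = int(np.argmax([fitness(m) for m in pop]))
              pop[best_i] = hill_climb(pop[best_i])
          return max(pop, key=fitness)

      print(hybrid_ga())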

  20. Depth-resolved analytical model and correction algorithm for photothermal optical coherence tomography.

    PubMed

    Lapierre-Landry, Maryse; Tucker-Schwartz, Jason M; Skala, Melissa C

    2016-07-01

    Photothermal OCT (PT-OCT) is an emerging molecular imaging technique that occupies a spatial imaging regime between microscopy and whole body imaging. PT-OCT would benefit from a theoretical model to optimize imaging parameters and test image processing algorithms. We propose the first analytical PT-OCT model to replicate an experimental A-scan in homogeneous and layered samples. We also propose the PT-CLEAN algorithm to reduce phase-accumulation and shadowing, two artifacts found in PT-OCT images, and demonstrate it on phantoms and in vivo mouse tumors. PMID:27446693

  1. Depth-resolved analytical model and correction algorithm for photothermal optical coherence tomography

    PubMed Central

    Lapierre-Landry, Maryse; Tucker-Schwartz, Jason M.; Skala, Melissa C.

    2016-01-01

    Photothermal OCT (PT-OCT) is an emerging molecular imaging technique that occupies a spatial imaging regime between microscopy and whole body imaging. PT-OCT would benefit from a theoretical model to optimize imaging parameters and test image processing algorithms. We propose the first analytical PT-OCT model to replicate an experimental A-scan in homogeneous and layered samples. We also propose the PT-CLEAN algorithm to reduce phase-accumulation and shadowing, two artifacts found in PT-OCT images, and demonstrate it on phantoms and in vivo mouse tumors. PMID:27446693

  2. New baseline correction algorithm for text-line recognition with bidirectional recurrent neural networks

    NASA Astrophysics Data System (ADS)

    Morillot, Olivier; Likforman-Sulem, Laurence; Grosicki, Emmanuèle

    2013-04-01

    Many preprocessing techniques have been proposed for isolated word recognition. Recently, however, recognition systems have dealt with text blocks and their constituent text lines. In this paper, we propose a new preprocessing approach to efficiently correct baseline skew and fluctuations. Our approach is based on a sliding window within which the vertical position of the baseline is estimated. Segmentation of text lines into subparts is thus avoided. Experiments conducted on a large publicly available database (Rimes), with a BLSTM (bidirectional long short-term memory) recurrent neural network recognition system, show that our baseline correction approach markedly improves performance.
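
    A simplified sketch of the sliding-window idea follows: a baseline row is estimated inside each window from the ink distribution, interpolated across columns, and each column is shifted to flatten the line. The percentile-based estimator and the column-shift correction are assumptions for illustration; the published estimator and the BLSTM recognizer are not reproduced.

      import numpy as np

      def correct_baseline(img, win=64):
          """img: 2D binary array, ink = 1, background = 0. Returns a de-skewed copy."""
          h, w = img.shape
          cols = np.arange(w)
          centers, baselines = [], []
          # Estimate the baseline inside each sliding window as a high percentile
          # of the ink row indices (i.e. near the bottom of the text core).
          for x0 in range(0, w, win // 2):
              ys, xs = np.nonzero(img[:, x0:x0 + win])
              if len(ys):
                  centers.append(x0 + win / 2)
                  baselines.append(np.percentile(ys, 85))
          base = np.interp(cols, centers, baselines)       # per-column baseline
          shift = np.round(base - np.median(base)).astype(int)
          out = np.zeros_like(img)
          for x in cols:                                   # shift each column vertically
              out[:, x] = np.roll(img[:, x], -shift[x])
          return out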

  3. The Use of Anatomical Information for Molecular Image Reconstruction Algorithms: Attenuation/Scatter Correction, Motion Compensation, and Noise Reduction.

    PubMed

    Chun, Se Young

    2016-03-01

    PET and SPECT are important tools for providing valuable molecular information about patients to clinicians. Advances in nuclear medicine hardware technologies and statistical image reconstruction algorithms enabled significantly improved image quality. Sequentially or simultaneously acquired anatomical images such as CT and MRI from hybrid scanners are also important ingredients for improving the image quality of PET or SPECT further. High-quality anatomical information has been used and investigated for attenuation and scatter corrections, motion compensation, and noise reduction via post-reconstruction filtering and regularization in inverse problems. In this article, we will review works using anatomical information for molecular image reconstruction algorithms for better image quality by describing mathematical models, discussing sources of anatomical information for different cases, and showing some examples. PMID:26941855

  4. A speed of sound aberration correction algorithm for curvilinear ultrasound transducers in ultrasound-based image-guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Fontanarosa, Davide; Pesente, Silvia; Pascoli, Francesco; Ermacora, Denis; Abu Rumeileh, Imad; Verhaegen, Frank

    2013-03-01

    Conventional ultrasound (US) devices use the time of flight (TOF) of reflected US pulses to calculate distances inside the scanned tissues and thus create images. The speed of sound (SOS) is assumed to be constant in all human soft tissues at a generally accepted average value of 1540 m s-1. This assumption is a source of systematic errors up to several millimeters and of image distortion in quantitative US imaging. In this work, an extension of a method recently published by Fontanarosa et al (2011 Med. Phys. 38 2665-73) is presented: the aim is to correct SOS aberrations in three-dimensional (3D) US images in those cases where a spatially co-registered computerized tomography (CT) scan is also available; the algorithm is then applicable to a more general case where the lines of view (LOV) of the US device are not necessarily parallel and coplanar, thus allowing correction also for US transducers other than linear. The algorithm was applied on a multi-modality pelvic US phantom, scanned through three different liquid layers on top of the phantom with different SOS values; the results show that the correction restores a better match between the CT and the US images, reducing the differences to sub-millimeter agreement. Fifteen clinical cases of prostate cancer patients were also investigated: the SOS corrections of prostate centroids were on average +3.1 mm (max + 4.9 mm-min + 1.3 mm). This is in excellent agreement with reports in the literature on differences between measured prostate positions by US and other techniques, where often the discrepancy was attributed to other causes.
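
    The essential remapping step, converting a depth reported under the nominal speed of sound into a depth consistent with a CT-derived SOS profile along one line of view, could look like the sketch below. The step size, the profile helper and the example layer values are assumptions; the published algorithm additionally handles non-parallel, non-coplanar lines of view and full 3D co-registration.

      import numpy as np

      SOS_NOMINAL = 1540.0  # m/s, value assumed by the US scanner

      def correct_depth(nominal_depth_mm, sos_profile, step_mm=0.5):
          """Remap a depth reported by the scanner onto the SOS-corrected depth.

          sos_profile(d_mm) -> local speed of sound (m/s) along this line of view,
          e.g. looked up from a co-registered CT scan (hypothetical helper here).
          """
          # Travel time the scanner inferred from the nominal speed of sound.
          t_target = nominal_depth_mm / SOS_NOMINAL
          # Walk along the line of view, accumulating the true travel time.
          depth, t = 0.0, 0.0
          while t < t_target:
              t += step_mm / sos_profile(depth)
              depth += step_mm
          return depth

      # Example: a 20 mm layer at 1480 m/s on top of soft tissue at 1560 m/s.
      profile = lambda d: 1480.0 if d < 20.0 else 1560.0
      print(correct_depth(60.0, profile))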

  5. Natural and Unnatural Oil Layers on the Surface of the Gulf of Mexico Detected and Quantified in Synthetic Aperture RADAR Images with Texture Classifying Neural Network Algorithms

    NASA Astrophysics Data System (ADS)

    MacDonald, I. R.; Garcia-Pineda, O. G.; Morey, S. L.; Huffer, F.

    2011-12-01

    Effervescent hydrocarbons rise naturally from hydrocarbon seeps in the Gulf of Mexico and reach the ocean surface. This oil forms thin (~0.1 μm) layers that enhance specular reflectivity and have been widely used to quantify the abundance and distribution of natural seeps using synthetic aperture radar (SAR). An analogous process occurred at a vastly greater scale for oil and gas discharged from BP's Macondo well blowout. SAR data allow direct comparison of the areas of the ocean surface covered by oil from natural sources and the discharge. We used a texture classifying neural network algorithm to quantify the areas of naturally occurring oil-covered water in 176 SAR image collections from the Gulf of Mexico obtained between May 1997 and November 2007, prior to the blowout. Separately, we also analyzed 36 SAR image collections obtained between 26 April and 30 July 2010, while the discharged oil was visible in the Gulf of Mexico. For the naturally occurring oil, we removed pollution events and transient oceanographic effects by including only the reflectance anomalies that recurred in the same locality over multiple images. We measured the area of oil layers in a grid of 10x10 km cells covering the entire Gulf of Mexico. Floating oil layers were observed in only a fraction of the total Gulf area amounting to 1.22x10^5 km^2. In a bootstrap sample of 2000 replications, the combined average area of these layers was 7.80x10^2 km^2 (sd 86.03). For a regional comparison, we divided the Gulf of Mexico into four quadrants along 90° W longitude and 25° N latitude. The NE quadrant, where the BP discharge occurred, received on average 7.0% of the total natural seepage in the Gulf of Mexico (5.24 x10^2 km^2, sd 21.99); the NW quadrant received on average 68.0% of this total (5.30 x10^2 km^2, sd 69.67). The BP blowout occurred in the NE quadrant of the Gulf of Mexico; discharged oil that reached the surface drifted over a large area north of 25° N. Performing a
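
    The bootstrap step for the mean oil-covered area can be sketched as below, with placeholder per-scene areas standing in for the per-cell measurements aggregated in the study.

      import numpy as np

      rng = np.random.default_rng(2)

      # Placeholder: oil-layer area (km^2) measured per SAR scene; the study
      # aggregated per-cell areas over 176 image collections.
      areas = rng.gamma(shape=2.0, scale=400.0, size=176)

      boot_means = np.array([
          rng.choice(areas, size=len(areas), replace=True).mean()
          for _ in range(2000)                  # 2000 bootstrap replications
      ])
      print(f"mean area = {boot_means.mean():.1f} km^2 (sd {boot_means.std(ddof=1):.2f})")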

  6. An improved DS acoustic-seismic modality fusion algorithm based on a new cascaded fuzzy classifier for ground-moving targets classification in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Pan, Qiang; Wei, Jianming; Cao, Hongbing; Li, Na; Liu, Haitao

    2007-04-01

    A new cascaded fuzzy classifier (CFC) is proposed to implement ground-moving target classification tasks locally at sensor nodes in wireless sensor networks (WSN). The CFC is composed of three binary fuzzy classifiers (BFCs) in the seismic signal channel and two in the acoustic signal channel, in order to classify persons, light-wheeled (LW) vehicles, and heavy-wheeled (HW) vehicles in the presence of environmental background noise. Based on the CFC, a new basic belief assignment (bba) function is defined for each component BFC to output a piece of evidence instead of a hard decision label. An evidence generator synthesizes the available evidence from the BFCs into channel evidence, and the channel evidence is further temporally fused. Finally, acoustic-seismic modality fusion using the Dempster-Shafer method is performed. Our implementation gives significantly better performance than an implementation with a majority-voting fusion method in leave-one-out experiments.

  7. Static scene statistical algorithm for nonuniformity correction in focal-plane arrays

    NASA Astrophysics Data System (ADS)

    Catarius, Adrian M.; Seal, Michael D.

    2015-10-01

    A static scene statistical nonuniformity correction (S3NUC) method was developed based on the higher-order moments of a linear statistical model of a photodetection process. S3NUC relieves the requirement for calibrated targets or a moving scene for NUC by utilizing two data sets of different intensities but requires low scene intensity levels. The first-, second-, and third-order moments of the two data sets are used to estimate the gain and bias values for the detectors in a focal-plane array (FPA). These gain and bias values may then be used to correct the nonuniformities between detectors or to initialize other continuous calibration methods. S3NUC was successfully applied to simulated data as well as measured data at visible wavelengths.
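
    One way the first three moments can yield gain and bias is under a Poisson photodetection model: for x = gain*counts + bias the second and third central moments are gain^2*lambda and gain^3*lambda, so their ratio isolates the gain. The sketch below illustrates that estimator on simulated data for a single detector; this is an assumption for illustration, and the published S3NUC estimator, which combines two data sets of different intensities, may differ in detail.

      import numpy as np

      rng = np.random.default_rng(3)

      # Simulated low-intensity frames for one detector: counts ~ Poisson(lam),
      # read out as  x = gain * counts + bias  (true gain 2.0, bias 10.0).
      lam, gain_true, bias_true = 30.0, 2.0, 10.0
      x = gain_true * rng.poisson(lam, size=20000) + bias_true

      m1 = x.mean()
      m2 = np.mean((x - m1) ** 2)     # 2nd central moment = gain^2 * lam
      m3 = np.mean((x - m1) ** 3)     # 3rd central moment = gain^3 * lam

      gain_est = m3 / m2              # ratio isolates one factor of gain
      lam_est  = m2 / gain_est ** 2
      bias_est = m1 - gain_est * lam_est
      print(gain_est, bias_est)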

  8. A smart phone-based robust correction algorithm for the colorimetric detection of Urinary Tract Infection.

    PubMed

    Karlsen, Haakon; Tao Dong

    2015-08-01

    This paper presents the preliminary work of developing a smart phone based application for colorimetric detection of Urinary Tract Infection. The purpose is to make a smart phone function as a practical point-of-care device for nurses or medical personnel without access to strip readers. The main challenge is the constancy of camera color perception across different illuminations and devices, which is the first step towards a practical solution without additional equipment. A reported black and white reference correction and a comprehensive color image normalization have been utilized in this work. Comprehensive color image normalization appears to be quite effective at correcting the difference in perceived color due to different illumination, and is therefore a candidate for inclusion in the further work. PMID:26736494

  9. Lorentz force correction to the Boltzmann radiation transport equation and its implications for Monte Carlo algorithms.

    PubMed

    Bouchard, Hugo; Bielajew, Alex

    2015-07-01

    To establish a theoretical framework for generalizing Monte Carlo transport algorithms by adding external electromagnetic fields to the Boltzmann radiation transport equation in a rigorous and consistent fashion. Using first principles, the Boltzmann radiation transport equation is modified by adding a term describing the variation of the particle distribution due to the Lorentz force. The implications of this new equation are evaluated by investigating the validity of Fano's theorem. Additionally, Lewis' approach to multiple scattering theory in infinite homogeneous media is redefined to account for the presence of external electromagnetic fields. The equation is modified and yields a description consistent with the deterministic laws of motion as well as probabilistic methods of solution. The time-independent Boltzmann radiation transport equation is generalized to account for the electromagnetic forces in an additional operator similar to the interaction term. Fano's and Lewis' approaches are stated in this new equation. Fano's theorem is found not to apply in the presence of electromagnetic fields. Lewis' theory for electron multiple scattering and moments, accounting for the coupling between the Lorentz force and multiple elastic scattering, is found. However, further investigation is required to develop useful algorithms for Monte Carlo and deterministic transport methods. To test the accuracy of Monte Carlo transport algorithms in the presence of electromagnetic fields, the Fano cavity test, as currently defined, cannot be applied. Therefore, new tests must be designed for this specific application. A multiple scattering theory that accurately couples the Lorentz force with elastic scattering could improve Monte Carlo efficiency. The present study proposes a new theoretical framework to develop such algorithms.
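
    Schematically, and not in the authors' exact notation, the modified time-independent transport equation adds a Lorentz-force streaming term in momentum to the usual spatial streaming, collision, and in-scattering terms:

      \[
        \boldsymbol{\Omega}\cdot\nabla_{\mathbf r}\,\psi(\mathbf r,\mathbf p)
        \;+\; q\,(\mathbf E + \mathbf v\times\mathbf B)\cdot\nabla_{\mathbf p}\,\psi(\mathbf r,\mathbf p)
        \;+\; \sigma_t(\mathbf r,\mathbf p)\,\psi(\mathbf r,\mathbf p)
        \;=\; \int \sigma_s(\mathbf r,\mathbf p'\!\to\!\mathbf p)\,\psi(\mathbf r,\mathbf p')\,\mathrm d\mathbf p'
        \;+\; S(\mathbf r,\mathbf p),
      \]

    where the second term is the added operator describing the drift of the particle distribution under the Lorentz force, and S is the source term.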

  10. Description and comparison of algorithms for correcting anisotropic magnification in cryo-EM images.

    PubMed

    Zhao, Jianhua; Brubaker, Marcus A; Benlekbir, Samir; Rubinstein, John L

    2015-11-01

    Single particle electron cryomicroscopy (cryo-EM) allows for structures of proteins and protein complexes to be determined from images of non-crystalline specimens. Cryo-EM data analysis requires electron microscope images of randomly oriented ice-embedded protein particles to be rotated and translated to allow for coherent averaging when calculating three-dimensional (3D) structures. Rotation of 2D images is usually done with the assumption that the magnification of the electron microscope is the same in all directions. However, due to electron optical aberrations, this condition is not met with some electron microscopes when used with the settings necessary for cryo-EM with a direct detector device (DDD) camera. Correction of images by linear interpolation in real space has allowed high-resolution structures to be calculated from cryo-EM images for symmetric particles. Here we describe and compare a simple real space method, a simple Fourier space method, and a somewhat more sophisticated Fourier space method to correct images for a measured anisotropy in magnification. Further, anisotropic magnification causes contrast transfer function (CTF) parameters estimated from image power spectra to have an apparent systematic astigmatism. To address this problem we develop an approach to adjust CTF parameters measured from distorted images so that they can be used with corrected images. The effect of anisotropic magnification on CTF parameters provides a simple way of detecting magnification anisotropy in cryo-EM datasets.
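
    A minimal real-space correction consistent with the description above resamples each image through the measured anisotropic stretch using linear interpolation. The scale factors, distortion-axis angle, and the coordinate convention (a feature at true position p recorded at D p) are assumptions for illustration, not the published implementation.

      import numpy as np
      from scipy.ndimage import affine_transform

      def correct_anisotropy(img, scale_major, scale_minor, angle_deg):
          """Undo an anisotropic magnification (real space, linear interpolation)."""
          a = np.deg2rad(angle_deg)
          R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
          S = np.diag([scale_major, scale_minor])
          distortion = R @ S @ R.T          # measured stretch of the microscope
          centre = (np.array(img.shape) - 1) / 2
          # affine_transform samples the input at matrix @ o + offset for each
          # output coordinate o; under the convention above, passing the measured
          # distortion matrix pulls each feature back to its undistorted position
          # (the sign of the measured anisotropy may need flipping in practice).
          offset = centre - distortion @ centre
          return affine_transform(img, distortion, offset=offset, order=1)

      # e.g. ~2% stretch along an axis 30 degrees from the row axis (placeholders):
      # corrected = correct_anisotropy(micrograph, 1.02, 1.00, 30.0)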

  12. Stack filter classifiers

    SciTech Connect

    Porter, Reid B; Hush, Don

    2009-01-01

    Just as linear models generalize the sample mean and weighted average, weighted order statistic models generalize the sample median and weighted median. This analogy can be continued informally to generalized additive models in the case of the mean, and Stack Filters in the case of the median. Both of these model classes have been extensively studied for signal and image processing, but it is surprising to find that for pattern classification their treatment has been significantly one-sided. Generalized additive models are now a major tool in pattern classification and many different learning algorithms have been developed to fit model parameters to finite data. However, Stack Filters remain largely confined to signal and image processing and learning algorithms for classification are yet to be seen. This paper is a step towards Stack Filter Classifiers and it shows that the approach is interesting from both a theoretical and a practical perspective.

  13. Learn ++.NC: combining ensemble of classifiers with dynamically weighted consult-and-vote for efficient incremental learning of new classes.

    PubMed

    Muhlbaier, Michael D; Topalis, Apostolos; Polikar, Robi

    2009-01-01

    We have previously introduced an incremental learning algorithm Learn(++), which learns novel information from consecutive data sets by generating an ensemble of classifiers with each data set, and combining them by weighted majority voting. However, Learn(++) suffers from an inherent "outvoting" problem when asked to learn a new class omega(new) introduced by a subsequent data set, as earlier classifiers not trained on this class are guaranteed to misclassify omega(new) instances. The collective votes of earlier classifiers, for an inevitably incorrect decision, then outweigh the votes of the new classifiers' correct decision on omega(new) instances--until there are enough new classifiers to counteract the unfair outvoting. This forces Learn(++) to generate an unnecessarily large number of classifiers. This paper describes Learn(++).NC, specifically designed for efficient incremental learning of multiple new classes using significantly fewer classifiers. To do so, Learn(++).NC introduces dynamically weighted consult and vote (DW-CAV), a novel voting mechanism for combining classifiers: individual classifiers consult with each other to determine which ones are most qualified to classify a given instance, and decide how much weight, if any, each classifier's decision should carry. Experiments on real-world problems indicate that the new algorithm performs remarkably well with substantially fewer classifiers, not only as compared to its predecessor Learn(++), but also as compared to several other algorithms recently proposed for similar problems. PMID:19109088
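
    The combination step can be illustrated by a plain weighted-majority vote, as sketched below with hypothetical labels and weights; the actual DW-CAV mechanism goes further, letting classifiers consult each other to down-weight or zero out the votes of those deemed unqualified for a given instance.

      import numpy as np

      def weighted_majority_vote(predictions, weights, n_classes):
          """predictions: (n_classifiers,) class labels for one instance.
          weights:     (n_classifiers,) non-negative voting weights."""
          tally = np.zeros(n_classes)
          for label, w in zip(predictions, weights):
              tally[label] += w
          return int(np.argmax(tally))

      # Three classifiers vote on a 4-class problem; the third (trained on the
      # newest data, hence presumed more qualified here) carries more weight.
      print(weighted_majority_vote(np.array([0, 0, 2]), np.array([0.4, 0.4, 1.5]), 4))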

  15. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization.

    PubMed

    Raghunath, N; Faber, T L; Suryanarayanan, S; Votaw, J R

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.

  16. Nonuniformity correction algorithm based on a noise-cancellation system for infrared focal-plane arrays

    NASA Astrophysics Data System (ADS)

    Godoy, Sebastian E.; Torres, Sergio N.; Pezoa, Jorge E.; Hayat, Majeed M.; Wang, Qi

    2007-04-01

    In this paper a novel nonuniformity correction (NUC) method that compensates for the fixed-pattern noise (FPN) in infrared focal-plane array (IRFPA) sensors is developed. The proposed NUC method compensates for the additive component of the FPN by statistically processing the read-out signal with a noise-cancellation system. The main assumption of the method is that a source of noise correlated with the additive noise of the IRFPA is available to the system. Under this assumption, a finite impulse response (FIR) filter is designed to synthesize an estimate of the additive noise. Moreover, exploiting the fact that the assumed noise source is constant in time, we derive a simple expression to calculate the estimate of the additive noise. Finally, the estimate is subtracted from the raw IR imagery to obtain the corrected version of the images. The performance of the proposed system and its ability to compensate for the FPN are tested with infrared images corrupted by both real and simulated nonuniformity.

  17. High Performance Medical Classifiers

    NASA Astrophysics Data System (ADS)

    Fountoukis, S. G.; Bekakos, M. P.

    2009-08-01

    In this paper, parallelization methodologies for mapping rules derived by machine learning algorithms onto both software and hardware are investigated. Feeding these algorithms with patient disease data yields medical diagnostic decision trees and their corresponding rules. These rules can be mapped onto multithreaded object-oriented programs and hardware chips. The programs can simulate the working of the chips and can exhibit the inherent parallelism of the chip design. The circuit of a chip can consist of many blocks operating concurrently on various parts of the whole circuit. Threads and inter-thread communication can be used to simulate the blocks of the chips and the combination of block output signals. The chips and the corresponding parallel programs constitute medical classifiers, which can classify new patient instances. Measurements taken from patients can be fed both into the chips and the parallel programs and recognized according to the classification rules incorporated in the chip and program design. The chips and programs constitute medical decision support systems and can be incorporated into portable micro devices, assisting physicians in their everyday diagnostic practice.

  18. A Physically Based Algorithm for Non-Blackbody Correction of Cloud-Top Temperature and Application to Convection Study

    NASA Technical Reports Server (NTRS)

    Wang, Chunpeng; Lou, Zhengzhao Johnny; Chen, Xiuhong; Zeng, Xiping; Tao, Wei-Kuo; Huang, Xianglei

    2014-01-01

    Cloud-top temperature (CTT) is an important parameter for convective clouds and is usually different from the 11-micrometer brightness temperature due to non-blackbody effects. This paper presents an algorithm for estimating convective CTT by using simultaneous passive [Moderate Resolution Imaging Spectroradiometer (MODIS)] and active [CloudSat and Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO)] measurements of clouds to correct for the non-blackbody effect. To do this, a weighting function of the MODIS 11-micrometer band is explicitly calculated by feeding cloud hydrometeor profiles from CloudSat and CALIPSO retrievals and temperature and humidity profiles based on ECMWF analyses into a radiative transfer model. Among 16,837 tropical deep convective clouds observed by CloudSat in 2008, the average effective emission level (EEL) of the 11-micrometer channel is located at an optical depth of approximately 0.72, with a standard deviation of 0.3. The distance between the EEL and the cloud-top height determined by CloudSat is shown to be related to a parameter called cloud-top fuzziness (CTF), defined as the vertical separation between 230 and 10 dBZ of CloudSat radar reflectivity. On the basis of these findings, a relationship is then developed between the CTF and the difference between the MODIS 11-micrometer brightness temperature and the physical CTT, the latter being the non-blackbody correction of CTT. The correction of the non-blackbody effect of CTT is applied to analyze convective cloud-top buoyancy. With this correction, about 70% of the convective cores observed by CloudSat in the height range of 6-10 km have positive buoyancy near cloud top, meaning the clouds are still growing vertically, although their final fate cannot be determined from snapshot observations.

  19. Development of a Multiview Time Domain Imaging Algorithm (MTDI) with a Fermat Correction

    SciTech Connect

    Fisher, K A; Lehman, S K; Chambers, D H

    2004-09-22

    An imaging algorithm is presented based on the standard assumption that the total scattered field can be separated into an elastic component with monopole-like dependence and an inertial component with dipole-like dependence. The resulting inversion generates two separate image maps corresponding to the monopole and dipole terms of the forward model. The complexity of imaging flaws and defects in layered elastic media is further compounded by the existence of high contrast gradients in either sound speed and/or density from layer to layer. To compensate for these gradients, we have incorporated Fermat's method of least time into our forward model to determine the appropriate delays between individual source-receiver pairs. Preliminary numerical and experimental results are in good agreement with each other.

  20. Simplified ASE correction algorithm for variable gain-flattened erbium-doped fiber amplifier.

    PubMed

    Mahdi, Mohd Adzir; Sheih, Shou-Jong; Adikan, Faisal Rafiq Mahamd

    2009-06-01

    We demonstrate a simplified algorithm to account for the contribution of amplified spontaneous emission (ASE) in a variable gain-flattened erbium-doped fiber amplifier (EDFA). The detected signal power at the input and output ports of the EDFA comprises both signal and noise. The ASE generated by the EDFA cannot be distinguished by the photodetector, which leads to underestimation of the targeted gain value. This gain penalty must be taken into consideration in order to obtain the accurate gain level. By taking the average gain penalty within the dynamic gain range, the targeted output power is set higher than the desired level. As a result, the errors are reduced to less than 0.15 dB over the desired gain range of 15 dB to 30 dB.
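
    In its simplest form the correction is a dB-domain offset of the control target, as sketched below with placeholder numbers; in the published algorithm the average penalty would be derived from the amplifier's characterized ASE over the 15-30 dB gain range.

      # Simplified view of the correction: the photodetected output power includes
      # ASE, so the control target is raised by the average gain penalty measured
      # over the dynamic gain range (numbers below are placeholders).
      def corrected_output_target_dbm(input_power_dbm, desired_gain_db,
                                      avg_ase_penalty_db=0.4):
          return input_power_dbm + desired_gain_db + avg_ase_penalty_db

      # e.g. -20 dBm total input power, 25 dB desired gain
      print(corrected_output_target_dbm(-20.0, 25.0))   # -> 5.4 dBm target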

  1. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms.

    PubMed

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M; Asma, Evren; Kinahan, Paul E; De Man, Bruno

    2015-10-01

    For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.04375 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-square error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction with little or no measurable effect on the PET image. For the four ultra-low dose levels
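
    For reference, the PWLS objective with a smoothed ("approximate") total-variation penalty mentioned above has the generic form below; the notation is the usual one for this family of methods, not necessarily the authors':

      \[
        \hat{\mathbf x} \;=\; \arg\min_{\mathbf x \ge 0}\;
        \tfrac{1}{2}\,(\mathbf y - \mathbf A\mathbf x)^{\mathsf T}\,\mathbf W\,(\mathbf y - \mathbf A\mathbf x)
        \;+\; \beta \sum_j \sqrt{\lVert (\nabla\mathbf x)_j \rVert^{2} + \epsilon^{2}},
      \]

    where y is the sparse-view sinogram, A the forward projection operator, W a diagonal statistical weighting, the sum is a smoothed total-variation penalty over voxel gradients, and beta controls the regularization strength.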

  3. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M.; Asma, Evren; Kinahan, Paul E.; De Man, Bruno

    2015-09-01

    For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.04375 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-square error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction with little or no measurable effect on the PET image. For the four ultra-low dose

  4. Extracting predictive SNPs in Crohn's disease using a vacillating genetic algorithm and a neural classifier in case-control association studies.

    PubMed

    Anekboon, Khantharat; Lursinsap, Chidchanok; Phimoltares, Suphakant; Fucharoen, Suthat; Tongsima, Sissades

    2014-01-01

    Crohn's disease is an inflammatory bowel disease. Because of strong heritability, it is possible to deploy the pattern of DNA variations, such as single nucleotide polymorphisms (SNPs), to accurately predict the state of this disease. However, there are many possible SNP subsets, which makes finding the best set of SNPs to achieve the highest prediction accuracy impossible in one patient's lifetime. In this paper, a new technique is proposed that relies on chromosomes of various lengths with significant order feature selection, a new cross-over approach, and new mutation operations. Our method can find a chromosome of appropriate length with useful features. The Crohn's disease data that were gathered from case-control association studies were used to demonstrate the effectiveness of our proposed algorithm. In terms of the prediction accuracy, the proposed SNP prediction framework outperformed previously proposed techniques, including the optimum random forest (ORF), the univariate marginal distribution algorithm and support vector machine (USVM), the complementary greedy search-based prediction algorithm (CGSP), the combinatorial search-based prediction algorithm (CSP), and discretized network flow (DNF). The performance of our framework, when tested against this real data set with a 5-fold cross-validation, was 90.4% accuracy with 87.5% sensitivity and 92.2% specificity.

  5. Is there a best classifier?

    NASA Astrophysics Data System (ADS)

    Richards, John

    2005-10-01

    The question of whether there is a preferred or best classifier to use with remotely sensed data is discussed, focussing on likely results and ease of training. By appealing in part to the No Free Lunch Theorem, it is suggested that there is really no superiority of one well-trained algorithm over another, but rather it is the means by which the algorithm is employed - i.e., the classification methodology - that often governs the outcomes.

  6. Baseflow separation based on a meteorology-corrected nonlinear reservoir algorithm in a typical rainy agricultural watershed

    NASA Astrophysics Data System (ADS)

    He, Shengjia; Li, Shuang; Xie, Runting; Lu, Jun

    2016-04-01

    A baseflow separation model called the meteorology-corrected nonlinear reservoir algorithm (MNRA) was developed by combining a nonlinear reservoir algorithm with a meteorological regression model, in which the effects of meteorological factors on daily baseflow recession are fully expressed. Using MNRA and monitored daily streamflow and meteorological data (including precipitation, evaporation, wind speed, water vapor pressure and relative humidity) from 2003 to 2012, we determined the daily, monthly, and yearly variations in baseflow from the ChangLe River watershed, a typical rainy agricultural watershed in eastern China. Results showed that the estimated annual baseflow of the ChangLe River watershed varied from 18.8 cm (2004) to 61.9 cm (2012) with an average of 35.7 cm, and the baseflow index (the ratio of baseflow to streamflow) varied from 0.58 (2007) to 0.74 (2003) with an average of 0.65. Comparative analysis of different methods showed that the meteorological regression statistical model was a better alternative to the Fourier fitted curve for daily recession parameter estimation. Thus, the reliability and accuracy of the baseflow separation were markedly improved by MNRA, i.e., the Nash-Sutcliffe efficiency increased from 0.90 to 0.98. Compared with the Kalinin and Eckhardt recursive digital filter methods, the MNRA approach was usually more sensitive to the baseflow response to precipitation and obtained a higher goodness-of-fit for streamflow recession, especially in areas with high-level shallow groundwater and frequent rain.
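
    The Nash-Sutcliffe efficiency used as the goodness-of-fit measure above is computed as sketched below (the streamflow values are placeholders).

      import numpy as np

      def nash_sutcliffe(observed, simulated):
          """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means no better
          than predicting the observed mean."""
          observed, simulated = np.asarray(observed), np.asarray(simulated)
          return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
              (observed - observed.mean()) ** 2)

      # e.g. a short daily streamflow recession (placeholder values, mm/day)
      print(nash_sutcliffe([2.0, 1.6, 1.3, 1.1], [1.9, 1.7, 1.25, 1.05]))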

  7. Adaptive scene-based correction algorithm for removal of residual fixed pattern noise in microgrid image data

    NASA Astrophysics Data System (ADS)

    Ratliff, Bradley M.; LeMaster, Daniel A.

    2012-06-01

    Pixel-to-pixel response nonuniformity is a common problem that affects nearly all focal plane array sensors. This results in a frame-to-frame fixed pattern noise (FPN) that causes an overall degradation in collected data. FPN is often compensated for through the use of blackbody calibration procedures; however, FPN is a particularly challenging problem because the detector responsivities drift relative to one another in time, requiring that the sensor be recalibrated periodically. The calibration process is obstructive to sensor operation and is therefore only performed at discrete intervals in time. Thus, any drift that occurs between calibrations (along with error in the calibration sources themselves) causes varying levels of residual calibration error to be present in the data at all times. Polarimetric microgrid sensors are particularly sensitive to FPN due to the spatial differencing involved in estimating the Stokes vector images. While many techniques exist in the literature to estimate FPN for conventional video sensors, few have been proposed to address the problem in microgrid imaging sensors. Here we present a scene-based nonuniformity correction technique for microgrid sensors that is able to reduce residual fixed pattern noise while preserving radiometry under a wide range of conditions. The algorithm requires a low number of temporal data samples to estimate the spatial nonuniformity and is computationally efficient. We demonstrate the algorithm's performance using real data from the AFRL PIRATE and University of Arizona LWIR microgrid sensors.

  8. An algorithm to correct 2D near-infrared fluorescence signals using 3D intravascular ultrasound architectural information

    NASA Astrophysics Data System (ADS)

    Mallas, Georgios; Brooks, Dana H.; Rosenthal, Amir; Vinegoni, Claudio; Calfon, Marcella A.; Razansky, R. Nika; Jaffer, Farouc A.; Ntziachristos, Vasilis

    2011-03-01

    Intravascular Near-Infrared Fluorescence (NIRF) imaging is a promising imaging modality to image vessel biology and high-risk plaques in vivo. We have developed a NIRF fiber optic catheter and have demonstrated the ability to image atherosclerotic plaques in vivo using appropriate NIR fluorescent probes. Our catheter consists of an optical fiber with a 100/140 μm core/cladding diameter housed in polyethylene tubing, emitting NIR laser light at a 90 degree angle relative to the fiber's axis. The system utilizes a rotational and a translational motor for true 2D imaging and operates in conjunction with a coaxial intravascular ultrasound (IVUS) device. IVUS datasets provide 3D images of the internal structure of arteries and are used in our system for anatomical mapping. Using the IVUS images, we are building an accurate hybrid fluorescence-IVUS data inversion scheme that takes into account photon propagation through the blood-filled lumen. This hybrid imaging approach can then correct for the non-linear dependence of light intensity on the distance of the fluorescence region from the fiber tip, leading to quantitative imaging. The experimental and algorithmic developments are presented and the effectiveness of the algorithm is showcased with experimental results in both saline and blood-like preparations. The combined structural and molecular information obtained from these two imaging modalities is positioned to enable the accurate diagnosis of biologically high-risk atherosclerotic plaques in the coronary arteries that are responsible for heart attacks.

  9. SU-E-T-477: An Efficient Dose Correction Algorithm Accounting for Tissue Heterogeneities in LDR Brachytherapy

    SciTech Connect

    Mashouf, S; Lai, P; Karotki, A; Keller, B; Beachey, D; Pignol, J

    2014-06-01

    Purpose: Seed brachytherapy is currently used for adjuvant radiotherapy of early stage prostate and breast cancer patients. The current standard for calculating the dose surrounding brachytherapy seeds is the American Association of Physicists in Medicine Task Group No. 43 (TG-43) formalism, which generates the dose in a homogeneous water medium. Recently, AAPM Task Group No. 186 emphasized the importance of accounting for tissue heterogeneities. This can be done using Monte Carlo (MC) methods, but it requires knowing the source structure and tissue atomic composition accurately. In this work we describe an efficient analytical dose inhomogeneity correction algorithm, implemented on the MIM Symphony treatment planning platform, to calculate dose distributions in heterogeneous media. Methods: An Inhomogeneity Correction Factor (ICF) is introduced as the ratio of the absorbed dose in tissue to that in water medium. The ICF is a function of tissue properties and independent of source structure. The ICF is extracted using CT images, and the absorbed dose in tissue can then be calculated by multiplying the dose as calculated by the TG-43 formalism by the ICF. To evaluate the methodology, we compared our results with Monte Carlo simulations as well as experiments in phantoms with known density and atomic compositions. Results: The dose distributions obtained by applying the ICF to the TG-43 protocol agreed very well with those of Monte Carlo simulations as well as experiments in all phantoms. In all cases, the mean relative error was reduced by at least 50% when the ICF correction factor was applied to the TG-43 protocol. Conclusion: We have developed a new analytical dose calculation method which enables personalized dose calculations in heterogeneous media. The advantages over stochastic methods are computational efficiency and the ease of integration into the clinical setting, as detailed source structure and tissue segmentation are not needed. University of Toronto, Natural Sciences and
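
    The core of the method is a voxel-wise multiplication of the TG-43 dose grid by the ICF map, as sketched below with placeholder values; in practice the ICF would be derived from the CT-based tissue properties.

      import numpy as np

      # Dose grid from a TG-43 (homogeneous water) calculation, in Gy (placeholder).
      dose_tg43 = np.array([[72.0, 60.5], [55.0, 48.2]])

      # Inhomogeneity correction factor per voxel, extracted from the CT images
      # (ratio of dose in tissue to dose in water; placeholder values).
      icf = np.array([[0.96, 1.02], [1.05, 0.99]])

      dose_tissue = dose_tg43 * icf   # heterogeneity-corrected dose
      print(dose_tissue)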

  10. Effects of defect pixel correction algorithms for x-ray detectors on image quality in planar projection and volumetric CT data sets

    NASA Astrophysics Data System (ADS)

    Kuttig, Jan; Steiding, Christian; Hupfer, Martin; Karolczak, Marek; Kolditz, Daniel

    2015-09-01

    In this study we compared various defect pixel correction methods for reducing artifact appearance within projection images used for computed tomography (CT) reconstructions. Defect pixel correction algorithms were examined with respect to their artifact behaviour within planar projection images as well as in volumetric CT reconstructions. We investigated four algorithms: nearest neighbour, linear and adaptive linear interpolation, and a frequency-selective spectral-domain approach. To characterise the quality of each algorithm in planar image data, we inserted line defects of varying widths and orientations into images. The structure preservation of each algorithm was analysed by corrupting and correcting the image of a slit phantom pattern and by evaluating its line spread function (LSF). The noise preservation was assessed by interpolating corrupted flat images and estimating the noise power spectrum (NPS) of the interpolated region. For the volumetric investigations, we examined the structure and noise preservation within a structured aluminium foam, a mid-contrast cone-beam phantom and a homogeneous Polyurethane (PUR) cylinder. The frequency-selective algorithm showed the best structure and noise preservation for planar data of the correction methods tested. For volumetric data it still showed the best noise preservation, whereas the structure preservation was outperformed by the linear interpolation. The frequency-selective spectral-domain approach in the correction of line defects is recommended for planar image data, but its abilities within high-contrast volumes are restricted. In that case, the application of a simple linear interpolation might be the better choice to correct line defects within projection images used for CT.
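
    The simplest of the compared strategies, linear interpolation across a defective detector column from its nearest good neighbours, can be sketched as below; the frequency-selective spectral-domain method recommended for planar data is not reproduced here.

      import numpy as np

      def correct_defect_columns(proj, defect_cols):
          """Replace defective detector columns of a projection image by linear
          interpolation from the nearest good columns on either side."""
          out = proj.astype(float).copy()
          good = np.setdiff1d(np.arange(proj.shape[1]), defect_cols)
          for row in range(proj.shape[0]):
              out[row, defect_cols] = np.interp(defect_cols, good, proj[row, good])
          return out

      # e.g. a two-pixel-wide vertical line defect at columns 120-121:
      # corrected = correct_defect_columns(projection, np.array([120, 121]))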

  11. Matrix and position correction of shuffler assays by application of the alternating conditional expectation algorithm to shuffler data

    SciTech Connect

    Pickrell, M M; Rinard, P M

    1992-01-01

    The {sup 252}Cf shuffler assays fissile uranium and plutonium using active neutron interrogation and then counting the induced delayed neutrons. Using the shuffler, we conducted over 1700 assays of 55-gal. drums with 28 different matrices and several different fissionable materials. We measured the drums to determine the matrix and position effects on {sup 252}Cf shuffler assays. We used several neutron flux monitors during irradiation and kept statistics on the count rates of individual detector banks. The intent of these measurements was to gauge the effect of the matrix independently from the uranium assay. Although shufflers have previously been equipped with neutron monitors, the functional relationship between the flux monitor signals and the matrix-induced perturbation has been unknown. There are several flux monitors, so the problem is multivariate, and the response is complicated. Conventional regression techniques cannot address complicated multivariate problems unless the underlying functional form and approximate parameter values are known in advance; neither was available in this case. To address this problem, we used a new technique called alternating conditional expectations (ACE), which requires neither the functional relationship nor the initial parameters. The ACE algorithm develops the functional form and performs a numerical regression from only the empirical data. We applied the ACE algorithm to the shuffler-assay and flux-monitor data and developed an analytic function for the matrix correction. This function was optimized using conventional multivariate techniques. We were able to reduce the matrix-induced bias error for homogeneous samples to 12.7%. The bias error for inhomogeneous samples was reduced to 13.5%. These results used only a few adjustable parameters compared to the number of available data points; the data were not "over fit," but rather the results are general and robust.

  12. [A New HAC Unsupervised Classifier Based on Spectral Harmonic Analysis].

    PubMed

    Yang, Ke-ming; Wei, Hua-feng; Shi, Gang-qiang; Sun, Yang-yang; Liu, Fei

    2015-07-01

    Hyperspectral image classification is one of the important methods for identifying image information; it has great significance for feature identification, dynamic monitoring, thematic information extraction, etc. Unsupervised classification without prior knowledge is widely used in hyperspectral image classification. This article proposes a new unsupervised classification algorithm for hyperspectral images based on harmonic analysis (HA), called the harmonic analysis classifier (HAC). First, the HAC algorithm computes the first harmonic component and builds its histogram, determining the initial feature categories and the cluster-center pixels according to the number and location of the histogram peaks. Then, the algorithm maps the spectral waveform of each pixel to be classified into a feature space made up of harmonic decomposition times, amplitude, and phase; similar features group together in this feature space, and the pixels are classified according to the minimum-distance principle. Finally, the algorithm computes the Euclidean distance of these pixels from the cluster centers and merges the initial classes by setting a distance threshold, so that the HAC achieves hyperspectral image classification. The paper collects spectral curves of two feature categories and obtains harmonic decomposition times, amplitudes, and phases after harmonic analysis; the distribution of the HA components in the feature space verified the correctness of the HAC. The HAC algorithm was also applied to an EO-1 Hyperion hyperspectral image to obtain classification results. Comparing the hyperspectral image classification results of the K-MEANS, ISODATA, and HAC classifiers confirms that the HAC, as an unsupervised classification method, has better applicability to hyperspectral image classification. PMID:26717767
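
    A reduced sketch of the feature construction and minimum-distance assignment follows: each pixel spectrum is decomposed into the amplitudes and phases of its first few harmonics (an FFT standing in for the harmonic analysis) and assigned to the nearest cluster centre. The histogram-based selection of initial centres and the merging step are not reproduced.

      import numpy as np

      def harmonic_features(spectrum, n_harmonics=3):
          """Amplitude and phase of the first few harmonics of a pixel spectrum."""
          coeffs = np.fft.rfft(spectrum - np.mean(spectrum))
          c = coeffs[1:n_harmonics + 1]
          return np.concatenate([np.abs(c), np.angle(c)])

      def classify_min_distance(pixel_spectra, centre_spectra):
          """Assign each pixel to the nearest cluster centre in harmonic feature space."""
          feats = np.array([harmonic_features(s) for s in pixel_spectra])
          centres = np.array([harmonic_features(s) for s in centre_spectra])
          d = np.linalg.norm(feats[:, None, :] - centres[None, :, :], axis=2)
          return np.argmin(d, axis=1)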

  13. A two-dimensional, finite-element, flux-corrected transport algorithm for the solution of gas discharge problems

    NASA Astrophysics Data System (ADS)

    Georghiou, G. E.; Morrow, R.; Metaxas, A. C.

    2000-10-01

    An improved finite-element flux-corrected transport (FE-FCT) scheme, which was demonstrated in one dimension by the authors, is now extended to two dimensions and applied to gas discharge problems. The low-order positive ripple-free scheme, required to produce a FCT algorithm, is obtained by introducing diffusion to the high-order scheme (two-step Taylor-Galerkin). A self-adjusting variable diffusion coefficient is introduced, which reduces the high-order scheme to the equivalent of the upwind difference scheme, but without the complexities of an upwind scheme in a finite-element setting. Results are presented which show that the high-order scheme reduces to the equivalent of upwinding when the new diffusion coefficient is used. The proposed FCT scheme is shown to give similar results in comparison to a finite-difference time-split FCT code developed by Boris and Book. Finally, the new method is applied for the first time to a streamer propagation problem in its two-dimensional form.

  14. Distortion correction and calibration of intra-operative spine X-ray images using a constrained DLT algorithm.

    PubMed

    Bertelsen, A; Garin-Muga, A; Echeverría, M; Gómez, E; Borro, D

    2014-10-01

    This work presents an automatic method for distortion correction and calibration of intra-operative spine X-ray images, a fundamental step for the use of this modality in computer and robotic assisted surgeries. Our method is based on a prototype calibration drum, attached to the c-arm intensifier during the intervention. The projections of its embedded fiducial beads onto the X-ray images are segmented by the proposed method, which uses its calculated centroids to undo the distortion and, afterwards, calibrate the c-arm. For the latter purpose, we propose the use of a constrained version of the well known Direct Linear Transform (DLT) algorithm, reducing its degrees of freedom from 11 to 3. Experimental evaluation of our method is included in this work, showing that it is fast and more accurate than other existing methods. The low segmentation error level also ensures accurate calibration of the c-arm, with an expected error of 4% in the computation of its focal distance. PMID:24993596
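
    For context, the unconstrained DLT step, estimating the 3x4 projection matrix from bead correspondences via the SVD null space, is sketched below; the paper's contribution is a constrained variant with only 3 degrees of freedom, which is not reproduced here.

      import numpy as np

      def dlt(world_pts, image_pts):
          """Estimate the 3x4 projection matrix P (up to scale) such that
          image ~ P * [X Y Z 1]^T, from >= 6 point correspondences."""
          rows = []
          for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
              rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
              rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
          A = np.asarray(rows, float)
          _, _, vt = np.linalg.svd(A)
          return vt[-1].reshape(3, 4)      # null-space vector = flattened P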

  15. Geometric correction of deformed chromosomes for automatic Karyotyping.

    PubMed

    Khan, Shadab; DSouza, Alisha; Sanches, João; Ventura, Rodrigo

    2012-01-01

    Automatic karyotyping is the process of classifying chromosomes from an unordered karyogram into their respective classes to create an ordered karyogram. Automatic karyotyping algorithms typically perform geometrical correction of deformed chromosomes before feature extraction; these features are then used by classifier algorithms to classify the chromosomes. Karyograms of bone marrow cells are known to have poor image quality; an example of such karyograms is the Lisbon-K(1) (LK(1)) dataset used in our work. To correct the geometrical deformation of chromosomes from LK(1), a robust method for obtaining the medial axis of each chromosome was necessary. To address this problem, we developed an algorithm that uses seed points to make a primary prediction. The algorithm then computes the distance from the predicted point to the boundary, and the gradients at algorithm-specified points on the boundary, to compute two auxiliary predictions. The primary prediction is corrected using the auxiliary predictions, and a final prediction is obtained and included in the seed region. The medial axis obtained in this way is then used for geometrical correction of the chromosomes. The algorithm was found to be capable of correcting geometrical deformations even in highly distorted chromosomes with forked ends.

  16. Potassium-based algorithm allows correction for the hematocrit bias in quantitative analysis of caffeine and its major metabolite in dried blood spots.

    PubMed

    De Kesel, Pieter M M; Capiau, Sara; Stove, Veronique V; Lambert, Willy E; Stove, Christophe P

    2014-10-01

    Although dried blood spot (DBS) sampling is increasingly receiving interest as a potential alternative to traditional blood sampling, the impact of hematocrit (Hct) on DBS results is limiting its final breakthrough in routine bioanalysis. To predict the Hct of a given DBS, potassium (K(+)) proved to be a reliable marker. The aim of this study was to evaluate whether application of an algorithm, based upon predicted Hct or K(+) concentrations as such, allowed correction for the Hct bias. Using validated LC-MS/MS methods, caffeine, chosen as a model compound, was determined in whole blood and corresponding DBS samples with a broad Hct range (0.18-0.47). A reference subset (n = 50) was used to generate an algorithm based on K(+) concentrations in DBS. Application of the developed algorithm on an independent test set (n = 50) alleviated the assay bias, especially at lower Hct values. Before correction, differences between DBS and whole blood concentrations ranged from -29.1 to 21.1%. The mean difference, as obtained by Bland-Altman comparison, was -6.6% (95% confidence interval (CI), -9.7 to -3.4%). After application of the algorithm, differences between corrected and whole blood concentrations lay between -19.9 and 13.9% with a mean difference of -2.1% (95% CI, -4.5 to 0.3%). The same algorithm was applied to a separate compound, paraxanthine, which was determined in 103 samples (Hct range, 0.17-0.47), yielding similar results. In conclusion, a K(+)-based algorithm allows correction for the Hct bias in the quantitative analysis of caffeine and its metabolite paraxanthine.
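
    A minimal sketch of how such a potassium-based correction could be applied, assuming a linear K(+)-to-hematocrit relation and a linear hematocrit bias; all coefficients below are placeholders, not the calibration reported in the study.

```python
def predict_hematocrit(k_mmol_per_l, slope=0.011, intercept=-0.05):
    """Predict hematocrit from the DBS potassium concentration.
    The linear coefficients are placeholders, not the published calibration."""
    return slope * k_mmol_per_l + intercept

def correct_dbs_concentration(c_dbs, k_mmol_per_l, ref_hct=0.35, bias_per_hct=-1.2):
    """Correct a DBS analyte concentration for the hematocrit bias.
    bias_per_hct is a hypothetical fractional bias per unit deviation from ref_hct."""
    hct = predict_hematocrit(k_mmol_per_l)
    relative_bias = bias_per_hct * (hct - ref_hct)
    return c_dbs / (1.0 + relative_bias)
```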

  17. A Portable Ground-Based Atmospheric Monitoring System (PGAMS) for the Calibration and Validation of Atmospheric Correction Algorithms Applied to Aircraft and Satellite Images

    NASA Technical Reports Server (NTRS)

    Schiller, Stephen; Luvall, Jeffrey C.; Rickman, Doug L.; Arnold, James E. (Technical Monitor)

    2000-01-01

    Detecting changes in the Earth's environment using satellite images of ocean and land surfaces must take into account atmospheric effects. As a result, major programs are underway to develop algorithms for image retrieval of atmospheric aerosol properties and for atmospheric correction. However, because of the temporal and spatial variability of atmospheric transmittance, it is very difficult to model atmospheric effects and implement the models in an operational mode. For this reason, simultaneous in situ ground measurements of atmospheric optical properties are vital to the development of accurate atmospheric correction techniques. Presented in this paper is a spectroradiometer system that provides an optimized set of surface measurements for the calibration and validation of atmospheric correction algorithms. The Portable Ground-based Atmospheric Monitoring System (PGAMS) obtains a comprehensive series of in situ irradiance, radiance, and reflectance measurements for the calibration of atmospheric correction algorithms applied to multispectral and hyperspectral images. The observations include: total downwelling irradiance, diffuse sky irradiance, direct solar irradiance, path radiance in the direction of the north celestial pole, path radiance in the direction of the overflying satellite, almucantar scans of path radiance, full sky radiance maps, and surface reflectance. Each of these parameters is recorded over a wavelength range from 350 to 1050 nm in 512 channels. The system is fast, with the potential to acquire the complete set of observations in only 8 to 10 minutes, depending on the selected spatial resolution of the sky path radiance measurements.

  18. A dose calculation algorithm with correction for proton-nucleus interactions in non-water materials for proton radiotherapy treatment planning

    NASA Astrophysics Data System (ADS)

    Inaniwa, T.; Kanematsu, N.; Sato, S.; Kohno, R.

    2016-01-01

    In treatment planning for proton radiotherapy, the dose measured in water is applied to the patient dose calculation with density scaling by the stopping power ratio ρS. Since body tissues are chemically different from water, this approximation may cause dose calculation errors, especially due to differences in nuclear interactions. We proposed and validated an algorithm for correcting these errors. The dose in water is decomposed into three constituents according to the physical interactions of protons in water: the dose from primary protons continuously slowing down by electromagnetic interactions, the dose from protons scattered by elastic and/or inelastic interactions, and the dose resulting from nonelastic interactions. The proportions of the three dose constituents differ between body tissues and water. We determined correction factors for the proportions of the dose constituents with Monte Carlo simulations in various standard body tissues, and formulated them as functions of ρS for patient dose calculation. The influence of nuclear interactions on dose was assessed by comparing the Monte Carlo simulated dose and the uncorrected dose in common phantom materials. The influence around the Bragg peak amounted to -6% for polytetrafluoroethylene and 0.3% for polyethylene. The validity of the correction method was confirmed by comparing the simulated and corrected doses in the materials. The deviation was below 0.8% for all materials. The accuracy of the correction factors derived with Monte Carlo simulations was separately verified through irradiation experiments with a 235 MeV proton beam using common phantom materials. The corrected doses agreed with the measurements within 0.4% for all materials except graphite. The influence on tumor dose was assessed in a prostate case. The dose reduction in the tumor was below 0.5%. Our results verify that this algorithm is practical and accurate for proton radiotherapy treatment planning, and
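
    A minimal sketch of the correction step described above, assuming the three water-dose constituents and three correction-factor functions of the stopping power ratio are supplied by the user; the factor functions below are placeholders, not the published Monte Carlo parameterisations.

```python
import numpy as np

def corrected_dose(d_primary, d_scattered, d_nonelastic, rho_s,
                   f_primary, f_scattered, f_nonelastic):
    """Recombine the three water-dose constituents with tissue correction factors
    that are functions of the stopping power ratio rho_s. The f_* callables are
    placeholders for the published parameterisations."""
    return (f_primary(rho_s) * d_primary
            + f_scattered(rho_s) * d_scattered
            + f_nonelastic(rho_s) * d_nonelastic)

# Hypothetical usage with identity correction factors (i.e., no tissue correction):
identity = lambda rho_s: np.ones_like(rho_s)
dose = corrected_dose(np.array([1.80, 1.70]), np.array([0.15, 0.12]),
                      np.array([0.05, 0.04]), np.array([1.00, 1.04]),
                      identity, identity, identity)
```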

  19. Differences in aerosol absorption Ångström exponents between correction algorithms for a particle soot absorption photometer measured on the South African Highveld

    NASA Astrophysics Data System (ADS)

    Backman, J.; Virkkula, A.; Vakkari, V.; Beukes, J. P.; Van Zyl, P. G.; Josipovic, M.; Piketh, S.; Tiitta, P.; Chiloane, K.; Petäjä, T.; Kulmala, M.; Laakso, L.

    2014-12-01

    Absorption Ångström exponents (AAEs) calculated from filter-based absorption measurements are often used to give information on the origin of the ambient aerosol, for example, to distinguish between urban pollution and biomass burning aerosol. Filter-based absorption measurements are widely used and are common at aerosol monitoring stations globally. Several correction algorithms are used to account for artefacts associated with filter-based absorption techniques. These algorithms are of profound importance when determining the absolute amount of absorption by the aerosol. However, this study shows that there are substantial differences between the AAEs calculated from these corrections. Depending on the correction used, AAEs can change by as much as 46%. The study also highlights that the difference between AAEs calculated using different corrections can lead to conflicting conclusions on the type of aerosol when using the same data set. The AAE ranged between 1.17 for non-corrected data and 1.96 for the correction that gave the greatest values. Furthermore, the study implies that the AAEs reported for a site depend on the filter transmittance at which the filter is changed. In this work, the AAEs were calculated from data measured with a three-wavelength particle soot absorption photometer (PSAP) at Elandsfontein on the South African Highveld for 23 months. The sample air of the PSAP was diluted by a factor of 15 to prolong filter change intervals. The correlation coefficient between the dilution-corrected PSAP and a non-diluted Multi-Angle Absorption Photometer (MAAP) was 0.9. Thus, the study also shows that the applicability of the PSAP can be extended to remote sites that are not often visited or that suffer from high levels of pollution.
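
    For reference, the AAE is obtained from absorption coefficients at two (or more) wavelengths; a minimal two-wavelength sketch, with example values that are purely illustrative:

```python
import numpy as np

def absorption_angstrom_exponent(b_abs_1, b_abs_2, wavelength_1, wavelength_2):
    """Two-wavelength absorption Angstrom exponent:
    AAE = -ln(b_abs_1 / b_abs_2) / ln(wavelength_1 / wavelength_2)."""
    return -np.log(b_abs_1 / b_abs_2) / np.log(wavelength_1 / wavelength_2)

# Example with hypothetical absorption coefficients (Mm^-1) at 467 nm and 660 nm
aae = absorption_angstrom_exponent(12.0, 7.5, 467.0, 660.0)
```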

  20. Technical Note: Modification of the standard gain correction algorithm to compensate for the number of used reference flat frames in detector performance studies

    SciTech Connect

    Konstantinidis, Anastasios C.; Olivo, Alessandro; Speller, Robert D.

    2011-12-15

    Purpose: The x-ray performance evaluation of digital x-ray detectors is based on the calculation of the modulation transfer function (MTF), the noise power spectrum (NPS), and the resultant detective quantum efficiency (DQE). The flat images used for the extraction of the NPS should not contain any fixed pattern noise (FPN) to avoid contamination from nonstochastic processes. The "gold standard" method used for the reduction of the FPN (i.e., the different gain between pixels) in linear x-ray detectors is based on normalization with an average reference flat-field. However, the noise in the corrected image depends on the number of flat frames used for the average flat image. The aim of this study is to modify the standard gain correction algorithm to make it independent of the number of reference flat frames used. Methods: Many publications suggest the use of 10-16 reference flat frames, while other studies use higher numbers (e.g., 48 frames) to reduce the noise propagated from the average flat image. This study quantifies experimentally the effect of the number of reference flat frames on the NPS and DQE values and appropriately modifies the gain correction algorithm to compensate for this effect. Results: It is shown that, using the suggested gain correction algorithm, a minimum number of reference flat frames (i.e., down to one frame) can be used to eliminate the FPN from the raw flat image. This saves computer memory and time during the x-ray performance evaluation. Conclusions: The authors show that the method presented in the study (a) leads to the maximum DQE value that one would obtain using the conventional method with a very large number of frames and (b) has been compared to an independent gain correction method based on the subtraction of flat-field images, leading to identical DQE values. They believe this provides robust validation of the proposed method.
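
    A sketch of the conventional ("gold standard") gain correction that the note modifies, assuming N reference flat frames are averaged into a per-pixel gain map; the modified, N-independent algorithm itself is not reproduced here.

```python
import numpy as np

def conventional_gain_correction(raw, flats):
    """Conventional flat-field gain correction: normalise a raw image by the
    average of N reference flat frames (per-pixel gain removal)."""
    flats = np.asarray(flats, dtype=float)
    flat_avg = flats.mean(axis=0)                 # average reference flat image
    gain_map = flat_avg / flat_avg.mean()         # per-pixel relative gain
    return raw / gain_map
```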

  1. Validation of Correction Algorithms for Near-IR Analysis of Human Milk in an Independent Sample Set-Effect of Pasteurization.

    PubMed

    Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph

    2016-03-01

    Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified. PMID:26927169

  3. Differences in aerosol absorption Ångström exponents between correction algorithms for particle soot absorption photometer measured on South African Highveld

    NASA Astrophysics Data System (ADS)

    Backman, J.; Virkkula, A.; Vakkari, V.; Beukes, J. P.; Van Zyl, P.; Josipovic, M.; Piketh, S.; Tiitta, P.; Chiloane, K.; Petäjä, T.; Kulmala, M.; Laakso, L.

    2014-09-01

    Absorption Ångström exponents (AAE) calculated from filter-based absorption measurements are often used to give information on the origin of the ambient aerosol, for example to distinguish between urban pollution and biomass burning aerosol. Filter-based absorption measurements are widely used and are common at aerosol monitoring stations globally. Several correction algorithms are used to account for the artifacts associated with filter-based absorption techniques. These algorithms are of profound importance when determining the absolute amount of absorption by the aerosol. However, this study shows that there are significant differences between the AAEs calculated from these corrections. The study also shows that the difference between AAEs calculated using different corrections can lead to conflicting conclusions on the type of aerosol for the same data set. In this work the AAEs were calculated from data measured with a three-wavelength Particle Soot Absorption Photometer (PSAP) deployed at Elandsfontein on the South African Highveld for 23 months. The sample air of the PSAP was diluted to prolong filter change intervals. The dilution-corrected PSAP showed good agreement with a non-diluted MAAP. Thus, the study also shows that the applicability of the PSAP can be extended to remote sites that are not often visited or that suffer from high levels of pollution.

  4. Quadrupole Alignment and Trajectory Correction for Future Linear Colliders: SLC Tests of a Dispersion-Free Steering Algorithm

    SciTech Connect

    Assmann, R

    2004-06-08

    The feasibility of future linear colliders depends on achieving very tight alignment and steering tolerances. All proposals (NLC, JLC, CLIC, TESLA and S-BAND) currently require a total emittance growth in the main linac of less than 30-100% [1]. This should be compared with a 100% emittance growth in the much smaller SLC linac [2]. Major advances in alignment and beam steering techniques beyond those used in the SLC are necessary for the next generation of linear colliders. In this paper, we present an experimental study of quadrupole alignment with a dispersion-free steering algorithm. A closely related method (wakefield-free steering) takes into account wakefield effects [3]; however, this method cannot be studied at the SLC. The requirements for future linear colliders lead to new and unconventional ideas about alignment and beam steering. For example, no dipole correctors are foreseen for the standard trajectory correction in the NLC [4]; beam steering will be done by moving the quadrupole positions with magnet movers. This illustrates the close symbiosis between alignment, beam steering and beam dynamics that will emerge. It is no longer possible to consider the accelerator alignment as static, with only a few surveys and realignments per year. The alignment in future linear colliders will be a dynamic process in which the whole linac, with thousands of beam-line elements, is aligned in a few hours or minutes, while the required accuracy of about 5 µm for the NLC quadrupole alignment [4] is a factor of 20 higher than in existing accelerators. The major task in alignment and steering is the accurate determination of the optimum beam-line position. Ideally one would like all elements to be aligned along a straight line. However, this is not practical. Instead a "smooth curve" is acceptable as long as its wavelength is much longer than the betatron wavelength of the accelerated beam. Conventional alignment methods are limited in accuracy by errors in the survey

  5. Algorithm for X-ray scatter, beam-hardening, and beam profile correction in diagnostic (kilovoltage) and treatment (megavoltage) cone beam CT.

    PubMed

    Maltz, Jonathan S; Gangadharan, Bijumon; Bose, Supratik; Hristov, Dimitre H; Faddegon, Bruce A; Paidi, Ajay; Bani-Hashemi, Ali R

    2008-12-01

    Quantitative reconstruction of cone beam X-ray computed tomography (CT) datasets requires accurate modeling of scatter, beam-hardening, beam profile, and detector response. Typically, commercial imaging systems use fast empirical corrections that are designed to reduce visible artifacts due to incomplete modeling of the image formation process. In contrast, Monte Carlo (MC) methods are much more accurate but are relatively slow. Scatter kernel superposition (SKS) methods offer a balance between accuracy and computational practicality. We show how a single SKS algorithm can be employed to correct both kilovoltage (kV) energy (diagnostic) and megavoltage (MV) energy (treatment) X-ray images. Using MC models of kV and MV imaging systems, we map intensities recorded on an amorphous silicon flat panel detector to water-equivalent thicknesses (WETs). Scattergrams are derived from acquired projection images using scatter kernels indexed by the local WET values and are then iteratively refined using a scatter magnitude bounding scheme that allows the algorithm to accommodate the very high scatter-to-primary ratios encountered in kV imaging. The algorithm recovers radiological thicknesses to within 9% of the true value at both kV and megavolt energies. Nonuniformity in CT reconstructions of homogeneous phantoms is reduced by an average of 76% over a wide range of beam energies and phantom geometries.
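
    A heavily simplified sketch of the scatter-kernel-superposition idea, assuming a user-supplied intensity-to-WET lookup and a single Gaussian kernel; the kernel shape, scatter fraction, and bounding constant are placeholders rather than the calibrated kernels of the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sks_scatter_correction(projection, intensity_to_wet, n_iter=3,
                           kernel_sigma=20.0, scatter_fraction=0.1, max_spr=4.0):
    """Estimate a scattergram from a projection image using a WET-indexed scatter
    amplitude, bound the scatter-to-primary ratio, and iteratively refine the
    primary estimate. All numeric parameters are illustrative assumptions."""
    primary = projection.astype(float).copy()
    scatter = np.zeros_like(primary)
    for _ in range(n_iter):
        wet = intensity_to_wet(primary)                        # local water-equivalent thickness
        amplitude = scatter_fraction * wet                     # hypothetical WET-indexed amplitude
        scatter = gaussian_filter(amplitude * primary, kernel_sigma)
        scatter = np.minimum(scatter, max_spr * primary)       # scatter magnitude bounding
        primary = np.clip(projection - scatter, 0.0, None)
    return primary, scatter
```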

  6. Converting local spectral and spatial information from a priori classifiers into contextual knowledge for impervious surface classification

    NASA Astrophysics Data System (ADS)

    Luo, Li; Mountrakis, Giorgos

    2011-09-01

    A classification model was demonstrated that explored spectral and spatial contextual information from previously classified neighbors to improve the classification of the remaining unclassified pixels. The classification was composed of two major steps, the a priori and the a posteriori classifications. The a priori algorithm classified the less difficult portion of the image. The a posteriori classifier operated on the more challenging image parts and strove to enhance accuracy by converting classified information from the a priori process into specific knowledge. The novelty of this work lies in the substitution of image-wide information with local spectral representations and spatial correlations, in essence classifying each pixel using exclusively neighboring behavior. Furthermore, the a posteriori classifier is a simple and intuitive algorithm, adjusted to perform in a localized setting according to the task requirements. A 2001 and a 2006 Landsat scene from Central New York were used to assess the performance on an impervious surface classification task. The proposed method was compared with a back-propagation neural network. Kappa statistic values in the corresponding applicable datasets increased from 18.67 to 24.05 for the 2006 scene, and from 22.92 to 35.76 for the 2001 scene classification, mostly by correcting misclassifications between impervious and soil pixels. This finding suggests that simple classifiers have the ability to surpass complex classifiers through the incorporation of partial results and an elegant multi-process framework.

  7. Correction of Faulty Sensors in Phased Array Radars Using Symmetrical Sensor Failure Technique and Cultural Algorithm with Differential Evolution

    PubMed Central

    Khan, S. U.; Qureshi, I. M.; Zaman, F.; Shoaib, B.; Naveed, A.; Basit, A.

    2014-01-01

    Three issues regarding sensor failure at any position in the antenna array are discussed; we assume that the sensor position is known. The issues are a rise in sidelobe levels, displacement of nulls from their original positions, and a reduction of null depth. The required null depth is achieved by making the weight of the symmetrical complement sensor passive. A hybrid method based on a memetic computing algorithm is proposed. The hybrid method combines the cultural algorithm with differential evolution (CADE), which is used for the reduction of sidelobe levels and the placement of nulls at their original positions. A fitness function is used to minimize the error between the desired and estimated beam patterns along with null constraints. Simulation results for various scenarios are given to exhibit the validity and performance of the proposed algorithm. PMID:24688440

  9. Practical Atmospheric Correction Algorithms for a Multi-Spectral Sensor From the Visible Through the Thermal Spectral Regions

    SciTech Connect

    Borel, C.C.; Villeneuve, P.V.; Clodius, W.B.; Szymanski, J.J.; Davis, A.B.

    1999-04-04

    Deriving information about the Earth's surface requires atmospheric correction of the measured top-of-the-atmosphere radiances. One possible path is to use atmospheric radiative transfer codes to predict how the radiance leaving the ground is affected by scattering and attenuation. In practice the atmosphere is usually not well known, and thus it is necessary to use more practical methods. The authors describe how to find dark surfaces, estimate the atmospheric optical depth, estimate path radiance, and identify thick clouds using thresholds on reflectance, NDVI, and columnar water vapor. The authors also describe a simple method to correct a visible channel contaminated by thin cirrus clouds.
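
    A minimal sketch of two of the practical steps mentioned above, dark-object haze estimation and thick-cloud screening with reflectance and NDVI thresholds; the threshold values are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def dark_object_subtraction(red, nir, percentile=0.5):
    """Simple dark-object haze estimate per band: subtract the near-minimum signal."""
    haze_red = np.percentile(red, percentile)
    haze_nir = np.percentile(nir, percentile)
    return np.clip(red - haze_red, 0, None), np.clip(nir - haze_nir, 0, None)

def thick_cloud_mask(red_reflectance, nir_reflectance,
                     refl_threshold=0.3, ndvi_threshold=0.1):
    """Flag thick clouds as bright, spectrally flat pixels: high reflectance and
    near-zero NDVI. Threshold values are illustrative only."""
    ndvi = (nir_reflectance - red_reflectance) / (nir_reflectance + red_reflectance + 1e-9)
    return (red_reflectance > refl_threshold) & (np.abs(ndvi) < ndvi_threshold)
```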

  10. Dynamic system classifier

    NASA Astrophysics Data System (ADS)

    Pumpe, Daniel; Greiner, Maksim; Müller, Ewald; Enßlin, Torsten A.

    2016-07-01

    Stochastic differential equations describe well many physical, biological, and sociological systems, despite the simplification often made in their derivation. Here the usage of simple stochastic differential equations to characterize and classify complex dynamical systems is proposed within a Bayesian framework. To this end, we develop a dynamic system classifier (DSC). The DSC first abstracts training data of a system in terms of time-dependent coefficients of the descriptive stochastic differential equation. Thereby the DSC identifies unique correlation structures within the training data. For definiteness we restrict the presentation of the DSC to oscillation processes with a time-dependent frequency ω(t) and damping factor γ(t). Although real systems might be more complex, this simple oscillator captures many characteristic features. The ω and γ time lines represent the abstract system characterization and permit the construction of efficient signal classifiers. Numerical experiments show that such classifiers perform well even in the low signal-to-noise regime.

  13. Variable-step constant statistics algorithm for removing residual fixed pattern noise of infrared images as second non-uniformity correction

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Nie, Hong-Bin; Hou, Qing-Yu; Cao, Yi-Ming

    2009-11-01

    Fixed pattern noise (FPN) appears in the images of an IR observation system as a result of errors in assembly, environment, etc., so non-uniformity correction (NUC) is an important technique for IRFPAs. Because the real radiation response of the pixels over the given dynamic range is nonlinear, because of 1/f noise, and especially because the high-temperature scaling point changes the thermal balance of the IR observation system, the traditional linear approximation (the temperature-scaling method) can hardly produce perfectly corrected images. On the other hand, because scene-based non-uniformity correction (SBNUC) does not rely on specialized hardware, it is a very attractive alternative to radiometric calibration for infrared sensors; among SBNUC techniques, constant statistics (CS) is the best-known approach, but it depends on the scene content and is strongly affected by the number of samples. In this paper we therefore present a novel approach that inherits the rapidity of the temperature-scaling method while also considering the convergence of CS, using variable-step constant statistics (VSCS) as a second non-uniformity correction in the spatial and temporal domains of the infrared images to eliminate the residual fixed pattern noise left by the theoretical and methodological errors of the temperature-scaling method. Experimental results on real infrared image data show that the method effectively eliminates the residual fixed pattern noise, demonstrating the effectiveness of the algorithm.
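
    A minimal batch sketch of the constant-statistics idea on which the method builds, assuming a stack of frames over which every pixel sees similar scene statistics; the variable-step refinement proposed in the paper is not reproduced.

```python
import numpy as np

def constant_statistics_nuc(frames, eps=1e-6):
    """Constant-statistics non-uniformity correction sketch: per-pixel temporal mean
    and standard deviation estimate per-pixel offset and gain, which are then
    normalised out. frames has shape (n_frames, rows, cols)."""
    frames = np.asarray(frames, dtype=float)
    mean = frames.mean(axis=0)                    # per-pixel offset estimate
    std = frames.std(axis=0) + eps                # per-pixel gain estimate
    corrected = (frames - mean) / std             # equalise each pixel's statistics
    # Rescale to global statistics so the sequence keeps its original dynamic range
    return corrected * std.mean() + mean.mean()
```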

  14. Analysis of vegetation by the application of a physically-based atmospheric correction algorithm to OLI data: a case study of Leonessa Municipality, Italy

    NASA Astrophysics Data System (ADS)

    Mei, Alessandro; Manzo, Ciro; Petracchini, Francesco; Bassani, Cristiana

    2016-04-01

    Remote sensing techniques allow the estimation of vegetation parameters over large areas for forest health evaluation and biomass estimation. Moreover, the parametrization of specific indices such as the Normalized Difference Vegetation Index (NDVI) allows the study of biogeochemical cycles and radiative energy transfer processes between soil/vegetation and the atmosphere. This paper focuses on the evaluation of vegetation cover in the Leonessa Municipality, Latium Region (Italy), using a 2015 Landsat 8 image and the OLI@CRI (OLI ATmospherically Corrected Reflectance Imagery) algorithm, developed following the procedure described in Bassani et al. 2015. OLI@CRI is based on the 6SV radiative transfer model (Kotchenova et al., 2006), which is able to simulate the radiative field in the coupled atmosphere-earth system. NDVI was derived from the atmospherically corrected OLI image. This index, widely used for biomass estimation and vegetation cover analysis, uses the sensor channels in the near-infrared and red spectral regions, which are sensitive to chlorophyll absorption and cell structure. The retrieved product was then spatially resampled to the MODIS image resolution and validated against the MODIS NDVI taken as reference. The physically-based OLI@CRI algorithm also provides the incident solar radiation at the ground at the acquisition time through the 6SV simulation. Thus, the OLI@CRI algorithm completes the remote sensing dataset required for a comprehensive analysis of sub-regional biomass production, using data from a new-generation remote sensing sensor and an atmospheric radiative transfer model. If the OLI@CRI algorithm is applied to a temporal series of OLI data, the influence of solar radiation on above-ground vegetation can be analysed, as well as the vegetation index variation.
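
    A minimal sketch of the physically-based correction step and the index computation, using the standard three-coefficient output relation of the 6S/6SV code and OLI bands 4 (red) and 5 (NIR); this is an illustration of the approach, not the OLI@CRI code itself.

```python
import numpy as np

def surface_reflectance_6s(radiance, xa, xb, xc):
    """Surface reflectance from at-sensor radiance using the three correction
    coefficients (xa, xb, xc) reported by the 6S/6SV radiative transfer code."""
    y = xa * radiance - xb
    return y / (1.0 + xc * y)

def ndvi(red_refl, nir_refl):
    """NDVI from atmospherically corrected red and NIR surface reflectance."""
    return (nir_refl - red_refl) / (nir_refl + red_refl + 1e-9)
```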

  15. An algorithm for estimation and correction of anisotropic magnification distortion of cryo-EM images without need of pre-calibration.

    PubMed

    Yu, Guimei; Li, Kunpeng; Liu, Yue; Chen, Zhenguo; Wang, Zhiqing; Yan, Rui; Klose, Thomas; Tang, Liang; Jiang, Wen

    2016-08-01

    Anisotropic magnification distortion of TEM images (mainly the elliptic distortion) has recently been found to be a potential resolution-limiting factor in single particle 3-D reconstruction. Elliptic distortions of ∼1-3% have been reported for multiple microscopes under low magnification settings (e.g., 18,000×), which significantly limited the achievable resolution of single particle 3-D reconstruction, especially for large particles. Here we report a generic algorithm that formulates the distortion correction problem as a generalized 2-D alignment task and estimates the distortion parameters directly from the particle images. Unlike the present pre-calibration methods, our computational method is applicable to all datasets collected at a broad range of magnifications using any microscope, without the need for additional experimental measurements. Moreover, the per-micrograph and/or per-particle level elliptic distortion estimation in our method could resolve potential distortion variations within a cryo-EM dataset, and further improve the 3-D reconstructions relative to constant-value correction by the pre-calibration methods. With successful applications to multiple datasets and cross-validation with the pre-calibration method, we have demonstrated the validity and robustness of our algorithm in estimating the distortion; correction of the elliptic distortion significantly improved the achievable resolutions by ∼1-3-fold and enabled 3-D reconstructions of multiple viral structures at 2.4-2.6 Å resolutions. The resolution limits with elliptic distortion and the amounts of resolution improvement with distortion correction were found to strongly correlate with the product of the particle size and the amount of distortion, which can help assess whether elliptic distortion is a major resolution-limiting factor for single particle cryo-EM projects. PMID:27270241

  16. Dimensionality Reduction Through Classifier Ensembles

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.; Tumer, Kagan; Norwig, Peter (Technical Monitor)

    1999-01-01

    In data mining, one often needs to analyze datasets with a very large number of attributes. Performing machine learning directly on such data sets is often impractical because of extensive run times, excessive complexity of the fitted model (often leading to overfitting), and the well-known "curse of dimensionality." In practice, to avoid such problems, feature selection and/or extraction are often used to reduce data dimensionality prior to the learning step. However, existing feature selection/extraction algorithms either evaluate features by their effectiveness across the entire data set or simply disregard class information altogether (e.g., principal component analysis). Furthermore, feature extraction algorithms such as principal components analysis create new features that are often meaningless to human users. In this article, we present input decimation, a method that provides "feature subsets" that are selected for their ability to discriminate among the classes. These features are subsequently used in ensembles of classifiers, yielding results superior to single classifiers, ensembles that use the full set of features, and ensembles based on principal component analysis on both real and synthetic datasets.
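
    A toy sketch of the input-decimation idea, assuming non-negative integer class labels and using correlation-ranked feature subsets with simple nearest-centroid base classifiers combined by majority vote; the function and parameter names are hypothetical and this is not the authors' implementation.

```python
import numpy as np

def input_decimation_ensemble(X, y, n_classifiers=3, n_features=5):
    """Build an ensemble in which each member uses only the features most
    correlated with one class (one-vs-rest), then vote over members."""
    classes = np.unique(y)
    members = []
    for k in range(n_classifiers):
        target = (y == classes[k % len(classes)]).astype(float)
        # Rank features by absolute correlation with the chosen class indicator
        corr = np.array([abs(np.corrcoef(X[:, j], target)[0, 1]) for j in range(X.shape[1])])
        subset = np.argsort(corr)[::-1][:n_features]
        centroids = np.array([X[y == c][:, subset].mean(axis=0) for c in classes])
        members.append((subset, centroids))

    def predict(X_new):
        votes = []
        for subset, centroids in members:
            d = np.linalg.norm(X_new[:, subset][:, None, :] - centroids[None, :, :], axis=2)
            votes.append(classes[np.argmin(d, axis=1)])
        votes = np.array(votes)
        # Majority vote across ensemble members (assumes integer labels >= 0)
        return np.array([np.bincount(col).argmax() for col in votes.T])

    return predict
```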

  17. Energy-Efficient Neuromorphic Classifiers.

    PubMed

    Martí, Daniel; Rigotti, Mattia; Seok, Mingoo; Fusi, Stefano

    2016-10-01

    Neuromorphic engineering combines the architectural and computational principles of systems neuroscience with semiconductor electronics, with the aim of building efficient and compact devices that mimic the synaptic and neural machinery of the brain. The energy consumptions promised by neuromorphic engineering are extremely low, comparable to those of the nervous system. Until now, however, the neuromorphic approach has been restricted to relatively simple circuits and specialized functions, thereby obfuscating a direct comparison of their energy consumption to that used by conventional von Neumann digital machines solving real-world tasks. Here we show that a recent technology developed by IBM can be leveraged to realize neuromorphic circuits that operate as classifiers of complex real-world stimuli. Specifically, we provide a set of general prescriptions to enable the practical implementation of neural architectures that compete with state-of-the-art classifiers. We also show that the energy consumption of these architectures, realized on the IBM chip, is typically two or more orders of magnitude lower than that of conventional digital machines implementing classifiers with comparable performance. Moreover, the spike-based dynamics display a trade-off between integration time and accuracy, which naturally translates into algorithms that can be flexibly deployed for either fast and approximate classifications, or more accurate classifications at the mere expense of longer running times and higher energy costs. This work finally proves that the neuromorphic approach can be efficiently used in real-world applications and has significant advantages over conventional digital devices when energy consumption is considered.

  19. Automatic algorithm for correcting motion artifacts in time-resolved two-dimensional magnetic resonance angiography using convex projections.

    PubMed

    Raj, Ashish; Zhang, Honglei; Prince, Martin R; Wang, Yi; Zabih, Ramin

    2006-03-01

    Time-resolved contrast-enhanced magnetic resonance angiography (MRA) may suffer from involuntary patient motion. It is noted that while the MR signal change associated with motion is large in magnitude and has smooth phase variation in k-space, the signal change associated with vascular enhancement is small in magnitude and has rapid phase variation in k-space. Based upon this observation, a novel projection onto convex sets (POCS) algorithm is developed as an automatic iterative method to remove motion artifacts. The presented POCS algorithm consists of high-pass phase filtering and convex projections in both k-space and image space. Without input of detailed motion knowledge, motion effects are filtered out while vasculature information is preserved. The proposed method can be effective for a large class of nonrigid motions, including through-plane motion. The algorithm is stable and converges quickly, usually within five iterations. A double-blind evaluation on a set of clinical MRA cases shows that a completely unsupervised version of the algorithm produces significantly better rank scores (P=0.038) when compared to angiograms produced manually by an experienced radiologist.
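
    A generic POCS skeleton showing the alternation between an image-space constraint and a k-space data-consistency projection; the specific convex sets used in the paper (high-pass phase filtering and vasculature-preserving constraints) are not reproduced here.

```python
import numpy as np

def pocs_reconstruct(kspace, sampling_mask, n_iter=5):
    """Alternate between a simple image-space projection (non-negative real part)
    and a k-space projection that restores the measured samples. kspace is a 2D
    complex array; sampling_mask is a boolean array of the same shape."""
    image = np.fft.ifft2(kspace)
    for _ in range(n_iter):
        # Image-space projection: keep a physically plausible image
        image = np.maximum(image.real, 0.0).astype(complex)
        # k-space projection: enforce consistency with the measured data
        k = np.fft.fft2(image)
        k[sampling_mask] = kspace[sampling_mask]
        image = np.fft.ifft2(k)
    return image
```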

  20. Integrating heterogeneous classifier ensembles for EMG signal decomposition based on classifier agreement.

    PubMed

    Rasheed, Sarbast; Stashuk, Daniel W; Kamel, Mohamed S

    2010-05-01

    In this paper, we present a design methodology for integrating heterogeneous classifier ensembles by employing a diversity-based hybrid classifier fusion approach, whose aggregator module consists of two classifier combiners, to achieve an improved classification performance for motor unit potential classification during electromyographic (EMG) signal decomposition. Following the so-called overproduce and choose strategy to classifier ensemble combination, the developed system allows the construction of a large set of base classifiers, and then automatically chooses subsets of classifiers to form candidate classifier ensembles for each combiner. The system exploits kappa statistic diversity measure to design classifier teams through estimating the level of agreement between base classifier outputs. The pool of base classifiers consists of different kinds of classifiers: the adaptive certainty-based, the adaptive fuzzy k-NN, and the adaptive matched template filter classifiers; and utilizes different types of features. Performance of the developed system was evaluated using real and simulated EMG signals, and was compared with the performance of the constituent base classifiers. Across the EMG signal datasets used, the developed system had better average classification performance overall, especially in terms of reducing classification errors. For simulated signals of varying intensity, the developed system had an average correct classification rate CCr of 93.8% and an error rate Er of 2.2% compared to 93.6% and 3.2%, respectively, for the best base classifier in the ensemble. For simulated signals with varying amounts of shape and/or firing pattern variability, the developed system had a CCr of 89.1% with an Er of 4.7% compared to 86.3% and 5.6%, respectively, for the best classifier. For real signals, the developed system had a CCr of 89.4% with an Er of 3.9% compared to 84.6% and 7.1%, respectively, for the best classifier.

  1. Depth-correction algorithm that improves optical quantification of large breast lesions imaged by diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Tavakoli, Behnoosh; Zhu, Quing

    2011-05-01

    Optical quantification of large lesions imaged with diffuse optical tomography in reflection geometry is depth dependent due to the exponential decay of photon density waves. We introduce a depth-correction method that incorporates the target depth information provided by coregistered ultrasound. It is based on balancing the weight matrix, using the maximum singular values of the target layers in depth, without changing the forward model. The performance of the method is evaluated using phantom targets and 10 clinical cases of larger malignant and benign lesions. The results for homogeneous targets demonstrate that the location error of the reconstructed maximum absorption coefficient is reduced to within the reconstruction mesh size for phantom targets. Furthermore, the uniformity of the absorption distribution inside the lesions improves by about a factor of two, and the median absorption increases from 60 to 85% of its maximum, compared with no depth correction. In addition, nonhomogeneous phantoms are characterized more accurately. Clinical examples show a similar trend to the phantom results and demonstrate the utility of the correction method for improving lesion quantification.

  2. Improvement of Image Quality and Diagnostic Performance by an Innovative Motion-Correction Algorithm for Prospectively ECG Triggered Coronary CT Angiography

    PubMed Central

    Lu, Bin; Yan, Hong-Bing; Mu, Chao-Wei; Gao, Yang; Hou, Zhi-Hui; Wang, Zhi-Qiang; Liu, Kun; Parinella, Ashley H.; Leipsic, Jonathon A.

    2015-01-01

    Objective To investigate the effect of a novel motion-correction algorithm (SnapShot Freeze, SSF) on image quality and diagnostic accuracy in patients undergoing prospectively ECG-triggered CCTA without administering rate-lowering medications. Materials and Methods Forty-six consecutive patients suspected of CAD prospectively underwent CCTA using prospective ECG-triggering without rate control, and invasive coronary angiography (ICA). Image quality, interpretability, and diagnostic performance of SSF were compared with conventional multisegment reconstruction without SSF, using ICA as the reference standard. Results All subjects (35 men, 57.6 ± 8.9 years) successfully underwent ICA and CCTA. Mean heart rate was 68.8±8.4 beats/min (range: 50–88 beats/min) without rate-controlling medications during CT scanning. The overall median image quality score (graded 1–4) was significantly increased from 3.0 to 4.0 by the new algorithm in comparison with conventional reconstruction. Overall interpretability was significantly improved, with a significant reduction in the number of non-diagnostic segments (interpretable segments: 690 of 694, 99.4% vs 659 of 694, 94.9%; P<0.001). However, only the right coronary artery (RCA) showed a statistically significant difference (45 of 46, 97.8% vs 35 of 46, 76.1%; P = 0.004) on a per-vessel basis in this regard. Diagnostic accuracy for detecting ≥50% stenosis was improved using the motion-correction algorithm at the per-vessel [96.2% (177/184) vs 87.0% (160/184); P = 0.002] and per-segment [96.1% (667/694) vs 86.6% (601/694); P<0.001] levels, but there was not a statistically significant improvement at the per-patient level [97.8% (45/46) vs 89.1% (41/46); P = 0.203]. By artery analysis, diagnostic accuracy was improved only for the RCA [97.8% (45/46) vs 78.3% (36/46); P = 0.007]. Conclusion The intracycle motion correction algorithm significantly improved image quality and diagnostic interpretability in patients undergoing CCTA with prospective ECG triggering and

  3. Recognition Using Hybrid Classifiers.

    PubMed

    Osadchy, Margarita; Keren, Daniel; Raviv, Dolev

    2016-04-01

    A canonical problem in computer vision is category recognition (e.g., find all instances of human faces, cars etc., in an image). Typically, the input for training a binary classifier is a relatively small sample of positive examples, and a huge sample of negative examples, which can be very diverse, consisting of images from a large number of categories. The difficulty of the problem sharply increases with the dimension and size of the negative example set. We propose to alleviate this problem by applying a "hybrid" classifier, which replaces the negative samples by a prior, and then finds a hyperplane which separates the positive samples from this prior. The method is extended to kernel space and to an ensemble-based approach. The resulting binary classifiers achieve an identical or better classification rate than SVM, while requiring far smaller memory and lower computational complexity to train and apply.

  4. A fuzzy classifier system for process control

    NASA Technical Reports Server (NTRS)

    Karr, C. L.; Phillips, J. C.

    1994-01-01

    A fuzzy classifier system that discovers rules for controlling a mathematical model of a pH titration system was developed by researchers at the U.S. Bureau of Mines (USBM). Fuzzy classifier systems successfully combine the strengths of learning classifier systems and fuzzy logic controllers. Learning classifier systems resemble familiar production rule-based systems, but they represent their IF-THEN rules by strings of characters rather than in the traditional linguistic terms. Fuzzy logic is a tool that allows for the incorporation of abstract concepts into rule-based systems, thereby allowing the rules to resemble the familiar 'rules-of-thumb' commonly used by humans when solving difficult process control and reasoning problems. Like learning classifier systems, fuzzy classifier systems employ a genetic algorithm to explore and sample new rules for manipulating the problem environment. Like fuzzy logic controllers, fuzzy classifier systems encapsulate knowledge in the form of production rules. The results presented in this paper demonstrate the ability of fuzzy classifier systems to generate a fuzzy logic-based process control system.

  5. Classifying Cereal Data

    Cancer.gov

    The DSQ includes questions about cereal intake and allows respondents up to two responses on which cereals they consume. We classified each cereal reported first by hot or cold, and then along four dimensions: density of added sugars, whole grains, fiber, and calcium.

  6. Classifying Adolescent Perfectionists

    ERIC Educational Resources Information Center

    Rice, Kenneth G.; Ashby, Jeffrey S.; Gilman, Rich

    2011-01-01

    A large school-based sample of 9th-grade adolescents (N = 875) completed the Almost Perfect Scale-Revised (APS-R; Slaney, Mobley, Trippi, Ashby, & Johnson, 1996). Decision rules and cut-scores were developed and replicated that classify adolescents as one of two kinds of perfectionists (adaptive or maladaptive) or as nonperfectionists. A…

  7. Number in Classifier Languages

    ERIC Educational Resources Information Center

    Nomoto, Hiroki

    2013-01-01

    Classifier languages are often described as lacking genuine number morphology and treating all common nouns, including those conceptually count, as an unindividuated mass. This study argues that neither of these popular assumptions is true, and presents new generalizations and analyses gained by abandoning them. I claim that no difference exists…

  8. A novel semi-supervised hyperspectral image classification approach based on spatial neighborhood information and classifier combination

    NASA Astrophysics Data System (ADS)

    Tan, Kun; Hu, Jun; Li, Jun; Du, Peijun

    2015-07-01

    In the process of semi-supervised hyperspectral image classification, the spatial neighborhood information of training samples is widely applied to address the small-sample-size problem. However, the neighborhood information of unlabeled samples is usually ignored. In this paper, we propose a new algorithm for semi-supervised hyperspectral image classification in which spatial neighborhood information is combined with a classifier to enhance the ability to determine the class labels of the selected unlabeled samples. There are two key points in this algorithm: (1) the correct label is assumed to appear in the spatial neighborhood of an unlabeled sample; (2) the combination of classifiers can obtain better results. Two classifiers, multinomial logistic regression (MLR) and k-nearest neighbor (KNN), are combined in this way to further improve the performance. The performance of the proposed approach was assessed with two real hyperspectral data sets, and the obtained results indicate that the proposed approach is effective for hyperspectral classification.

  9. A generalised background correction algorithm for a Halo Doppler lidar and its application to data from Finland

    NASA Astrophysics Data System (ADS)

    Manninen, Antti J.; O'Connor, Ewan J.; Vakkari, Ville; Petäjä, Tuukka

    2016-03-01

    Current commercially available Doppler lidars provide an economical and robust solution for measuring vertical and horizontal wind velocities, together with the ability to provide co- and cross-polarised backscatter profiles. The high temporal resolution of these instruments allows turbulent properties to be obtained from studying the variation in radial velocities. However, the instrument specifications mean that certain characteristics, especially the background noise behaviour, become a limiting factor for the instrument sensitivity in regions where the aerosol load is low. Turbulent calculations require an accurate estimate of the contribution from velocity uncertainty estimates, which are directly related to the signal-to-noise ratio. Any bias in the signal-to-noise ratio will propagate through as a bias in turbulent properties. In this paper we present a method to correct for artefacts in the background noise behaviour of commercially available Doppler lidars and reduce the signal-to-noise ratio threshold used to discriminate between noise, and cloud or aerosol signals. We show that, for Doppler lidars operating continuously at a number of locations in Finland, the data availability can be increased by as much as 50 % after performing this background correction and subsequent reduction in the threshold. The reduction in bias also greatly improves subsequent calculations of turbulent properties in weak signal regimes.

  10. A generalised background correction algorithm for a Halo Doppler lidar and its application to data from Finland

    NASA Astrophysics Data System (ADS)

    Manninen, A. J.; O'Connor, E. J.; Vakkari, V.; Petäjä, T.

    2015-10-01

    Current commercially available Doppler lidars provide an economical and robust solution for measuring vertical and horizontal wind velocities, together with the ability to provide co- and cross-polarised backscatter profiles. The high temporal resolution of these instruments allows turbulent properties to be obtained from studying the variation in velocities. However, the instrument specifications mean that certain characteristics, especially the background noise behaviour, become a limiting factor for the instrument sensitivity in regions where the aerosol load is low. Turbulent calculations require an accurate estimate of the contribution from velocity uncertainty estimates, which are directly related to the signal-to-noise ratio. Any bias in the signal-to-noise ratio will propagate through as a bias in turbulent properties. In this paper we present a method to correct for artefacts in the background noise behaviour of commercially available Doppler lidars and reduce the signal-to-noise ratio threshold used to discriminate between noise, and cloud or aerosol signals. We show that, for Doppler lidars operating continuously at a number of locations in Finland, the data availability can be increased by as much as 50 % after performing this background correction and subsequent reduction in the threshold. The reduction in bias also greatly improves subsequent calculations of turbulent properties in weak signal regimes.

  11. Recognition of pornographic web pages by classifying texts and images.

    PubMed

    Hu, Weiming; Wu, Ou; Chen, Zhouyao; Fu, Zhouyu; Maybank, Steve

    2007-06-01

    With the rapid development of the World Wide Web, people benefit more and more from the sharing of information. However, Web pages with obscene, harmful, or illegal content can be easily accessed. It is important to recognize such unsuitable, offensive, or pornographic Web pages. In this paper, a novel framework for recognizing pornographic Web pages is described. A C4.5 decision tree is used to divide Web pages, according to content representations, into continuous text pages, discrete text pages, and image pages. These three categories of Web pages are handled, respectively, by a continuous text classifier, a discrete text classifier, and an algorithm that fuses the results from the image classifier and the discrete text classifier. In the continuous text classifier, statistical and semantic features are used to recognize pornographic texts. In the discrete text classifier, the naive Bayes rule is used to calculate the probability that a discrete text is pornographic. In the image classifier, the object's contour-based features are extracted to recognize pornographic images. In the text and image fusion algorithm, the Bayes theory is used to combine the recognition results from images and texts. Experimental results demonstrate that the continuous text classifier outperforms the traditional keyword-statistics-based classifier, the contour-based image classifier outperforms the traditional skin-region-based image classifier, the results obtained by our fusion algorithm outperform those by either of the individual classifiers, and our framework can be adapted to different categories of Web pages. PMID:17431300

  12. SU-E-I-05: A Correction Algorithm for Kilovoltage Cone-Beam Computed Tomography Dose Calculations in Cervical Cancer Patients

    SciTech Connect

    Zhang, J; Zhang, W; Lu, J

    2015-06-15

    Purpose: To investigate the accuracy and feasibility of dose calculations using kilovoltage cone-beam computed tomography in cervical cancer radiotherapy using a correction algorithm. Methods: The Hounsfield unit (HU) versus electron density (HU-density) curve was obtained for both the planning CT (pCT) and kilovoltage cone-beam CT (CBCT) using a CIRS-062 calibration phantom. The pCT and kV-CBCT images have different HU values, so using the CBCT HU-density curve directly for dose calculation on CBCT images may introduce deviations in the dose distribution; it is therefore necessary to normalize the HU values between pCT and CBCT. A HU correction algorithm was applied to the CBCT images (cCBCT). Fifteen intensity-modulated radiation therapy (IMRT) plans of cervical cancer were chosen, and the plans were transferred to the pCT and cCBCT data sets without any changes for dose calculations. Phantom and patient studies were carried out. The dose differences and dose distributions were compared between the cCBCT plan and the pCT plan. Results: The HU numbers of CBCT were measured several times, and the maximum change was less than 2%. Compared with pCT, both CBCT and cCBCT showed discrepancies: the dose differences for the CBCT and cCBCT images were 2.48%±0.65% (range: 1.3%∼3.8%) and 0.48%±0.21% (range: 0.1%∼0.82%) in the phantom study, respectively. For dose calculation on patient images, the dose differences were 2.25%±0.43% (range: 1.4%∼3.4%) and 0.63%±0.35% (range: 0.13%∼0.97%), respectively. For the dose distributions, the passing rate of cCBCT was higher than that of CBCT. Conclusion: CBCT-based dose calculation is feasible in cervical cancer radiotherapy, and the correction algorithm offers acceptable accuracy. It will become a useful tool for adaptive radiation therapy.
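
    The abstract does not detail the HU correction itself; the sketch below shows one plausible form, assuming a phantom-based piecewise-linear mapping from CBCT HU to pCT-equivalent HU so that the planning CT's HU-density curve can be reused on the corrected CBCT (cCBCT). The insert HU values are placeholders, not measurements from the study.

```python
# Hedged sketch: a phantom-based piecewise-linear mapping from CBCT HU to
# planning-CT-equivalent HU, applied before the pCT HU-to-density lookup.
# The insert values below are illustrative, not data from the abstract.
import numpy as np

# Mean HU of the same CIRS-062 inserts measured on pCT and on CBCT (illustrative numbers)
pct_hu  = np.array([-1000.0, -800.0, -100.0, 0.0, 200.0, 800.0])
cbct_hu = np.array([ -980.0, -760.0,  -60.0, 30.0, 250.0, 900.0])

def correct_cbct_hu(cbct_image):
    """Map raw CBCT HU values onto the pCT HU scale by piecewise-linear interpolation."""
    return np.interp(cbct_image, cbct_hu, pct_hu)

# Example: a small CBCT patch is remapped before the HU-to-density conversion
patch = np.array([[-900.0, 10.0], [260.0, 850.0]])
print(correct_cbct_hu(patch))
```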

  13. Algorithm for x-ray beam hardening and scatter correction in low-dose cone-beam CT: phantom studies

    NASA Astrophysics Data System (ADS)

    Liu, Wenlei; Rong, Junyan; Gao, Peng; Liao, Qimei; Lu, HongBing

    2016-03-01

    X-ray scatter, together with beam hardening, poses a significant limitation to image quality in cone-beam CT (CBCT), resulting in image artifacts, contrast reduction, and loss of CT number accuracy; meanwhile, the x-ray radiation dose is also non-negligible. Many scatter and beam hardening correction methods have been developed independently, but they are rarely combined with low-dose CT reconstruction. In this paper, we combine scatter suppression with beam hardening correction for sparse-view CT reconstruction to improve CT image quality and reduce CT radiation. Firstly, scatter was measured, estimated, and removed using measurement-based methods, assuming that the signal in the lead blocker shadow is attributable only to x-ray scatter. Secondly, beam hardening was modeled by estimating an equivalent attenuation coefficient at the effective energy, which was integrated into the forward projector of the algebraic reconstruction technique (ART). Finally, compressed sensing (CS) iterative reconstruction was carried out for sparse-view CT reconstruction to reduce the radiation dose. Preliminary Monte Carlo simulation experiments indicate that, with only about 25% of the conventional dose, our method reduces the magnitude of the cupping artifact by a factor of 6.1, increases the contrast by a factor of 1.4, and increases the CNR by a factor of 15. The proposed method provides good reconstructed images from a few projection views, with effective suppression of the artifacts caused by scatter and beam hardening, as well as a reduced radiation dose. With this framework and modeling, it may provide a new way for low-dose CT imaging.

  14. Computerized classified document accountability

    SciTech Connect

    Norris, C.B.; Lewin, R.

    1988-08-01

    This step-by-step procedure was established as a guideline to be used with the Savvy PC Database Program for the accountability of classified documents. Its purpose is to eventually phase out the use of logbooks for classified document tracking. The program runs on an IBM PC or compatible computer using a Bernoulli Box, a Hewlett Packard 71B Bar Code Reader, an IOMEGA Host Adapter Board for creating mirror images of data for backup purposes, and the Disk Operating System (DOS). The DOS batch files "IN" and "OUT" invoke the Savvy Databases for entering either incoming or outgoing documents. The main files are DESTRUCTION, INLOG, OUTLOG, and NAME-NUMBER. The fields in the files are Adding/Changing, Routing, Destroying, Search-Print by document identification, Search/Print Audit by bar code number, Print Holdings of a person, and Print Inventory of an office.

  15. Cascaded classifier for large-scale data applied to automatic segmentation of articular cartilage

    NASA Astrophysics Data System (ADS)

    Prasoon, Adhish; Igel, Christian; Loog, Marco; Lauze, François; Dam, Erik; Nielsen, Mads

    2012-02-01

    Many classification/segmentation tasks in medical imaging are particularly challenging for machine learning algorithms because of the huge amount of training data required to cover biological variability. Learning methods that scale badly in the number of training data points may not be applicable. This may exclude powerful classifiers with good generalization performance, such as standard non-linear support vector machines (SVMs). Further, many medical imaging problems have highly imbalanced class populations, because the object to be segmented has only a few pixels/voxels compared to the background. This article presents a two-stage classifier for large-scale medical imaging problems. In the first stage, a classifier that is easily trainable on large data sets is employed. The class imbalance is exploited and the classifier is adjusted to correctly detect background with very high accuracy. Only the comparatively few data points not identified as background are passed to the second stage. Here a powerful classifier with high training time complexity can be employed for making the final decision whether a data point belongs to the object or not. We applied our method to the problem of automatically segmenting tibial articular cartilage from knee MRI scans. We show that by using a k-nearest neighbor (kNN) classifier in the first stage we can reduce the amount of data for training a non-linear SVM in the second stage. The cascaded system achieves better results than the state-of-the-art method relying on a single kNN classifier.
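
    A minimal sketch of such a two-stage cascade, assuming a kNN first stage tuned to discard background only when it is almost certain, and an RBF-SVM second stage that sees just the remaining ambiguous samples. The threshold, the 0/1 label convention, and all hyperparameters are illustrative, not those of the paper.

```python
# Hedged sketch of the two-stage cascade: a kNN first stage that removes samples it
# is very confident are background, and a non-linear SVM second stage trained and
# evaluated only on the (much smaller) remainder.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

class CascadedClassifier:
    def __init__(self, k=25, background_threshold=0.99, svm_params=None):
        self.stage1 = KNeighborsClassifier(n_neighbors=k)
        self.stage2 = SVC(kernel="rbf", **(svm_params or {}))
        self.background_threshold = background_threshold  # P(background) needed to discard

    def fit(self, X, y):                        # y: 0 = background, 1 = cartilage
        self.stage1.fit(X, y)
        p_bg = self.stage1.predict_proba(X)[:, 0]   # column 0 is class 0 (background)
        keep = p_bg < self.background_threshold     # only ambiguous samples reach the SVM
        self.stage2.fit(X[keep], y[keep])
        return self

    def predict(self, X):
        p_bg = self.stage1.predict_proba(X)[:, 0]
        y_hat = np.zeros(len(X), dtype=int)         # default decision: background
        ambiguous = p_bg < self.background_threshold
        if ambiguous.any():
            y_hat[ambiguous] = self.stage2.predict(X[ambiguous])
        return y_hat
```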

  16. A simulation based approach to optimize inventory replenishment with RAND algorithm: An extended study of corrected demand using Holt's method for textile industry

    NASA Astrophysics Data System (ADS)

    Morshed, Mohammad Sarwar; Kamal, Mostafa Mashnoon; Khan, Somaiya Islam

    2016-07-01

    Inventory has been a major concern in supply chain management, and considerable research has been done recently on inventory control, yielding a number of methods that efficiently manage inventory and the related overheads by reducing the cost of replenishment. This research aims to provide a better replenishment policy for multi-product, single-supplier situations for chemical raw materials in the textile industries of Bangladesh. It is assumed that these industries currently pursue an individual replenishment system. The purpose is to find the optimum ideal cycle time and the individual replenishment cycle time of each product that yield the lowest annual holding and ordering cost, and also to find the optimum ordering quantity. In this paper an indirect grouping strategy has been used; it is suggested that indirect grouping outperforms direct grouping when the major cost is high. An algorithm by Kaspi and Rosenblatt (1991) called RAND is exercised for its simplicity and ease of application. RAND provides an ideal cycle time (T) for replenishment and an integer multiplier (ki) for each item, so the replenishment cycle time for each product is T×ki. Firstly, based on the data, a comparison between the currently prevailing (individual) process and RAND using the actual demands shows a 49% improvement in the total cost of replenishment. Secondly, discrepancies in demand are corrected by using Holt's method; however, demand can only be forecasted one or two months into the future because of the demand pattern of the industry under consideration. Evidently, the application of RAND with corrected demand displays even greater improvement. The results of this study demonstrate that the cost of replenishment can be significantly reduced by applying the RAND algorithm and exponential smoothing models.
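
    Holt's linear (double) exponential smoothing, which the study uses to produce corrected short-horizon demand forecasts feeding the RAND calculation, can be sketched in a few lines. The smoothing constants, the initialisation, and the toy demand series are illustrative assumptions.

```python
# Hedged sketch of Holt's level-plus-trend exponential smoothing for demand correction.
def holt_forecast(demand, alpha=0.3, beta=0.1, horizon=2):
    """Return `horizon` forecasts from Holt's smoothing of the observed `demand` series."""
    level, trend = demand[0], demand[1] - demand[0]   # simple initialisation
    for y in demand[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + h * trend for h in range(1, horizon + 1)]

# Example: monthly demand of one chemical, forecast one and two months ahead
monthly_demand = [120, 132, 128, 141, 150, 147, 158]
print(holt_forecast(monthly_demand))
```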

  17. A novel fuzzy logic correctional algorithm for traction control systems on uneven low-friction road conditions

    NASA Astrophysics Data System (ADS)

    Li, Liang; Ran, Xu; Wu, Kaihui; Song, Jian; Han, Zongqi

    2015-06-01

    The traction control system (TCS) prevents excessive skid of the driving wheels so as to enhance the driving performance and directional stability of the vehicle. But if driven on an uneven low-friction road, the vehicle body often vibrates severely due to the drastic fluctuations of the driving wheels, and vehicle comfort is greatly reduced. These vibrations can hardly be removed with the traditional drive-slip control logic of the TCS. In this paper, a novel fuzzy logic controller is put forward, in which the vibration signals of the driving wheels are adopted as new controlled variables, and the engine torque and the active brake pressure are then coordinately re-adjusted in addition to the basic logic of a traditional TCS. In the proposed controller, an adjustable engine torque loop and a pressure compensation loop are adopted to constrain the drastic vehicle vibration. Thus, the wheel driving slip and the vibration level can be adjusted synchronously and effectively. The simulation results and real vehicle tests validated that the proposed algorithm is effective and adaptable to complicated uneven low-friction roads.

  18. The Algorithm Theoretical Basis Document for the Atmospheric Delay Correction to GLAS Laser Altimeter Ranges. Volume 8

    NASA Technical Reports Server (NTRS)

    Herring, Thomas A.; Quinn, Katherine J.

    2012-01-01

    NASA's Ice, Cloud, and Land Elevation Satellite (ICESat) mission will be launched late in 2001. Its primary instrument is the Geoscience Laser Altimeter System (GLAS). The main purpose of this instrument is to measure elevation changes of the Greenland and Antarctic ice sheets. To measure the ranges accurately it is necessary to correct for the atmospheric delay of the laser pulses. The atmospheric delay depends on the integral of the refractive index along the path that the laser pulse travels through the atmosphere. The refractive index of air at optical wavelengths is a function of density and molecular composition. For ray paths near zenith and closed-form equations for the refractivity, the atmospheric delay can be shown to be directly related to surface pressure and total column precipitable water vapor. For ray paths off zenith, a mapping function relates the delay to the zenith delay. The closed-form equations for refractivity recommended by the International Union of Geodesy and Geophysics (IUGG) are optimized for ground-based geodesy techniques, and in the next section we consider whether these equations are suitable for satellite laser altimetry.

  19. New results in semi-supervised learning using adaptive classifier fusion

    NASA Astrophysics Data System (ADS)

    Lynch, Robert; Willett, Peter

    2014-05-01

    In typical classification problems the data used to train a model for each class are often correctly labeled, so that fully supervised learning can be utilized. For example, many illustrative labeled data sets can be found at sources such as the UCI Repository for Machine Learning (http://archive.ics.uci.edu/ml/) or at the Keel Data Set Repository (http://www.keel.es). However, increasingly many real-world classification problems involve data that contain both labeled and unlabeled samples. In the latter case, the data samples are assumed to be missing all class label information, and when used as training data these samples are considered to be of unknown origin (i.e., to the learning system, actual class membership is completely unknown). Typically, when presented with a classification problem containing both labeled and unlabeled training samples, a common approach is to throw out the unlabeled data. In other words, the unlabeled data are not included with the existing labeled data for learning, which can result in a poorly trained classifier that does not reach its full performance potential. In most cases, the primary reason that unlabeled data are not often used for training is that, depending on the classifier, the correct optimal model for semi-supervised classification (i.e., a classifier that learns class membership using both labeled and unlabeled samples) can be far too complicated to develop. In previous work, results were shown based on the fusion of binary classifiers to improve performance in multiclass classification problems. In this case, Bayesian methods were used to fuse binary classifier outputs, while selecting the most relevant classifier pairs to improve the overall classifier decision space. Here, this work is extended by developing new algorithms for improving semi-supervised classification performance. Results are demonstrated with real data from the UCI and Keel Repositories.

  20. Crystal and molecular structures of selected organic and organometallic compounds and an algorithm for empirical absorption correction

    SciTech Connect

    Karcher, B.

    1981-10-01

    Cr(CO)₅(SCMe₂) crystallizes in the monoclinic space group P2₁/a with a = 10.468(8), b = 11.879(5), c = 9.575(6) Å, and β = 108.14(9)°, with an octahedral coordination around the chromium atom. PSN₃C₆H₁₂ crystallizes in the monoclinic space group P2₁/n with a = 10.896(1), b = 11.443(1), c = 7.288(1) Å, and β = 104.45(1)°. Each of the five-membered rings in this structure contains a carbon atom which is puckered toward the sulfur and out of the nearly planar arrays of the remaining ring atoms. (RhO₄N₄C₄₈H₅₆)⁺(BC₂₄H₂₀)⁻·1.5NC₂H₃ crystallizes in the triclinic space group P1 with a = 17.355(8), b = 21.135(10), c = 10.757(5) Å, α = 101.29(5), β = 98.36(5), and γ = 113.92(4)°. Each Rh cation complex is a monomer. MoP₂O₁₀C₁₆H₂₂ crystallizes in the monoclinic space group P2₁/c with a = 12.220(3), b = 9.963(2), c = 20.150(6) Å, and β = 103.01(3)°. The molybdenum atom occupies the axial position of the six-membered ring of each of the two phosphorinane ligands. An empirical absorption correction program was written.

  1. A General Fuzzy Cerebellar Model Neural Network Multidimensional Classifier Using Intuitionistic Fuzzy Sets for Medical Identification

    PubMed Central

    Zhao, Jing; Lin, Lo-Yi

    2016-01-01

    The diversity of medical factors makes the analysis and judgment of uncertainty one of the challenges of medical diagnosis. A well-designed classification and judgment system for medical uncertainty can increase the rate of correct medical diagnosis. In this paper, a new multidimensional classifier is proposed by using an intelligent algorithm, which is the general fuzzy cerebellar model neural network (GFCMNN). To obtain more information about uncertainty, an intuitionistic fuzzy linguistic term is employed to describe medical features. The solution of classification is obtained by a similarity measurement. The advantages of the novel classifier proposed here are demonstrated by comparing the same medical example under the methods of intuitionistic fuzzy sets (IFSs) and intuitionistic fuzzy cross-entropy (IFCE) with different score functions. Cross-verification experiments are also conducted to further test the classification ability of the GFCMNN multidimensional classifier. All of these experimental results show the effectiveness of the proposed GFCMNN multidimensional classifier and indicate that it can support correct medical diagnoses associated with multiple categories. PMID:27298619

  2. A General Fuzzy Cerebellar Model Neural Network Multidimensional Classifier Using Intuitionistic Fuzzy Sets for Medical Identification.

    PubMed

    Zhao, Jing; Lin, Lo-Yi; Lin, Chih-Min

    2016-01-01

    The diversity of medical factors makes the analysis and judgment of uncertainty one of the challenges of medical diagnosis. A well-designed classification and judgment system for medical uncertainty can increase the rate of correct medical diagnosis. In this paper, a new multidimensional classifier is proposed by using an intelligent algorithm, which is the general fuzzy cerebellar model neural network (GFCMNN). To obtain more information about uncertainty, an intuitionistic fuzzy linguistic term is employed to describe medical features. The solution of classification is obtained by a similarity measurement. The advantages of the novel classifier proposed here are demonstrated by comparing the same medical example under the methods of intuitionistic fuzzy sets (IFSs) and intuitionistic fuzzy cross-entropy (IFCE) with different score functions. Cross-verification experiments are also conducted to further test the classification ability of the GFCMNN multidimensional classifier. All of these experimental results show the effectiveness of the proposed GFCMNN multidimensional classifier and indicate that it can support correct medical diagnoses associated with multiple categories. PMID:27298619

  3. Generating compact classifier systems using a simple artificial immune system.

    PubMed

    Leung, Kevin; Cheong, France; Cheong, Christopher

    2007-10-01

    Current artificial immune system (AIS) classifiers have two major problems: 1) their populations of B-cells can grow to huge proportions, and 2) optimizing one B-cell (part of the classifier) at a time does not necessarily guarantee that the B-cell pool (the whole classifier) will be optimized. In this paper, the design of a new AIS algorithm and classifier system called simple AIS is described. It is different from traditional AIS classifiers in that it takes only one B-cell, instead of a B-cell pool, to represent the classifier. This approach ensures global optimization of the whole system, and in addition, no population control mechanism is needed. The classifier was tested on seven benchmark data sets using different classification techniques and was found to be very competitive when compared to other classifiers.

  4. Training a CAD classifier with correlated data

    NASA Astrophysics Data System (ADS)

    Dundar, Murat; Krishnapuram, Balaji; Wolf, Matthias; Lakare, Sarang; Bogoni, Luca; Bi, Jinbo; Rao, R. Bharat

    2007-03-01

    Most methods for classifier design assume that the training samples are drawn independently and identically from an unknown data generating distribution (i.i.d.), although this assumption is violated in several real life problems. Relaxing this i.i.d. assumption, we develop training algorithms for the more realistic situation where batches or sub-groups of training samples may have internal correlations, although the samples from different batches may be considered to be uncorrelated; we also consider the extension to cases with hierarchical--i.e. higher order--correlation structure between batches of training samples. After describing efficient algorithms that scale well to large datasets, we provide some theoretical analysis to establish their validity. Experimental results from real-life Computer Aided Detection (CAD) problems indicate that relaxing the i.i.d. assumption leads to statistically significant improvements in the accuracy of the learned classifier.

  5. Classification of Horse Gaits Using FCM-Based Neuro-Fuzzy Classifier from the Transformed Data Information of Inertial Sensor.

    PubMed

    Lee, Jae-Neung; Lee, Myung-Won; Byeon, Yeong-Hyeon; Lee, Won-Sik; Kwak, Keun-Chang

    2016-01-01

    In this study, we classify four horse gaits (walk, sitting trot, rising trot, canter) of three breeds of horse (Jeju, Warmblood, and Thoroughbred) using a neuro-fuzzy classifier (NFC) of the Takagi-Sugeno-Kang (TSK) type from data information transformed by a wavelet packet (WP). The design of the NFC is accomplished by using a fuzzy c-means (FCM) clustering algorithm that can solve the problem of dimensionality increase due to the flexible scatter partitioning. For this purpose, we use the rider's hip motion from the sensor information collected by inertial sensors as feature data for the classification of a horse's gaits. Furthermore, we develop a coaching system under both real horse riding and simulator environments and propose a method for analyzing the rider's motion. Using the results of the analysis, the rider can be coached in the correct motion corresponding to the classified gait. To construct a motion database, the data collected from 16 inertial sensors attached to a motion capture suit worn by one of the country's top-level horse riding experts were used. Experiments using the original motion data and the transformed motion data were conducted to evaluate the classification performance using various classifiers. The experimental results revealed that the presented FCM-NFC showed a better accuracy performance (97.5%) than a neural network classifier (NNC), naive Bayesian classifier (NBC), and radial basis function network classifier (RBFNC) for the transformed motion data. PMID:27171098

  6. Classification of Horse Gaits Using FCM-Based Neuro-Fuzzy Classifier from the Transformed Data Information of Inertial Sensor

    PubMed Central

    Lee, Jae-Neung; Lee, Myung-Won; Byeon, Yeong-Hyeon; Lee, Won-Sik; Kwak, Keun-Chang

    2016-01-01

    In this study, we classify four horse gaits (walk, sitting trot, rising trot, canter) of three breeds of horse (Jeju, Warmblood, and Thoroughbred) using a neuro-fuzzy classifier (NFC) of the Takagi-Sugeno-Kang (TSK) type from data information transformed by a wavelet packet (WP). The design of the NFC is accomplished by using a fuzzy c-means (FCM) clustering algorithm that can solve the problem of dimensionality increase due to the flexible scatter partitioning. For this purpose, we use the rider’s hip motion from the sensor information collected by inertial sensors as feature data for the classification of a horse’s gaits. Furthermore, we develop a coaching system under both real horse riding and simulator environments and propose a method for analyzing the rider’s motion. Using the results of the analysis, the rider can be coached in the correct motion corresponding to the classified gait. To construct a motion database, the data collected from 16 inertial sensors attached to a motion capture suit worn by one of the country’s top-level horse riding experts were used. Experiments using the original motion data and the transformed motion data were conducted to evaluate the classification performance using various classifiers. The experimental results revealed that the presented FCM-NFC showed a better accuracy performance (97.5%) than a neural network classifier (NNC), naive Bayesian classifier (NBC), and radial basis function network classifier (RBFNC) for the transformed motion data. PMID:27171098

  7. Evaluation and Analysis of SEASAT-A Scanning Multichannel Microwave Radiometer (SSMR) Antenna Pattern Correction (APC) Algorithm. Sub-task 4: Interim Mode T Sub B Versus Cross and Nominal Mode T Sub B

    NASA Technical Reports Server (NTRS)

    Kitzis, J. L.; Kitzis, S. N.

    1979-01-01

    The brightness temperature data produced by the SMMR Antenna Pattern Correction algorithm are evaluated. The evaluation consists of: (1) a direct comparison of the outputs of the interim, cross, and nominal APC modes; (2) a refinement of the previously determined cos beta estimates; and (3) a comparison of the world brightness temperature (T sub B) map with actual SMMR measurements.

  8. Robust Framework to Combine Diverse Classifiers Assigning Distributed Confidence to Individual Classifiers at Class Level

    PubMed Central

    Arshad, Sannia; Rho, Seungmin

    2014-01-01

    We present a classification framework that combines multiple heterogeneous classifiers in the presence of class label noise. An extension of m-Mediods based modeling is presented that generates models of the various classes while identifying and filtering noisy training data. The noise-free data are then used to learn models for other classifiers such as GMM and SVM. A weight learning method is introduced to learn weights on each class for the different classifiers in order to construct an ensemble. For this purpose, we apply a genetic algorithm to search for an optimal weight vector on which the classifier ensemble is expected to give the best accuracy. The proposed approach is evaluated on a variety of real-life datasets. It is also compared with existing standard ensemble techniques such as AdaBoost, Bagging, and Random Subspace Methods. Experimental results show the superiority of the proposed ensemble method over its competitors, especially in the presence of class label noise and imbalanced classes. PMID:25295302
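
    A minimal sketch of the weight-learning step, assuming each trained base classifier already provides class-probability outputs on a validation set and a small genetic algorithm searches for per-class fusion weights that maximise validation accuracy. The GA operators, population settings, and array shapes are illustrative assumptions.

```python
# Hedged sketch: a simple genetic algorithm that searches per-class fusion weights
# for already-trained classifiers, using ensemble accuracy on a validation set as fitness.
import numpy as np

rng = np.random.default_rng(0)

def weighted_vote(probas, weights):
    """probas: (n_clf, n_samples, n_classes); weights: (n_clf, n_classes)."""
    fused = (probas * weights[:, None, :]).sum(axis=0)
    return fused.argmax(axis=1)

def fitness(weights, probas, y_val):
    return (weighted_vote(probas, weights) == y_val).mean()

def evolve_weights(probas, y_val, pop_size=40, generations=60, mut_sigma=0.1):
    n_clf, _, n_classes = probas.shape
    pop = rng.random((pop_size, n_clf, n_classes))
    for _ in range(generations):
        scores = np.array([fitness(w, probas, y_val) for w in pop])
        # tournament selection: keep the better of two randomly chosen individuals
        a, b = rng.integers(pop_size, size=(2, pop_size))
        parents = np.where((scores[a] > scores[b])[:, None, None], pop[a], pop[b])
        # uniform crossover between consecutive parents, then Gaussian mutation
        mask = rng.random(parents.shape) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        children = np.clip(children + rng.normal(0, mut_sigma, children.shape), 0, None)
        children[0] = pop[scores.argmax()]          # elitism: carry over the best so far
        pop = children
    scores = np.array([fitness(w, probas, y_val) for w in pop])
    return pop[scores.argmax()]
```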

  9. Validity of lung correction algorithms

    SciTech Connect

    Tang, W.L.; Khan, F.M.; Gerbi, B.J.

    1986-09-01

    Our studies have compared the "effective tissue-air ratio (TAR) method" (ICRU Report No. 24), the "equivalent TAR method," and the "generalized Batho method" (currently used by the TP-11 computer treatment planning system) with measured results for different energy photon beams, using two lung inhomogeneities to simulate a lateral chest field. Significant differences on the order of 3%-15% were found when comparing these various methods with measured values.

  10. New algorithm for efficient pattern recall using a static threshold with the Steinbuch Lernmatrix

    NASA Astrophysics Data System (ADS)

    Juan Carbajal Hernández, José; Sánchez Fernández, Luis P.

    2011-03-01

    An associative memory is a binary relationship between inputs and outputs, which is stored in an M matrix. The fundamental purpose of an associative memory is to recover correct output patterns from input patterns, which can be altered by additive, subtractive or combined noise. The Steinbuch Lernmatrix was the first associative memory developed in 1961, and is used as a pattern recognition classifier. However, a misclassification problem is presented when crossbar saturation occurs. A new algorithm that corrects the misclassification in the Lernmatrix is proposed in this work. The results of crossbar saturation with fundamental patterns demonstrate a better performance of pattern recalling using the new algorithm. Experiments with real data show a more efficient classifier when the algorithm is introduced in the original Lernmatrix. Therefore, the thresholded Lernmatrix memory emerges as a suitable and alternative classifier to be used in the developing pattern processing field.
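
    A minimal sketch of Lernmatrix-style storage and recall with a static threshold, illustrating the general idea rather than the exact correction proposed in the paper; the reinforcement constant, the class layout, and the toy patterns are assumptions.

```python
# Hedged sketch: Lernmatrix-style storage and recall. Training reinforces matrix
# entries where an input bit and its class row coincide and penalises the rest;
# recall activates the classes whose response clears a static threshold instead of
# relying only on the (saturation-prone) maximum response.
import numpy as np

class ThresholdedLernmatrix:
    def __init__(self, n_inputs, n_classes, eps=1.0):
        self.M = np.zeros((n_classes, n_inputs))
        self.eps = eps

    def train(self, x, class_index):
        """x is a binary input pattern; reinforce/penalise the row of its class."""
        self.M[class_index] += np.where(x == 1, self.eps, -self.eps)

    def recall(self, x, threshold):
        """Return the classes whose activation reaches the static threshold."""
        activation = self.M @ x
        return np.flatnonzero(activation >= threshold)

# Toy usage: two classes over 6-bit patterns, recall from an altered (noisy) input
mem = ThresholdedLernmatrix(n_inputs=6, n_classes=2)
mem.train(np.array([1, 1, 0, 0, 1, 0]), 0)
mem.train(np.array([0, 0, 1, 1, 0, 1]), 1)
print(mem.recall(np.array([1, 1, 0, 0, 0, 0]), threshold=1.0))   # expected: [0]
```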

  11. Jitter Correction

    NASA Technical Reports Server (NTRS)

    Waegell, Mordecai J.; Palacios, David M.

    2011-01-01

    Jitter_Correct.m is a MATLAB function that automatically measures and corrects inter-frame jitter in an image sequence to a user-specified precision. In addition, the algorithm dynamically adjusts the image sample size to increase the accuracy of the measurement. The Jitter_Correct.m function takes an image sequence with unknown frame-to-frame jitter and computes the translations of each frame (column and row, in pixels) relative to a chosen reference frame with sub-pixel accuracy. The translations are measured using a cross-correlation Fourier transform method in which the relative phase of the two transformed images is fit to a plane. The measured translations are then used to correct the inter-frame jitter of the image sequence. The function also dynamically expands the image sample size over which the cross-correlation is measured to increase the accuracy of the measurement. This increases the robustness of the measurement to variable magnitudes of inter-frame jitter.
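
    The core of the approach can be illustrated with a simple phase-correlation sketch. Note that the function described above fits the relative phase of the two transformed images to a plane to obtain sub-pixel shifts and adapts the sample size; the variant below only locates the integer-pixel peak of the phase-correlation surface and is meant to show the principle.

```python
# Hedged sketch: estimate inter-frame translation by Fourier phase correlation
# (integer-pixel peak only; the actual tool achieves sub-pixel accuracy).
import numpy as np

def estimate_shift(reference, frame):
    """Return (row_shift, col_shift) of `frame` relative to `reference`."""
    F1, F2 = np.fft.fft2(reference), np.fft.fft2(frame)
    cross_power = np.conj(F1) * F2
    cross_power /= np.abs(cross_power) + 1e-12        # keep only the relative phase
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # map peaks in the upper half of each axis to negative shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, correlation.shape))

# Toy usage: a frame shifted by (3, -2) pixels relative to the reference
rng = np.random.default_rng(1)
ref = rng.random((64, 64))
shifted = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)
print(estimate_shift(ref, shifted))                   # expected: (3, -2)
```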

  12. A Comparison of Unsupervised Classifiers on BATSE Catalog Data

    NASA Astrophysics Data System (ADS)

    Hakkila, Jon; Roiger, Richard J.; Haglin, David J.; Giblin, Timothy W.; Paciesas, William S.

    2003-04-01

    We classify BATSE gamma-ray bursts using unsupervised clustering algorithms in order to compare classification with statistical clustering techniques. BATSE bursts detected with homogeneous trigger criteria and measured with a limited attribute set (duration, hardness, and fluence) are classified using four unsupervised algorithms (the concept hierarchy classifier ESX, the EM algorithm, the K-means algorithm, and a Kohonen neural network). The classifiers prefer three-class solutions to two-class and four-class solutions. When forced to find two classes, the classifiers do not find the traditional long and short classes; many short soft events are placed in a class with the short hard bursts. When three classes are found, the classifiers clearly identify the short bursts, but place far more members in an intermediate duration soft class than have been found using statistical clustering techniques. It appears that the boundary between short faint and long bright bursts is more important to the classifiers than is the boundary between short hard and long soft bursts. We conclude that the boundary between short faint and long hard bursts is the result of data bias and poor attribute selection. We recommend that future gamma-ray burst classification avoid using extrinsic parameters such as fluence, and should instead concentrate on intrinsic properties such as spectral, temporal, and (when available) luminosity characteristics. Future classification should also be wary of correlated attributes (such as fluence and duration), as these bias classification results.

  13. Does the traditional snakebite severity score correctly classify envenomated patients?

    PubMed Central

    Kang, Seungho; Moon, Jeongmi; Chun, Byeongjo

    2016-01-01

    Objective This study aims to help set domestic guidelines for administration of antivenom to envenomated patients after snakebites. Methods This retrospective observational case series comprised 128 patients with snake envenomation. The patients were divided into two groups according to the need for additional antivenom after the initial treatment based on the traditional snakebite severity grading scale. One group successfully recovered after the initial treatment and did not need any additional antivenom (n=85) and the other needed an additional administration of antivenom (n=43). Results The group requiring additional administration of antivenom showed a higher local effect score and a traditional snakebite severity grade at presentation, a shorter prothrombin and activated partial prothrombin time, a higher frequency of rhabdomyolysis and disseminated intravascular coagulopathy, and longer hospitalization than the group that did not need additional antivenom. The most common cause for additional administration was the progression of local symptoms. The independent factor that was associated with the need for additional antivenom was the local effect pain score (odds ratio, 2.477; 95% confidence interval, 1.309 to 4.689). The optimal cut-off value of the local effect pain score was 1.5 with 62.8% sensitivity and 71.8% specificity. Conclusion When treating patients who are envenomated by a snake, and when using the traditional snakebite severity scale, the local effect pain score should be taken into account. If the score is more than 2, additional antivenom should be considered and the patient should be frequently assessed. PMID:27752613

  14. Characterization of aluminum hydroxide particles from the Bayer process using neural network and Bayesian classifiers.

    PubMed

    Zaknich, A

    1997-01-01

    An automatic process for isolating and characterizing individual aluminum hydroxide particles from the Bayer process in scanning electron microscope gray-scale images of samples is described. It uses image processing algorithms, neural nets, and Bayesian classifiers. Because the particles are amorphous and differ greatly, complex nonlinear decisions and anomalies are involved. The process has two stages: isolation of particles, and classification of each particle. The isolation process correctly identifies 96.9% of the objects as complete, single particles after a 15.5% rejection of questionable objects. The sample set had a possible 2455 particles taken from 384 256x256-pixel images. Of the 15.5%, 14.2% were correctly rejected. With no rejection the accuracy drops to 91.8%, which represents the accuracy of the isolation process alone. The isolated particles are classified by shape, single-crystal protrusions, texture, crystal size, and agglomeration. The particle samples were preclassified by a human expert and the data were used to train the five classifiers to embody the expert knowledge. The system was designed to be used as a research tool to determine and study relationships between particle properties and plant parameters in the production of smelting-grade alumina by the Bayer process.

  15. Learning algorithms for both real-time detection of solder shorts and for SPC measurement correction using cross-sectional x-ray images of PCBA solder joints

    NASA Astrophysics Data System (ADS)

    Roder, Paul A.

    1994-03-01

    Learning algorithms are introduced for use in the inspection of cross-sectional X-ray images of solder joints. These learning algorithms improve measurement accuracy by accounting for localized shading effects that can occur when inspecting double- sided printed circuit board assemblies. Two specific examples are discussed. The first is an algorithm for detection of solder short defects. The second algorithm utilizes learning to generate more accurate statistical process control measurements.

  16. Optimization of short amino acid sequences classifier

    NASA Astrophysics Data System (ADS)

    Barcz, Aleksy; Szymański, Zbigniew

    This article describes processing methods used for the classification of short amino acid sequences. The data processed are 9-symbol string representations of amino acid sequences, divided into 49 data sets, each containing samples labeled as reacting or not reacting with a given enzyme. The goal of the classification is to determine, for a single enzyme, whether an amino acid sequence would react with it or not. Each data set is processed separately. Feature selection is performed to reduce the number of dimensions for each data set. The method used for feature selection consists of two phases. During the first phase, significant positions are selected using Classification and Regression Trees. Afterwards, symbols appearing at the selected positions are substituted with numeric values of amino acid properties taken from the AAindex database. In the second phase the new set of features is reduced using a correlation-based ranking formula and Gram-Schmidt orthogonalization. Finally, the preprocessed data are used for training LS-SVM classifiers. SPDE, an evolutionary algorithm, is used to obtain optimal hyperparameters for the LS-SVM classifier, such as the error penalty parameter C and kernel-specific hyperparameters. A simple score penalty is used to adapt the SPDE algorithm to the task of selecting classifiers with the best performance measure values.

  17. Chlorophyll-a concentration estimation with three bio-optical algorithms: correction for the low concentration range for the Yiam Reservoir, Korea

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Bio-optical algorithms have been applied to monitor water quality in surface water systems. Empirical algorithms, such as Ritchie (2008), Gons (2008), and Gilerson (2010), have been applied to estimate the chlorophyll-a (chl-a) concentrations. However, the performance of each algorithm severely degr...

  18. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.

    1986-01-01

    Systolic algorithms are a class of parallel algorithms, with small-grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving the correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying the correctness of systolic algorithms based on solving the representation of an algorithm as recurrence equations. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving the correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.

  19. Emergent behaviors of classifier systems

    SciTech Connect

    Forrest, S.; Miller, J.H.

    1989-01-01

    This paper discusses some examples of emergent behavior in classifier systems, describes some recently developed methods for studying them based on dynamical systems theory, and presents some initial results produced by the methodology. The goal of this work is to find techniques for noticing when interesting emergent behaviors of classifier systems emerge, to study how such behaviors might emerge over time, and make suggestions for designing classifier systems that exhibit preferred behaviors. 20 refs., 1 fig.

  20. Visual Classifier Training for Text Document Retrieval.

    PubMed

    Heimerl, F; Koch, S; Bosch, H; Ertl, T

    2012-12-01

    Performing exhaustive searches over a large number of text documents can be tedious, since it is very hard to formulate search queries or define filter criteria that capture an analyst's information need adequately. Classification through machine learning has the potential to improve search and filter tasks encompassing either complex or very specific information needs, individually. Unfortunately, analysts who are knowledgeable in their field are typically not machine learning specialists. Most classification methods, however, require a certain expertise regarding their parametrization to achieve good results. Supervised machine learning algorithms, in contrast, rely on labeled data, which can be provided by analysts. However, the effort for labeling can be very high, which shifts the problem from composing complex queries or defining accurate filters to another laborious task, in addition to the need for judging the trained classifier's quality. We therefore compare three approaches for interactive classifier training in a user study. All of the approaches are potential candidates for the integration into a larger retrieval system. They incorporate active learning to various degrees in order to reduce the labeling effort as well as to increase effectiveness. Two of them encompass interactive visualization for letting users explore the status of the classifier in context of the labeled documents, as well as for judging the quality of the classifier in iterative feedback loops. We see our work as a step towards introducing user controlled classification methods in addition to text search and filtering for increasing recall in analytics scenarios involving large corpora.

  1. Classifying bed inclination using pressure images.

    PubMed

    Baran Pouyan, M; Ostadabbas, S; Nourani, M; Pompeo, M

    2014-01-01

    Pressure ulcers are one of the most prevalent problems for bed-bound patients in hospitals and nursing homes. Pressure ulcers are painful for patients and costly for healthcare systems. Accurate in-bed posture analysis can significantly help in preventing pressure ulcers. Specifically, bed inclination (back angle) is a factor contributing to pressure ulcer development. In this paper, an efficient methodology is proposed to classify bed inclination. Our approach uses pressure values collected from a commercial pressure mat system. Then, by applying a number of image processing and machine learning techniques, the approximate degree of bed inclination is estimated and classified. The proposed algorithm was tested on 15 subjects of various sizes and weights. The experimental results indicate that our method predicts bed inclination in three classes with 80.3% average accuracy.

  2. A three-parameter model for classifying anurans into four genera based on advertisement calls.

    PubMed

    Gingras, Bruno; Fitch, William Tecumseh

    2013-01-01

    The vocalizations of anurans are innate in structure and may therefore contain indicators of phylogenetic history. Thus, advertisement calls of species which are more closely related phylogenetically are predicted to be more similar than those of distant species. This hypothesis was evaluated by comparing several widely used machine-learning algorithms. Recordings of advertisement calls from 142 species belonging to four genera were analyzed. A logistic regression model, using mean values for dominant frequency, coefficient of variation of root-mean square energy, and spectral flux, correctly classified advertisement calls with regard to genus with an accuracy above 70%. Similar accuracy rates were obtained using these parameters with a support vector machine model, a K-nearest neighbor algorithm, and a multivariate Gaussian distribution classifier, whereas a Gaussian mixture model performed slightly worse. In contrast, models based on mel-frequency cepstral coefficients did not fare as well. Comparable accuracy levels were obtained on out-of-sample recordings from 52 of the 142 original species. The results suggest that a combination of low-level acoustic attributes is sufficient to discriminate efficiently between the vocalizations of these four genera, thus supporting the initial premise and validating the use of high-throughput algorithms on animal vocalizations to evaluate phylogenetic hypotheses. PMID:23297926

  3. Mapping algorithms on regular parallel architectures

    SciTech Connect

    Lee, P.

    1989-01-01

    It is significant that many time-intensive scientific algorithms are formulated as nested loops, which are inherently regularly structured. In this dissertation the relations between the mathematical structure of nested loop algorithms and the architectural capabilities required for their parallel execution are studied. The architectural model considered in depth is that of an arbitrary dimensional systolic array. The mathematical structure of the algorithm is characterized by classifying its data-dependence vectors according to the new ZERO-ONE-INFINITE property introduced. Using this classification, the first complete set of necessary and sufficient conditions for correct transformation of a nested loop algorithm onto a given systolic array of an arbitrary dimension by means of linear mappings is derived. Practical methods to derive optimal or suboptimal systolic array implementations are also provided. The techniques developed are used constructively to develop families of implementations satisfying various optimization criteria and to design programmable arrays efficiently executing classes of algorithms. In addition, a Computer-Aided Design system running on SUN workstations has been implemented to help in the design. The methodology, which deals with general algorithms, is illustrated by synthesizing linear and planar systolic array algorithms for matrix multiplication, a reindexed Warshall-Floyd transitive closure algorithm, and the longest common subsequence algorithm.

  4. Learnability of min-max pattern classifiers

    NASA Astrophysics Data System (ADS)

    Yang, Ping-Fai; Maragos, Petros

    1991-11-01

    This paper introduces the class of thresholded min-max functions and studies their learning under the probably approximately correct (PAC) model introduced by Valiant. These functions can be used as pattern classifiers of both real-valued and binary-valued feature vectors. They are a lattice-theoretic generalization of Boolean functions and are also related to three-layer perceptrons and morphological signal operators. Several subclasses of the thresholded min-max functions are shown to be learnable under the PAC model.

  5. Sensitivity of Satellite-Based Skin Temperature to Different Surface Emissivity and NWP Reanalysis Sources Demonstrated Using a Single-Channel, Viewing-Angle-Corrected Retrieval Algorithm

    NASA Astrophysics Data System (ADS)

    Scarino, B. R.; Minnis, P.; Yost, C. R.; Chee, T.; Palikonda, R.

    2015-12-01

    Single-channel algorithms for satellite thermal-infrared- (TIR-) derived land and sea surface skin temperature (LST and SST) are advantageous in that they can be easily applied to a variety of satellite sensors. They can also accommodate decade-spanning instrument series, particularly for periods when split-window capabilities are not available. However, the benefit of one unified retrieval methodology for all sensors comes at the cost of critical sensitivity to surface emissivity (ɛs) and atmospheric transmittance estimation. It has been demonstrated that as little as 0.01 variance in ɛs can amount to more than a 0.5-K adjustment in retrieved LST values. Atmospheric transmittance requires calculations that employ vertical profiles of temperature and humidity from numerical weather prediction (NWP) models. Selection of a given NWP model can significantly affect LST and SST agreement relative to their respective validation sources. Thus, it is necessary to understand the accuracies of the retrievals for various NWP models to ensure the best LST/SST retrievals. The sensitivities of the single-channel retrievals to surface emittance and NWP profiles are investigated using NASA Langley historic land and ocean clear-sky skin temperature (Ts) values derived from high-resolution 11-μm TIR brightness temperature measured from geostationary satellites (GEOSat) and Advanced Very High Resolution Radiometers (AVHRR). It is shown that mean GEOSat-derived, anisotropy-corrected LST can vary by up to ±0.8 K depending on whether CERES or MODIS ɛs sources are used. Furthermore, the use of either NOAA Global Forecast System (GFS) or NASA Goddard Modern-Era Retrospective Analysis for Research and Applications (MERRA) for the radiative transfer model initial atmospheric state can account for more than 0.5-K variation in mean Ts. The results are compared to measurements from the Surface Radiation Budget Network (SURFRAD), an Atmospheric Radiation Measurement (ARM) Program ground

  6. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.; Subrahmanyam, P.A.

    1988-12-01

    The authors present a methodology for verifying the correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify the correctness of systolic algorithms, using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.

  7. Contrast image correction method

    NASA Astrophysics Data System (ADS)

    Schettini, Raimondo; Gasparini, Francesca; Corchs, Silvia; Marini, Fabrizio; Capra, Alessandro; Castorina, Alfio

    2010-04-01

    A method for contrast enhancement is proposed. The algorithm is based on a local and image-dependent exponential correction. The technique aims to correct images that simultaneously present overexposed and underexposed regions. To prevent halo artifacts, the bilateral filter is used as the mask of the exponential correction. Depending on the characteristics of the image (piloted by histogram analysis), an automated parameter-tuning step is introduced, followed by stretching, clipping, and saturation preserving treatments. Comparisons with other contrast enhancement techniques are presented. The Mean Opinion Score (MOS) experiment on grayscale images gives the greatest preference score for our algorithm.
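
    A minimal sketch of a locally adaptive exponential correction whose exponent is driven by a bilateral-filtered luminance mask (using OpenCV's bilateralFilter), assuming a single-channel 8-bit input and one common exponent law. The paper's exact mask construction, automated parameter tuning, and the stretching, clipping, and saturation-preserving steps are not reproduced here.

```python
# Hedged sketch: local exponential (gamma-like) correction with a bilateral-filter
# mask, so dark regions are lifted and bright regions compressed without strong halos.
import cv2
import numpy as np

def local_exponential_correction(gray_u8, d=9, sigma_color=75, sigma_space=75):
    """gray_u8: single-channel 8-bit image; returns the corrected 8-bit image."""
    mask = cv2.bilateralFilter(gray_u8, d, sigma_color, sigma_space).astype(np.float32)
    x = gray_u8.astype(np.float32) / 255.0
    # exponent < 1 (brightening) where the local neighbourhood is dark,
    # exponent > 1 (compression) where it is bright
    gamma = 2.0 ** ((mask - 128.0) / 128.0)
    corrected = np.power(x, gamma)
    return np.clip(corrected * 255.0, 0, 255).astype(np.uint8)
```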

  8. Remote Sensing Data Binary Classification Using Boosting with Simple Classifiers

    NASA Astrophysics Data System (ADS)

    Nowakowski, Artur

    2015-10-01

    Boosting is a classification method which has been proven useful in non-satellite image processing while it is still new to satellite remote sensing. It is a meta-algorithm which builds a strong classifier from many weak ones in an iterative way. We adapt the AdaBoost.M1 boosting algorithm to a new land cover classification scenario based on the utilization of very simple threshold classifiers employing spectral and contextual information. Thresholds for the classifiers are calculated automatically and adaptively from the data statistics. The proposed method is employed for the exemplary problem of artificial area identification. Classification of IKONOS multispectral data results in short computational time and an overall accuracy of 94.4%, compared to 94.0% obtained by using AdaBoost.M1 with trees and 93.8% achieved using Random Forest. The influence of manipulating the final threshold of the strong classifier on the classification results is reported.
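
    A minimal sketch of AdaBoost.M1 built from very simple threshold ("stump") weak classifiers, in the spirit of the scheme described above; the brute-force threshold search and the 0/1 label convention are illustrative assumptions rather than the paper's adaptive, statistics-based thresholds.

```python
# Hedged sketch: AdaBoost.M1 over one-feature threshold classifiers for a binary problem.
import numpy as np

def train_stump(X, y, w):
    """Pick the (feature, threshold, polarity) with the lowest weighted error."""
    best = (None, None, 1, np.inf)                       # feature, threshold, polarity, error
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for polarity in (1, -1):
                pred = np.where(polarity * (X[:, f] - t) > 0, 1, 0)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (f, t, polarity, err)
    return best

def adaboost_m1(X, y, rounds=20):
    """y must be 0/1. Returns a list of ((feature, threshold, polarity), weight) pairs."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        f, t, polarity, err = train_stump(X, y, w)
        err = max(err, 1e-10)
        if err >= 0.5:                                   # weak learner no better than chance
            break
        beta = err / (1.0 - err)
        pred = np.where(polarity * (X[:, f] - t) > 0, 1, 0)
        w[pred == y] *= beta                             # down-weight correctly classified samples
        w /= w.sum()
        ensemble.append(((f, t, polarity), np.log(1.0 / beta)))
    return ensemble

def predict(ensemble, X):
    votes = np.zeros((len(X), 2))
    for (f, t, polarity), alpha in ensemble:
        pred = np.where(polarity * (X[:, f] - t) > 0, 1, 0)
        votes[np.arange(len(X)), pred] += alpha          # weighted vote of the weak classifiers
    return votes.argmax(axis=1)
```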

  9. The Effects of Observation of Learn Units during Reinforcement and Correction Conditions on the Rate of Learning Math Algorithms by Fifth Grade Students

    ERIC Educational Resources Information Center

    Neu, Jessica Adele

    2013-01-01

    I conducted two studies on the comparative effects of the observation of learn units during (a) reinforcement or (b) correction conditions on the acquisition of math objectives. The dependent variables were the within-session cumulative numbers of correct responses emitted during observational sessions. The independent variables were the…

  10. DECISION TREE CLASSIFIERS FOR STAR/GALAXY SEPARATION

    SciTech Connect

    Vasconcellos, E. C.; Ruiz, R. S. R.; De Carvalho, R. R.; Capelato, H. V.; Gal, R. R.; LaBarbera, F. L.; Frago Campos Velho, H.; Trevisan, M.

    2011-06-15

    We study the star/galaxy classification efficiency of 13 different decision tree algorithms applied to photometric objects in the Sloan Digital Sky Survey Data Release Seven (SDSS-DR7). Each algorithm is defined by a set of parameters which, when varied, produce different final classification trees. We extensively explore the parameter space of each algorithm, using the set of 884,126 SDSS objects with spectroscopic data as the training set. The efficiency of star-galaxy separation is measured using the completeness function. We find that the Functional Tree algorithm (FT) yields the best results as measured by the mean completeness in two magnitude intervals: 14 ≤ r ≤ 21 (85.2%) and r ≥ 19 (82.1%). We compare the performance of the tree generated with the optimal FT configuration to the classifications provided by the SDSS parametric classifier, 2DPHOT, and Ball et al. We find that our FT classifier is comparable to or better in completeness over the full magnitude range 15 ≤ r ≤ 21, with much lower contamination than all but the Ball et al. classifier. At the faintest magnitudes (r > 19), our classifier is the only one that maintains high completeness (>80%) while simultaneously achieving low contamination (≈2.5%). We also examine the SDSS parametric classifier (psfMag - modelMag) to see if the dividing line between stars and galaxies can be adjusted to improve the classifier. We find that currently stars in close pairs are often misclassified as galaxies, and suggest a new cut to improve the classifier. Finally, we apply our FT classifier to separate stars from galaxies in the full set of 69,545,326 SDSS photometric objects in the magnitude range 14 ≤ r ≤ 21.
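
    For illustration, a short sketch of training a single decision tree on photometric features and scoring it with the completeness and contamination measures used above. The feature arrays, labels, and tree depth are placeholders; this is not the Functional Tree (FT) algorithm evaluated in the paper.

```python
# Hedged sketch: one decision tree on SDSS-like photometric features, evaluated with
# completeness (fraction of true positives recovered) and contamination (fraction of
# predicted positives that are wrong).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def completeness_contamination(y_true, y_pred, positive):
    is_pos_true = (y_true == positive)
    is_pos_pred = (y_pred == positive)
    completeness = (is_pos_true & is_pos_pred).sum() / max(is_pos_true.sum(), 1)
    contamination = (~is_pos_true & is_pos_pred).sum() / max(is_pos_pred.sum(), 1)
    return completeness, contamination

# X_train, X_test: photometric features (e.g. magnitudes, psfMag - modelMag);
# y_train, y_test: spectroscopic labels "star" or "galaxy" (placeholders)
def evaluate_tree(X_train, y_train, X_test, y_test):
    tree = DecisionTreeClassifier(max_depth=12).fit(X_train, y_train)
    y_pred = tree.predict(X_test)
    return completeness_contamination(np.asarray(y_test), y_pred, positive="galaxy")
```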

  11. Pattern classifier for health monitoring of helicopter gearboxes

    NASA Technical Reports Server (NTRS)

    Chin, Hsinyung; Danai, Kourosh; Lewicki, David G.

    1993-01-01

    The application of a newly developed diagnostic method to a helicopter gearbox is demonstrated. This method is a pattern classifier which uses a multi-valued influence matrix (MVIM) as its diagnostic model. The method benefits from a fast learning algorithm, based on error feedback, that enables it to estimate gearbox health from a small set of measurement-fault data. The MVIM method can also assess the diagnosability of the system and variability of the fault signatures as the basis to improve fault signatures. This method was tested on vibration signals reflecting various faults in an OH-58A main rotor transmission gearbox. The vibration signals were then digitized and processed by a vibration signal analyzer to enhance and extract various features of the vibration data. The parameters obtained from this analyzer were utilized to train and test the performance of the MVIM method in both detection and diagnosis. The results indicate that the MVIM method provided excellent detection results when the full range of faults effects on the measurements were included in training, and it had a correct diagnostic rate of 95 percent when the faults were included in training.

  12. The Challenge of Classifying Polyhedra.

    ERIC Educational Resources Information Center

    Pedersen, Jean J.

    1980-01-01

    A question posed by Euler is considered: How can polyhedra be classified so that the result is in some way analogous to the simple classification of polygons according to the number of their sides? (MK)

  13. Classifying Multi-year Land Use and Land Cover using Deep Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Seo, B.

    2015-12-01

    Cultivated ecosystems constitute a particularly frequent form of human land use. Long-term management of a cultivated ecosystem requires knowledge of the temporal change of land use and land cover (LULC) of the target system. Land use and land cover change (LUCC) in agricultural ecosystems is often rapid and unexpected, so longitudinal LULC data are particularly needed to examine trends in the ecosystem functions and ecosystem services of the target system. Multi-temporal classification of LULC in complex heterogeneous landscapes remains a challenge. Agricultural landscapes are often made up of a mosaic of numerous LULC classes, so spatial heterogeneity is large; moreover, temporal and spatial variation within a LULC class is also large. Under such circumstances, standard classifiers may fail to identify the LULC classes correctly due to the heterogeneity of the target LULC classes, because most standard classifiers search for a specific pattern of features for a class and fail to detect classes with noisy and/or transformed feature data. Recently, deep learning algorithms have emerged in the machine learning community and shown superior performance on a variety of tasks, including image classification and object recognition. In this paper, we propose to use convolutional neural networks (CNN) to learn from multi-spectral data to classify agricultural LULC types. Based on multi-spectral satellite data, we attempted to classify agricultural LULC classes in the Soyang watershed, South Korea, for the three-year study period (2009-2011). The classification performance of support vector machine (SVM) and CNN classifiers was compared for different years. Preliminary results demonstrate that the proposed method can improve classification performance compared to the SVM classifier. The SVM classifier failed to identify classes when trained on one year to predict another year, whilst CNN could reconstruct LULC maps of the catchment over the study

  14. Innovative use of DSP technology in space: FORTE event classifier

    SciTech Connect

    Briles, S.; Moore, K.; Jones, R.; Klingner, P.; Neagley, D.; Caffrey, M.; Henneke, K.; Spurgen, W.; Blain, P.

    1994-08-01

    The Fast On-Orbit Recording of Transient Events (FORTE) small satellite will field a digital signal processor (DSP) experiment for the purpose of classifying radio-frequency (rf) transient signals propagating through the earth's ionosphere. Designated the Event Classifier experiment, this DSP experiment uses a single Texas Instruments' SMJ320C30 DSP to execute preprocessing, feature extraction, and classification algorithms on down-converted, digitized, and buffered rf transient signals in the frequency range of 30 to 300 MHz. A radiation-hardened microcontroller monitors DSP abnormalities and supervises spacecraft command communications. On-orbit evaluation of multiple algorithms is supported by the Event Classifier architecture. Ground-based commands determine the subset and sequence of algorithms executed to classify a captured time series. Conventional neural network classification algorithms will be some of the classification techniques implemented on-board FORTE while in a low-earth orbit. Results of all experiments, after being stored in DSP flash memory, will be transmitted through the spacecraft to ground stations. The Event Classifier is a versatile and fault-tolerant experiment that is an important new space-based application of DSP technology.

  15. IAEA safeguards and classified materials

    SciTech Connect

    Pilat, J.F.; Eccleston, G.W.; Fearey, B.L.; Nicholas, N.J.; Tape, J.W.; Kratzer, M.

    1997-11-01

    The international community in the post-Cold War period has suggested that the International Atomic Energy Agency (IAEA) utilize its expertise in support of the arms control and disarmament process in unprecedented ways. The pledges of the US and Russian presidents to place excess defense materials, some of which are classified, under some type of international inspections raises the prospect of using IAEA safeguards approaches for monitoring classified materials. A traditional safeguards approach, based on nuclear material accountancy, would seem unavoidably to reveal classified information. However, further analysis of the IAEA's safeguards approaches is warranted in order to understand fully the scope and nature of any problems. The issues are complex and difficult, and it is expected that common technical understandings will be essential for their resolution. Accordingly, this paper examines and compares traditional safeguards item accounting of fuel at a nuclear power station (especially spent fuel) with the challenges presented by inspections of classified materials. This analysis is intended to delineate more clearly the problems as well as reveal possible approaches, techniques, and technologies that could allow the adaptation of safeguards to the unprecedented task of inspecting classified materials. It is also hoped that a discussion of these issues can advance ongoing political-technical debates on international inspections of excess classified materials.

  16. Moving Away From Error-Related Potentials to Achieve Spelling Correction in P300 Spellers

    PubMed Central

    Mainsah, Boyla O.; Morton, Kenneth D.; Collins, Leslie M.; Sellers, Eric W.; Throckmorton, Chandra S.

    2016-01-01

    P300 spellers can provide a means of communication for individuals with severe neuromuscular limitations. However, their use as an effective communication tool relies on high P300 classification accuracies (>70%) to account for error revisions. Error-related potentials (ErrP), which are changes in EEG potentials when a person is aware of or perceives erroneous behavior or feedback, have been proposed as inputs to drive corrective mechanisms that veto erroneous actions by BCI systems. The goal of this study is to demonstrate that training an additional ErrP classifier for a P300 speller is not necessary, as we hypothesize that error information is encoded in the P300 classifier responses used for character selection. We perform offline simulations of P300 spelling to compare ErrP and non-ErrP based corrective algorithms. A simple dictionary correction based on string matching and word frequency significantly improved accuracy (35–185%), in contrast to an ErrP-based method that flagged, deleted and replaced erroneous characters (−47–0%). Providing additional information about the likelihood of characters to a dictionary-based correction further improves accuracy. Our Bayesian dictionary-based correction algorithm that utilizes P300 classifier confidences performed comparably (44–416%) to an oracle ErrP dictionary-based method that assumed perfect ErrP classification (43–433%). PMID:25438320
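
    The dictionary correction described above is easy to illustrate. The sketch below is a minimal stand-in, not the authors' algorithm: it matches a spelled string against a tiny, invented dictionary and breaks near-ties with word frequency.

    ```python
    # Illustrative sketch of dictionary-based spelling correction: pick the
    # dictionary word closest to the spelled string, using corpus frequency to
    # break near-ties. The dictionary and frequencies are invented.
    import difflib

    DICTIONARY = {"hello": 120, "help": 300, "hero": 45, "world": 200}

    def correct(spelled, cutoff=0.6):
        candidates = difflib.get_close_matches(spelled, list(DICTIONARY), n=3, cutoff=cutoff)
        if not candidates:
            return spelled                      # nothing close enough: keep as-is
        # prefer the most similar word; frequency decides between equal matches
        return max(candidates,
                   key=lambda w: (difflib.SequenceMatcher(None, spelled, w).ratio(),
                                  DICTIONARY[w]))

    print(correct("helko"))   # likely "hello"
    ```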

  17. Political Correctness--Correct?

    ERIC Educational Resources Information Center

    Boase, Paul H.

    1993-01-01

    Examines the phenomenon of political correctness, its roots and objectives, and its successes and failures in coping with the conflicts and clashes of multicultural campuses. Argues that speech codes indicate failure in academia's primary mission to civilize and educate through talk, discussion, thought, and persuasion. (SR)

  18. Bayes classifiers for imbalanced traffic accidents datasets.

    PubMed

    Mujalli, Randa Oqab; López, Griselda; Garach, Laura

    2016-03-01

    Traffic accident data sets are usually imbalanced, where the number of instances classified under the killed or severe injuries class (minority) is much lower than those classified under the slight injuries class (majority). This, however, poses a challenging problem for classification algorithms and may yield a model that covers the slight injuries instances well while frequently misclassifying the killed or severe injuries instances. Based on traffic accident data collected on urban and suburban roads in Jordan over three years (2009-2011), three different data balancing techniques were used: under-sampling, which removes some instances of the majority class; oversampling, which creates new instances of the minority class; and a mixed technique that combines both. In addition, different Bayes classifiers were compared on the imbalanced and balanced data sets: Averaged One-Dependence Estimators, Weightily Averaged One-Dependence Estimators, and Bayesian networks, in order to identify factors that affect the severity of an accident. The results indicated that using the balanced data sets, especially those created using oversampling techniques, with Bayesian networks improved the classification of a traffic accident according to its severity and reduced the misclassification of killed and severe injuries instances. On the other hand, the following variables were found to contribute to the occurrence of a killed casualty or a severe injury in a traffic accident: number of vehicles involved, accident pattern, number of directions, accident type, lighting, surface condition, and speed limit. This work, to the knowledge of the authors, is the first that aims at analyzing historical data records for traffic accidents occurring in Jordan and the first to apply balancing techniques to analyze the injury severity of traffic accidents.
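
    A minimal sketch of the oversampling idea discussed above, assuming synthetic features in place of the Jordanian accident variables: the minority class is resampled with replacement until the classes are balanced, then a simple Bayes classifier is trained.

    ```python
    # Hedged sketch: random oversampling of the minority (severe) class followed
    # by a simple Bayes classifier. Features and data are synthetic; the paper's
    # own predictors (accident pattern, lighting, speed limit, ...) would replace them.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)
    X_maj = rng.normal(0.0, 1.0, size=(900, 5))   # "slight injuries" (majority)
    X_min = rng.normal(1.5, 1.0, size=(60, 5))    # "killed/severe" (minority)

    # oversample the minority class with replacement until classes are balanced
    idx = rng.integers(0, len(X_min), size=len(X_maj))
    X = np.vstack([X_maj, X_min[idx]])
    y = np.array([0] * len(X_maj) + [1] * len(X_maj))

    clf = GaussianNB().fit(X, y)
    print(clf.predict(rng.normal(1.5, 1.0, size=(3, 5))))  # likely mostly class 1
    ```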

  19. Building classifiers using Bayesian networks

    SciTech Connect

    Friedman, N.; Goldszmidt, M.

    1996-12-31

    Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state of the art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we examine and evaluate approaches for inducing classifiers from data, based on recent results in the theory of learning Bayesian networks. Bayesian networks are factored representations of probability distributions that generalize the naive Bayes classifier and explicitly represent statements about independence. Among these approaches we single out a method we call Tree Augmented Naive Bayes (TAN), which outperforms naive Bayes, yet at the same time maintains the computational simplicity (no search involved) and robustness which are characteristic of naive Bayes. We experimentally tested these approaches using benchmark problems from the U. C. Irvine repository, and compared them against C4.5, naive Bayes, and wrapper-based feature selection methods.

  20. INSENS classification algorithm report

    SciTech Connect

    Hernandez, J.E.; Frerking, C.J.; Myers, D.W.

    1993-07-28

    This report describes a new algorithm developed for the Immigration and Naturalization Service (INS) in support of the INSENS project for classifying vehicles and pedestrians using seismic data. This algorithm is less sensitive to nuisance alarms due to environmental events than the previous algorithm. Furthermore, the algorithm is simple enough that it can be implemented in the 8-bit microprocessor used in the INSENS system.

  1. Noise-robust superresolution based on a classified dictionary

    NASA Astrophysics Data System (ADS)

    Jeong, Shin-Cheol; Song, Byung Cheol

    2010-10-01

    Conventional learning-based superresolution algorithms tend to boost noise components existing in input images because the algorithms are usually learned in a noise-free environment. Even though a specific noise reduction algorithm is applied to noisy images prior to superresolution, visual quality degradation is inevitable due to the mismatch between noise-free images and denoised images. Accordingly, we present a noise-robust superresolution algorithm that overcomes this problem. In the learning phase, a dictionary classified according to noise level is constructed, and then a high-resolution image is synthesized using the dictionary in the inference phase. Experimental results show that the proposed algorithm outperforms existing algorithms for various noisy images.

  2. How Do Children Classify Objects?

    ERIC Educational Resources Information Center

    George, Kenneth D.; Dietz, Maureen A.

    1971-01-01

    Except for grade one students, urban and suburban students used similar properties to classify illustrations of bottles containing different amounts of colored liquids. Only in the urban children was there a change in type of property used between grades one and three. (AL)

  3. A headband for classifying human postures.

    PubMed

    Aloqlah, Mohammed; Lahiji, Rosa R; Loparo, Kenneth A; Mehregany, Mehran

    2010-01-01

    A real-time method using only accelerometer data is developed for classifying basic human static postures, namely sitting, standing, and lying, as well as dynamic transitions between them. The algorithm uses the discrete wavelet transform (DWT) in combination with a fuzzy logic inference system (FIS). Data from a single three-axis accelerometer integrated into a wearable headband are transmitted wirelessly, collected, and analyzed in real time on a laptop computer to extract two sets of features for posture classification. The received acceleration signals are decomposed using the DWT to extract the dynamic features; changes in the smoothness of the signal that reflect a transition between postures are detected at finer DWT scales. The FIS then uses the previous posture transition and the DWT-extracted features to determine the static postures. PMID:21097190
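
    A hedged sketch of the DWT feature-extraction step, assuming the PyWavelets package and an arbitrary wavelet, sampling rate and feature choice: detail-coefficient energies at several scales are computed from one accelerometer axis and could feed a downstream fuzzy inference system.

    ```python
    # Sketch (sampling rate, wavelet and feature choice are our assumptions):
    # decompose one accelerometer axis with a discrete wavelet transform and use
    # the energy of the detail coefficients at each scale as transition features.
    import numpy as np
    import pywt

    fs = 50.0                                   # assumed sampling rate (Hz)
    t = np.arange(0, 4, 1 / fs)
    signal = np.sin(2 * np.pi * 1.0 * t)        # stand-in for one axis of data
    signal[100:] += 0.8                         # abrupt change ~ posture transition

    coeffs = pywt.wavedec(signal, 'db4', level=3)   # [approx, detail3, detail2, detail1]
    features = [float(np.sum(c ** 2)) for c in coeffs[1:]]  # detail energies per scale
    print(features)
    ```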

  4. A Systematic Comparison of Supervised Classifiers

    PubMed Central

    Amancio, Diego Raphael; Comin, Cesar Henrique; Casanova, Dalcimar; Travieso, Gonzalo; Bruno, Odemir Martinez; Rodrigues, Francisco Aparecido; da Fontoura Costa, Luciano

    2014-01-01

    Pattern recognition has been employed in a myriad of industrial, commercial and academic applications. Many techniques have been devised to tackle such a diversity of applications. Despite the long tradition of pattern recognition research, there is no technique that yields the best classification in all scenarios. Therefore, as many techniques as possible should be considered in high accuracy applications. Typical related works either focus on the performance of a given algorithm or compare various classification methods. In many occasions, however, researchers who are not experts in the field of machine learning have to deal with practical classification tasks without an in-depth knowledge about the underlying parameters. Actually, the adequate choice of classifiers and parameters in such practical circumstances constitutes a long-standing problem and is one of the subjects of the current paper. We carried out a performance study of nine well-known classifiers implemented in the Weka framework and compared the influence of the parameter configurations on the accuracy. The default configuration of parameters in Weka was found to provide near optimal performance for most cases, not including methods such as the support vector machine (SVM). In addition, the k-nearest neighbor method frequently allowed the best accuracy. In certain conditions, it was possible to improve the quality of SVM by more than 20% with respect to their default parameter configuration. PMID:24763312

  5. Objectively classifying Southern Hemisphere extratropical cyclones

    NASA Astrophysics Data System (ADS)

    Catto, Jennifer

    2016-04-01

    There has been a long tradition in attempting to separate extratropical cyclones into different classes depending on their cloud signatures, airflows, synoptic precursors, or upper-level flow features. Depending on these features, the cyclones may have different impacts, for example in their precipitation intensity. It is important, therefore, to understand how the distribution of different cyclone classes may change in the future. Many of the previous classifications have been performed manually. In order to be able to evaluate climate models and understand how extratropical cyclones might change in the future, we need to be able to use an automated method to classify cyclones. Extratropical cyclones have been identified in the Southern Hemisphere from the ERA-Interim reanalysis dataset with a commonly used identification and tracking algorithm that employs 850 hPa relative vorticity. A clustering method applied to large-scale fields from ERA-Interim at the time of cyclone genesis (when the cyclone is first detected), has been used to objectively classify identified cyclones. The results are compared to the manual classification of Sinclair and Revell (2000) and the four objectively identified classes shown in this presentation are found to match well. The relative importance of diabatic heating in the clusters is investigated, as well as the differing precipitation characteristics. The success of the objective classification shows its utility in climate model evaluation and climate change studies.

  6. RECIPES FOR WRITING ALGORITHMS FOR ATMOSPHERIC CORRECTIONS AND TEMPERATURE/EMISSIVITY SEPARATIONS IN THE THERMAL REGIME FOR A MULTI-SPECTRAL SENSOR

    SciTech Connect

    C. BOREL; W. CLODIUS

    2001-04-01

    This paper discusses the algorithms created for the Multi-spectral Thermal Imager (MTI) to retrieve temperatures and emissivities. Recipes to create the physics-based retrieval of water temperature and the emissivity of water surfaces are described. A simple radiative transfer model for multi-spectral sensors is developed. A method to create look-up tables and the criterion for finding the optimum water temperature are covered. Practical aspects such as conversion from band-averaged radiances to brightness temperatures and effects of variations in the spectral response on the atmospheric transmission are discussed. A recipe for a temperature/emissivity separation algorithm when water surfaces are present is given. Results of skin water temperature retrievals, compared with in-situ measurements of the bulk water temperature at two locations, are shown.

  7. 76 FR 34761 - Classified National Security Information

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-14

    ... Classified National Security Information AGENCY: Marine Mammal Commission. ACTION: Notice. SUMMARY: This... information, as directed by Information Security Oversight Office regulations. FOR FURTHER INFORMATION CONTACT..., ``Classified National Security Information,'' and 32 CFR part 2001, ``Classified National Security...

  8. Chinese Subjective Sentence Extraction Based on Dictionary and Combination Classifiers

    NASA Astrophysics Data System (ADS)

    Chen, Wei; Zhou, Yanquan; Wang, Xin

    For the extraction of Chinese subjective sentences, this paper proposes a new dictionary-based extraction method and a novel classifier combination strategy. For the first method, we use the training data to score a subjective dictionary composed of indicative verbs, indicative adverbs, sentiment words, interjections and punctuation. We then use the dictionary to score the test data and filter the sentences by setting a reasonable threshold. The new classifier combination strategy is based on maximum error-correction capability. To enhance accuracy, the method improves on traditional single error correction and achieves dual error correction in both the positive and negative classes. Experimental results show that the two methods are effective, and the final results show that their combination achieves satisfactory subjective sentence extraction performance.

  9. Integrated One-Against-One Classifiers as Tools for Virtual Screening of Compound Databases: A Case Study with CNS Inhibitors.

    PubMed

    Jalali-Heravi, Mehdi; Mani-Varnosfaderani, Ahmad; Valadkhani, Abolfazl

    2013-08-01

    A total of 21 833 inhibitors of the central nervous system (CNS) were collected from Binding-database and analyzed using discriminant analysis (DA) techniques. A combination of genetic algorithm and quadratic discriminant analysis (GA-QDA) was proposed as a tool for the classification of molecules based on their therapeutic targets and activities. The results indicated that the one-against-one (OAO) QDA classifiers correctly separate the molecules based on their therapeutic targets and are comparable with support vector machines. These classifiers help in charting the chemical space of the CNS inhibitors and finding specific subspaces occupied by particular classes of molecules. As a next step, the classification models were used as virtual filters for screening of random subsets of the PUBCHEM and ZINC databases. The calculated enrichment factors, together with the area under curve values of receiver operating characteristic curves, showed that these classifiers are good candidates to speed up the early stages of drug discovery projects. The "relative distances" of the centers of active classes of biosimilar molecules calculated by the OAO classifiers were used as indices for sorting the compound databases. The results revealed that the multiclass classification models in this work circumvent the definition of inactive sets for virtual screening and are useful for compound retrieval analysis in chemoinformatics. PMID:27480066
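
    A minimal sketch of a one-against-one quadratic discriminant classifier of the kind described above, built on synthetic data with scikit-learn; it illustrates the OAO-QDA idea, not the GA-QDA pipeline of the paper.

    ```python
    # One-against-one QDA sketch: OneVsOneClassifier trains one QDA per class
    # pair and combines their votes. Data here are synthetic.
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
    from sklearn.multiclass import OneVsOneClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                               n_classes=4, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    oao_qda = OneVsOneClassifier(QuadraticDiscriminantAnalysis())
    print(oao_qda.fit(X_tr, y_tr).score(X_te, y_te))   # held-out accuracy
    ```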

  10. Classifying seismic waveforms from scratch: a case study in the alpine environment

    NASA Astrophysics Data System (ADS)

    Hammer, C.; Ohrnberger, M.; Fäh, D.

    2013-01-01

    Nowadays, an increasing amount of seismic data is collected by daily observatory routines. The basic step for successfully analyzing those data is the correct detection of various event types. However, the visual scanning process is a time-consuming task. Applying standard techniques for detection, like the STA/LTA trigger, still requires manual control for classification. Here, we present a useful alternative. The incoming data stream is scanned automatically for events of interest. A stochastic classifier, called a hidden Markov model, is learned for each class of interest, enabling the recognition of highly variable waveforms. In contrast to other automatic techniques, such as neural networks or support vector machines, the algorithm allows the classification to start from scratch as soon as interesting events are identified. Neither the tedious process of collecting training samples nor a time-consuming configuration of the classifier is required. An approach originally introduced for the volcanic task force action allows classifier properties to be learned from a single waveform example and some hours of background recording. Besides reducing the required workload, this also enables the detection of very rare events. Especially the latter feature provides a milestone for the use of seismic devices in alpine warning systems. Furthermore, the system offers the opportunity to flag new signal classes that have not been defined before. We demonstrate the application of the classification system using a data set from the Swiss Seismological Survey, achieving very high recognition rates. In detail, we document all refinements of the classifier, providing a step-by-step guide for the fast set-up of a well-working classification system.
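
    As a rough illustration of likelihood-based waveform classification (not the authors' implementation), the sketch below assumes the hmmlearn package: one Gaussian HMM is fitted per event class, and a new feature sequence is assigned to the class whose model scores it highest.

    ```python
    # Hedged sketch of HMM-based event classification, assuming hmmlearn:
    # fit one Gaussian HMM per class on feature sequences, then label a new
    # sequence by maximum log-likelihood. Features and data are synthetic.
    import numpy as np
    from hmmlearn import hmm

    rng = np.random.default_rng(0)

    def make_sequence(offset):                  # stand-in for real feature frames
        return rng.normal(offset, 1.0, size=(200, 3))

    classes = {"earthquake": 0.0, "rockfall": 2.0}
    models = {}
    for name, offset in classes.items():
        m = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
        m.fit(make_sequence(offset))            # one model per class of interest
        models[name] = m

    test = make_sequence(2.0)
    print(max(models, key=lambda name: models[name].score(test)))  # expected: "rockfall"
    ```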

  11. RFMirTarget: Predicting Human MicroRNA Target Genes with a Random Forest Classifier

    PubMed Central

    Mendoza, Mariana R.; da Fonseca, Guilherme C.; Loss-Morais, Guilherme; Alves, Ronnie; Margis, Rogerio; Bazzan, Ana L. C.

    2013-01-01

    MicroRNAs are key regulators of eukaryotic gene expression whose fundamental role has already been identified in many cell pathways. The correct identification of miRNA targets is still a major challenge in bioinformatics and has motivated the development of several computational methods to overcome inherent limitations of experimental analysis. Indeed, the best results reported so far in terms of specificity and sensitivity are associated with machine learning-based methods for microRNA-target prediction. Following this trend, in the current paper we discuss and explore a microRNA-target prediction method based on a random forest classifier, namely RFMirTarget. Despite their well-known robustness regarding general classification tasks, to the best of our knowledge, random forests have not been deeply explored for the specific context of predicting microRNA targets. Our framework first analyzes alignments between candidate microRNA-target pairs and extracts a set of structural, thermodynamics, alignment, seed and position-based features, upon which classification is performed. Experiments have shown that RFMirTarget outperforms several well-known classifiers with statistical significance, and that its performance is not impaired by the class imbalance problem or feature correlation. Moreover, comparing it against other algorithms for microRNA target prediction using independent test data sets from TarBase and starBase, we observe a very promising performance, with higher sensitivity in relation to other methods. Finally, tests performed with RFMirTarget show the benefits of feature selection even for a classifier with embedded feature importance analysis, and the consistency between relevant features identified and important biological properties for effective microRNA-target gene alignment. PMID:23922946

  12. A random forest classifier for lymph diseases.

    PubMed

    Azar, Ahmad Taher; Elshazly, Hanaa Ismail; Hassanien, Aboul Ella; Elkorany, Abeer Mohamed

    2014-02-01

    Machine learning-based classification techniques provide support for the decision-making process in many areas of health care, including diagnosis, prognosis, screening, etc. Feature selection (FS) is expected to improve classification performance, particularly in situations characterized by the high data dimensionality problem caused by relatively few training examples compared to a large number of measured features. In this paper, a random forest classifier (RFC) approach is proposed to diagnose lymph diseases. Focusing on feature selection, the first stage of the proposed system constructs diverse feature selection algorithms such as genetic algorithm (GA), Principal Component Analysis (PCA), Relief-F, Fisher, Sequential Forward Floating Search (SFFS) and Sequential Backward Floating Search (SBFS) for reducing the dimension of the lymph diseases dataset. Switching from feature selection to model construction, in the second stage the obtained feature subsets are fed into the RFC for efficient classification. It was observed that GA-RFC achieved the highest classification accuracy of 92.2%. The dimension of the input feature space is reduced from eighteen to six features by using GA. PMID:24290902
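
    A simplified sketch of the two-stage pipeline, with a filter-style selector standing in for the paper's genetic algorithm: the feature set is reduced to six dimensions and the reduced data are passed to a random forest.

    ```python
    # Simplified stand-in for the feature-selection + RFC pipeline. A mutual-
    # information filter (not a GA) picks six features; a random forest
    # classifies the reduced data. All data are synthetic.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    X, y = make_classification(n_samples=300, n_features=18, n_informative=6,
                               random_state=0)   # 18 features, as in the lymph data

    pipe = make_pipeline(SelectKBest(mutual_info_classif, k=6),
                         RandomForestClassifier(n_estimators=200, random_state=0))
    print(cross_val_score(pipe, X, y, cv=5).mean())   # cross-validated accuracy
    ```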

  13. Robust Algorithm for Systematic Classification of Malaria Late Treatment Failures as Recrudescence or Reinfection Using Microsatellite Genotyping.

    PubMed

    Plucinski, Mateusz M; Morton, Lindsay; Bushman, Mary; Dimbu, Pedro Rafael; Udhayakumar, Venkatachalam

    2015-10-01

    Routine therapeutic efficacy monitoring to measure the response to antimalarial treatment is a cornerstone of malaria control. To correctly measure drug efficacy, therapeutic efficacy studies require genotyping parasites from late treatment failures to differentiate between recrudescent infections and reinfections. However, there is a lack of statistical methods to systematically classify late treatment failures from genotyping data. A Bayesian algorithm was developed to estimate the posterior probability of late treatment failure being the result of a recrudescent infection from microsatellite genotyping data. The algorithm was implemented using a Monte Carlo Markov chain approach and was used to classify late treatment failures using published microsatellite data from therapeutic efficacy studies in Ethiopia and Angola. The algorithm classified 85% of the Ethiopian and 95% of the Angolan late treatment failures as either likely reinfection or likely recrudescence, defined as a posterior probability of recrudescence of <0.1 or >0.9, respectively. The adjusted efficacies calculated using the new algorithm differed from efficacies estimated using commonly used methods for differentiating recrudescence from reinfection. In a high-transmission setting such as Angola, as few as 15 samples needed to be genotyped in order to have enough power to correctly classify treatment failures. Analysis of microsatellite genotyping data for differentiating between recrudescence and reinfection benefits from an approach that both systematically classifies late treatment failures and estimates the uncertainty of these classifications. Researchers analyzing genotyping data from antimalarial therapeutic efficacy monitoring are urged to publish their raw genetic data and to estimate the uncertainty around their classification. PMID:26195521

  14. Time and space optimization of document content classifiers

    NASA Astrophysics Data System (ADS)

    Yin, Dawei; Baird, Henry S.; An, Chang

    2010-01-01

    Scaling up document-image classifiers to handle an unlimited variety of document and image types poses serious challenges to conventional trainable classifier technologies. Highly versatile classifiers demand representative training sets which can be dauntingly large: in investigating document content extraction systems, we have demonstrated the advantages of employing as many as a billion training samples in approximate k-nearest neighbor (kNN) classifiers sped up using hashed K-d trees. We report here on an algorithm, which we call online bin-decimation, for coping with training sets that are too big to fit in main memory, and we show empirically that it is superior to offline pre-decimation, which simply discards a large fraction of the training samples at random before constructing the classifier. The key idea of bin-decimation is to enforce an upper bound approximately on the number of training samples stored in each K-d hash bin; an adaptive statistical technique allows this to be accomplished online and in linear time, while reading the training data exactly once. An experiment on 86.7M training samples reveals a 23-times speedup with less than 0.1% loss of accuracy (compared to pre-decimation); or, for another value of the upper bound, a 60-times speedup with less than 5% loss of accuracy. We also compare it to four other related algorithms.
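
    The core of online bin-decimation, as described above, is a per-bin cap enforced while streaming the data once. The sketch below is a toy illustration with an invented spatial hash in place of the K-d tree bins; each bin keeps at most CAP samples via reservoir-style replacement.

    ```python
    # Toy sketch of online bin-decimation: while reading training samples once,
    # keep at most CAP samples per hash bin, replacing stored samples at random
    # (reservoir style) so each bin holds an approximately unbiased subsample.
    import random

    CAP = 4                                   # per-bin budget (illustrative)
    bins, seen = {}, {}

    def bin_key(x, cell=1.0):                 # toy stand-in for a K-d hash bin index
        return (int(x[0] // cell), int(x[1] // cell))

    def add_sample(x, label):
        k = bin_key(x)
        seen[k] = seen.get(k, 0) + 1
        bucket = bins.setdefault(k, [])
        if len(bucket) < CAP:
            bucket.append((x, label))
        else:                                 # replace with decreasing probability
            j = random.randrange(seen[k])
            if j < CAP:
                bucket[j] = (x, label)

    for i in range(10000):                    # one pass over the (streamed) data
        add_sample((random.random() * 3, random.random() * 3), i % 2)
    print({k: len(v) for k, v in bins.items()})   # every bin capped at CAP
    ```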

  15. Atmospheric Correction of Ocean Color Imagery: Test of the Spectral Optimization Algorithm with the Sea-Viewing Wide Field-of-View Sensor.

    PubMed

    Chomko, R M; Gordon, H R

    2001-06-20

    We implemented the spectral optimization algorithm [SOA; Appl. Opt. 37, 5560 (1998)] in an image-processing environment and tested it with Sea-viewing Wide Field-of-View Sensor (SeaWiFS) imagery from the Middle Atlantic Bight and the Sargasso Sea. We compared the SOA and the standard SeaWiFS algorithm on two days that had significantly different atmospheric turbidities but, because of the location and time of the year, nearly the same water properties. The SOA-derived pigment concentration showed excellent continuity over the two days, with the relative difference in pigments exceeding 10% only in regions that are characteristic of high advection. The continuity in the derived water-leaving radiances at 443 and 555 nm was also within ~10%. There was no obvious correlation between the relative differences in pigments and the aerosol concentration. In contrast, standard processing showed poor continuity in derived pigments over the two days, with the relative differences correlating strongly with atmospheric turbidity. SOA-derived atmospheric parameters suggested that the retrieved ocean and atmospheric reflectances were decoupled on the more turbid day. On the clearer day, for which the aerosol concentration was so low that relatively large changes in aerosol properties resulted in only small changes in aerosol reflectance, water patterns were evident in the aerosol properties. This result implies that SOA-derived atmospheric parameters cannot be accurate in extremely clear atmospheres.

  16. Classifier-Guided Sampling for Complex Energy System Optimization

    SciTech Connect

    Backlund, Peter B.; Eddy, John P.

    2015-09-01

    This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS are developed and tested on a set of benchmark problems. As a domain-specific case study, CGS is used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
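
    A hedged sketch of the classifier-guided sampling idea (with a naive Bayes screen standing in for the Bayesian network classifier, and an invented objective): candidate designs with a low posterior probability of being promising are filtered out before the expensive evaluation.

    ```python
    # Sketch of classifier-guided screening: a cheap classifier trained on past
    # (design, promising?) pairs filters candidates before the costly objective
    # is evaluated. Objective, threshold and data are invented for illustration.
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)

    def expensive_objective(x):               # placeholder for a costly simulation
        return float(np.sum(x ** 2))

    history_X = rng.uniform(-1, 1, size=(60, 4))
    history_y = np.array([expensive_objective(x) < 1.0 for x in history_X])  # promising?

    screen = GaussianNB().fit(history_X, history_y)

    candidates = rng.uniform(-1, 1, size=(200, 4))
    keep = screen.predict_proba(candidates)[:, 1] > 0.5    # filter before evaluating
    evaluated = [(expensive_objective(x), x) for x in candidates[keep]]
    print(len(candidates), "candidates ->", len(evaluated), "expensive evaluations")
    ```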

  17. 28 CFR 701.14 - Classified information.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Classified information. 701.14 Section... UNDER THE FREEDOM OF INFORMATION ACT § 701.14 Classified information. In processing a request for information that is classified or classifiable under Executive Order 12356 or any other Executive...

  18. 28 CFR 701.14 - Classified information.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Classified information. 701.14 Section... UNDER THE FREEDOM OF INFORMATION ACT § 701.14 Classified information. In processing a request for information that is classified or classifiable under Executive Order 12356 or any other Executive...

  19. SAR terrain classifier and mapper of biophysical attributes

    NASA Technical Reports Server (NTRS)

    Ulaby, Fawwaz T.; Dobson, M. Craig; Pierce, Leland; Sarabandi, Kamal

    1993-01-01

    In preparation for the launch of SIR-C/X-SAR and design studies for future orbital SAR, a program has made considerable progress in the development of an SAR terrain classifier and algorithms for quantification of biophysical attributes. The goal of this program is to produce a generalized software package for terrain classification and estimation of biophysical attributes and to make this package available to the larger scientific community. The basic elements of the SAR (Synthetic Aperture Radar) terrain classifier are outlined. An SAR image is calibrated with respect to known system and processor gains and external targets (if available). A Level 1 classifier operates on the data to differentiate urban features, surfaces, and tall and short vegetation. Level 2 classifiers further subdivide these classes on the basis of structure. Finally, biophysical and geophysical inversions are applied to each class to estimate attributes of interest. The process used to develop the classifiers and inversions is shown. Radar scattering models developed from theory and from empirical data obtained by truck-mounted polarimeters and the JPL AirSAR are validated. The validated models are used in sensitivity studies to understand the roles of various scattering sources (i.e., surface, trunk, branches, etc.) in determining net backscatter. Model simulations of σ° as functions of the wave parameters (λ, polarization and angle of incidence) and the geophysical and biophysical attributes are used to develop robust classifiers. The classifiers are validated using available AirSAR data sets. Specific estimators are developed for each class on the basis of the scattering models and empirical data sets. The candidate algorithms are tested with the AirSAR data sets. The attributes of interest include: total above-ground biomass, woody biomass, soil moisture and soil roughness.

  20. Fault tolerance of SVM algorithm for hyperspectral image

    NASA Astrophysics Data System (ADS)

    Cui, Yabo; Yuan, Zhengwu; Wu, Yuanfeng; Gao, Lianru; Zhang, Hao

    2015-10-01

    One of the most important tasks in analyzing hyperspectral image data is the classification process[1]. In general, in order to enhance the classification accuracy, a data preprocessing step is usually adopted to remove the noise in the data before classification. But for time-sensitive applications, such as risk prevention and response, we hope that even if the data contains noise the classifier can still appear to execute correctly from the user's perspective. As the most popular classifier, the Support Vector Machine (SVM) has been widely used for hyperspectral image classification and has proved to be a very promising technique in supervised classification[2]. In this paper, two experiments are performed to demonstrate that, for hyperspectral data with noise, if the noise of the data is within a certain range, the SVM algorithm is still able to execute correctly from the user's perspective.
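
    The abstract's claim can be probed with a simple, hedged experiment on synthetic (non-hyperspectral) data: train an SVM on clean samples and watch how accuracy degrades as additive noise on the test set grows.

    ```python
    # Illustrative check (synthetic data, not hyperspectral pixels): train an
    # SVM on clean samples, then measure accuracy under increasing test noise.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=800, n_features=30, n_informative=10,
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)

    rng = np.random.default_rng(0)
    for sigma in (0.0, 0.2, 0.5, 1.0):        # increasing noise level
        noisy = X_te + rng.normal(0.0, sigma, X_te.shape)
        print(f"noise sigma={sigma}: accuracy={clf.score(noisy, y_te):.3f}")
    ```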

  1. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  2. Steganalysis in high dimensions: fusing classifiers built on random subspaces

    NASA Astrophysics Data System (ADS)

    Kodovský, Jan; Fridrich, Jessica

    2011-02-01

    By working with high-dimensional representations of covers, modern steganographic methods are capable of preserving a large number of complex dependencies among individual cover elements and thus avoid detection using current best steganalyzers. Inevitably, steganalysis needs to start using high-dimensional feature sets as well. This brings two key problems - construction of good high-dimensional features and machine learning that scales well with respect to dimensionality. Depending on the classifier, high dimensionality may lead to problems with the lack of training data, infeasibly high complexity of training, degradation of generalization abilities, lack of robustness to cover source, and saturation of performance below its potential. To address these problems collectively known as the curse of dimensionality, we propose ensemble classifiers as an alternative to the much more complex support vector machines. Based on the character of the media being analyzed, the steganalyst first puts together a high-dimensional set of diverse "prefeatures" selected to capture dependencies among individual cover elements. Then, a family of weak classifiers is built on random subspaces of the prefeature space. The final classifier is constructed by fusing the decisions of individual classifiers. The advantage of this approach is its universality, low complexity, simplicity, and improved performance when compared to classifiers trained on the entire prefeature set. Experiments with the steganographic algorithms nsF5 and HUGO demonstrate the usefulness of this approach over current state of the art.
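
    A minimal sketch of the random-subspace ensemble described above, using LDA base learners and majority-vote fusion on synthetic data; the base learner, subspace size and ensemble size are illustrative choices rather than the paper's exact configuration.

    ```python
    # Random-subspace ensemble sketch: train many weak classifiers, each on a
    # random subset of the (pre)feature dimensions, and fuse their decisions by
    # majority vote. Sizes and base learner are illustrative assumptions.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=200, n_informative=30,
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    rng = np.random.default_rng(0)
    members = []
    for _ in range(25):                                      # ensemble size
        dims = rng.choice(X.shape[1], size=40, replace=False)  # random subspace
        members.append((dims, LinearDiscriminantAnalysis().fit(X_tr[:, dims], y_tr)))

    votes = np.mean([m.predict(X_te[:, dims]) for dims, m in members], axis=0)
    print("ensemble accuracy:", np.mean((votes > 0.5).astype(int) == y_te))
    ```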

  3. Correction of Facial Deformity in Sturge–Weber Syndrome

    PubMed Central

    Yamaguchi, Kazuaki; Lonic, Daniel; Chen, Chit

    2016-01-01

    Background: Although previous studies have reported soft-tissue management in surgical treatment of Sturge–Weber syndrome (SWS), there are few reports describing facial bone surgery in this patient group. The purpose of this study is to examine the validity of our multidisciplinary algorithm for correcting facial deformities associated with SWS. To the best of our knowledge, this is the first study on orthognathic surgery for SWS patients. Methods: A retrospective chart review included 2 SWS patients who completed the surgical treatment algorithm. Radiographic and clinical data were recorded, and a treatment algorithm was derived. Results: According to the Roach classification, the first patient was classified as type I presenting with both facial and leptomeningeal vascular anomalies without glaucoma and the second patient as type II presenting only with a hemifacial capillary malformation. Considering positive findings in seizure history and intracranial vascular anomalies in the first case, the anesthetic management was modified to omit hypotensive anesthesia because of the potential risk of intracranial pressure elevation. Primarily, both patients underwent 2-jaw orthognathic surgery and facial bone contouring including genioplasty, zygomatic reduction, buccal fat pad removal, and masseter reduction without major complications. In the second step, the volume and distribution of facial soft tissues were altered by surgical resection and reposition. Both patients were satisfied with the surgical result. Conclusions: Our multidisciplinary algorithm can systematically detect potential risk factors. Correction of the asymmetric face by successive bone and soft-tissue surgery enables the patients to reduce their psychosocial burden and increase their quality of life.

  4. Classifying sex biased congenital anomalies

    SciTech Connect

    Lubinsky, M.S.

    1997-03-31

    The reasons why sex biases occur in congenital anomalies that arise before structural or hormonal dimorphisms are established have long been unclear. A review of such disorders shows that patterning and tissue anomalies are female biased, and structural findings are more common in males. This suggests different gender-dependent susceptibilities to developmental disturbances, with female vulnerabilities focused on early blastogenesis/determination, while males are more likely to involve later organogenesis/morphogenesis. A dual origin for some anomalies explains paradoxical reductions of sex biases with greater severity (i.e., multiple rather than single malformations), presumably as more severe events increase the involvement of an otherwise minor process with opposite biases to those of the primary mechanism. The cause of these sex differences is unknown, but early dimorphisms, such as differences in growth or presence of H-Y antigen, may be responsible. This model provides a useful rationale for understanding and classifying sex-biased congenital anomalies. 42 refs., 7 tabs.

  5. Application of bias correction methods to improve the accuracy of quantitative radar rainfall in Korea

    NASA Astrophysics Data System (ADS)

    Lee, J.-K.; Kim, J.-H.; Suk, M.-K.

    2015-04-01

    There are many potential sources of bias in the radar rainfall estimation process. This study classified the biases from the rainfall estimation process into the reflectivity measurement bias and the QPE model bias, and applied bias correction methods to improve the accuracy of the Radar-AWS Rainrate (RAR) calculation system operated by the Korea Meteorological Administration (KMA). For the Z bias correction, this study utilized a bias correction algorithm for the reflectivity. The concept of this algorithm is that the reflectivity of the target single-pol radars is corrected based on a reference dual-pol radar that has itself been corrected for hardware and software biases. This study then dealt with two post-processing methods, the Mean Field Bias Correction (MFBC) method and the Local Gauge Correction (LGC) method, to correct the rainfall bias. The Z bias and rainfall-bias correction methods were applied to the RAR system. The accuracy of the RAR system improved after correcting the Z bias. For the rainfall types, although the accuracy of the Changma front and local torrential cases was slightly improved without the Z bias correction, the accuracy of the typhoon cases in particular became worse than the existing results. As a result of the rainfall-bias correction, the accuracy of the RAR system with Z bias_LGC was especially superior to the MFBC method, because different rainfall biases were applied to each grid rainfall amount in the LGC method. For the rainfall types, results of the Z bias_LGC showed that rainfall estimates for all types were more accurate than with the Z bias correction alone, and the outcomes in the typhoon cases in particular were vastly superior to the others.
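
    For concreteness, the sketch below shows a common formulation of mean field bias correction, in which a single multiplicative factor (mean gauge rainfall over mean radar rainfall at the gauge locations) is applied to the whole radar field; all values are invented, and the LGC variant would instead derive a factor per grid cell.

    ```python
    # Hedged sketch of mean field bias correction (MFBC): one multiplicative
    # factor, the ratio of mean gauge rainfall to mean radar rainfall at the
    # gauge locations, scales the whole radar field. Values are invented.
    import numpy as np

    radar_field = np.random.default_rng(0).gamma(2.0, 2.0, size=(50, 50))  # mm/h grid
    gauge_obs   = np.array([4.1, 6.3, 2.8, 5.0])            # gauge rainfall (mm/h)
    gauge_cells = [(10, 12), (20, 35), (33, 8), (44, 40)]   # collocated radar cells

    radar_at_gauges = np.array([radar_field[r, c] for r, c in gauge_cells])
    bias = gauge_obs.mean() / radar_at_gauges.mean()        # mean field bias factor
    corrected_field = radar_field * bias
    print(f"bias factor: {bias:.2f}")
    ```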

  6. Application of bias correction methods to improve the accuracy of quantitative radar rainfall in Korea

    NASA Astrophysics Data System (ADS)

    Lee, J.-K.; Kim, J.-H.; Suk, M.-K.

    2015-11-01

    There are many potential sources of bias in the radar rainfall estimation process. This study classified the biases from the rainfall estimation process into the reflectivity measurement bias and the rainfall estimation bias of the Quantitative Precipitation Estimation (QPE) model, and applied bias correction methods to improve the accuracy of the Radar-AWS Rainrate (RAR) calculation system operated by the Korea Meteorological Administration (KMA). For the Z bias correction, addressing the reflectivity biases that occur when measuring rainfall, this study utilized a bias correction algorithm. The concept of this algorithm is that the reflectivity of the target single-pol radars is corrected based on a reference dual-pol radar that has itself been corrected for hardware and software biases. This study then dealt with two post-processing methods, the Mean Field Bias Correction (MFBC) method and the Local Gauge Correction (LGC) method, to correct the rainfall estimation bias of the QPE model. The Z bias and rainfall estimation bias correction methods were applied to the RAR system. The accuracy of the RAR system was improved after correcting the Z bias. For the rainfall types, although the accuracy of the Changma front and local torrential cases was slightly improved without the Z bias correction, the accuracy of the typhoon cases in particular became worse than the existing results. As a result of the rainfall estimation bias correction, the Z bias_LGC was especially superior to the MFBC method because different rainfall biases were applied to each grid rainfall amount in the LGC method. For the rainfall types, the results of the Z bias_LGC showed that the rainfall estimates for all types were more accurate than with the Z bias correction alone, and the outcomes in the typhoon cases in particular were vastly superior to the others.

  7. Support vector machines classifiers of physical activities in preschoolers

    PubMed Central

    Zhao, Wei; Adolph, Anne L; Puyau, Maurice R; Vohra, Firoz A; Butte, Nancy F; Zakeri, Issa F

    2013-01-01

    The goal of this study is to develop, test, and compare multinomial logistic regression (MLR) and support vector machines (SVM) in classifying preschool-aged children's physical activity data acquired from an accelerometer. In this study, 69 children aged 3–5 years old were asked to participate in a supervised protocol of physical activities while wearing a triaxial accelerometer. Accelerometer counts, steps, and position were obtained from the device. We applied K-means clustering to determine the number of natural groupings presented by the data. We used MLR and SVM to classify the six activity types. Using direct observation as the criterion method, the 10-fold cross-validation (CV) error rate was used to compare MLR and SVM classifiers, with and without sleep. Altogether, 58 classification models based on combinations of the accelerometer output variables were developed. In general, the SVM classifiers have a smaller 10-fold CV error rate than their MLR counterparts. Including sleep, an SVM classifier provided the best performance with a 10-fold CV error rate of 24.70%. Without sleep, an SVM classifier based on triaxial accelerometer counts, vector magnitude, steps, position, and 1- and 2-min lag and lead values achieved a 10-fold CV error rate of 20.16% and an overall classification error rate of 15.56%. SVM supersedes the classical classifier MLR in categorizing physical activities in preschool-aged children. Using accelerometer data, SVM can be used to correctly classify physical activities typical of preschool-aged children with an acceptable classification error rate. PMID:24303099

  8. Comparing Different Classifiers in Sensory Motor Brain Computer Interfaces

    PubMed Central

    Bashashati, Hossein; Ward, Rabab K.; Birch, Gary E.; Bashashati, Ali

    2015-01-01

    A problem that impedes the progress in Brain-Computer Interface (BCI) research is the difficulty in reproducing the results of different papers. Comparing different algorithms at present is very difficult. Some improvements have been made by the use of standard datasets to evaluate different algorithms. However, the lack of a comparison framework still exists. In this paper, we construct a new general comparison framework to compare different algorithms on several standard datasets. All these datasets correspond to sensory motor BCIs, and are obtained from 21 subjects during their operation of synchronous BCIs and 8 subjects using self-paced BCIs. Other researchers can use our framework to compare their own algorithms on their own datasets. We have compared the performance of different popular classification algorithms over these 29 subjects and performed statistical tests to validate our results. Our findings suggest that, for a given subject, the choice of the classifier for a BCI system depends on the feature extraction method used in that BCI system. This is contrary to most publications in the field, which have used Linear Discriminant Analysis (LDA) as the classifier of choice for BCI systems. PMID:26090799

  9. Comparing Different Classifiers in Sensory Motor Brain Computer Interfaces.

    PubMed

    Bashashati, Hossein; Ward, Rabab K; Birch, Gary E; Bashashati, Ali

    2015-01-01

    A problem that impedes the progress in Brain-Computer Interface (BCI) research is the difficulty in reproducing the results of different papers. Comparing different algorithms at present is very difficult. Some improvements have been made by the use of standard datasets to evaluate different algorithms. However, the lack of a comparison framework still exists. In this paper, we construct a new general comparison framework to compare different algorithms on several standard datasets. All these datasets correspond to sensory motor BCIs, and are obtained from 21 subjects during their operation of synchronous BCIs and 8 subjects using self-paced BCIs. Other researchers can use our framework to compare their own algorithms on their own datasets. We have compared the performance of different popular classification algorithms over these 29 subjects and performed statistical tests to validate our results. Our findings suggest that, for a given subject, the choice of the classifier for a BCI system depends on the feature extraction method used in that BCI system. This is contrary to most publications in the field, which have used Linear Discriminant Analysis (LDA) as the classifier of choice for BCI systems.

  10. Evaluation of LDA Ensembles Classifiers for Brain Computer Interface

    NASA Astrophysics Data System (ADS)

    Arjona, Cristian; Pentácolo, José; Gareis, Iván; Atum, Yanina; Gentiletti, Gerardo; Acevedo, Rubén; Rufiner, Leonardo

    2011-12-01

    The Brain Computer Interface (BCI) translates brain activity into computer commands. To increase the performance of a BCI in decoding user intentions, it is necessary to improve the feature extraction and classification techniques. In this article the performance of an ensemble of three linear discriminant analysis (LDA) classifiers is studied. A system based on an ensemble can theoretically achieve better classification results than its individual counterpart, depending on the algorithm used to generate the individual classifiers and the procedure for combining their outputs. Classic ensemble-based algorithms such as bagging and boosting are discussed here. For the application to BCI, it was concluded that the results generated using ER and AUC as performance indices do not give enough information to establish which configuration is better.
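
    A hedged sketch of one of the classic ensemble schemes mentioned above, bagging, applied to three LDA classifiers with scikit-learn; the EEG features are replaced by synthetic data.

    ```python
    # Bagged ensemble of LDA classifiers versus a single LDA, on synthetic data.
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=400, n_features=20, n_informative=8,
                               random_state=0)

    single = LinearDiscriminantAnalysis()
    bagged = BaggingClassifier(LinearDiscriminantAnalysis(), n_estimators=3,
                               random_state=0)   # three LDA members, as in the study
    print("single LDA:", cross_val_score(single, X, y, cv=5).mean())
    print("bagged LDA:", cross_val_score(bagged, X, y, cv=5).mean())
    ```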

  11. Local feature saliency classifier for real-time intrusion monitoring

    NASA Astrophysics Data System (ADS)

    Buch, Norbert; Velastin, Sergio A.

    2014-07-01

    We propose a texture saliency classifier to detect people in a video frame by identifying salient texture regions. The image is classified into foreground and background in real time. No temporal image information is used during the classification. The system is used for the task of detecting people entering a sterile zone, which is a common scenario for visual surveillance. Testing is performed on the Imagery Library for Intelligent Detection Systems sterile zone benchmark dataset of the United Kingdom's Home Office. The basic classifier is extended by fusing its output with simple motion information, which significantly outperforms standard motion tracking. A lower detection time can be achieved by combining texture classification with Kalman filtering. The fusion approach running at 10 fps gives the highest result of F1=0.92 for the 24-h test dataset. The paper concludes with a detailed analysis of the computation time required for the different parts of the algorithm.

  12. An algorithm for temperature correcting substrate moisture measurements: aligning substrate moisture responses with environmental drivers in polytunnel-grown strawberry plants

    NASA Astrophysics Data System (ADS)

    Goodchild, Martin; Janes, Stuart; Jenkins, Malcolm; Nicholl, Chris; Kühn, Karl

    2015-04-01

    The aim of this work is to assess the use of temperature-corrected substrate moisture data to improve the relationship between environmental drivers and the measurement of substrate moisture content in high-porosity, soil-free growing environments such as coir. Substrate moisture sensor data collected from strawberry plants grown in coir bags installed in a table-top system under a polytunnel illustrate the impact of temperature on capacitance-based moisture measurements. Substrate moisture measurements made in our coir arrangement exhibit the negative temperature coefficient of the permittivity of water, where diurnal changes in measured moisture content oppose those of substrate temperature. The diurnal substrate temperature variation was seen to range from 7 °C to 25 °C, resulting in a clearly observable temperature effect in substrate moisture content measurements during the 23-day test period. In the laboratory we measured the ML3 soil moisture sensor (ThetaProbe) response to temperature in air, dry glass beads and water-saturated glass beads and used a three-phase alpha (α) mixing model, also known as the Complex Refractive Index Model (CRIM), to derive the permittivity temperature coefficients for glass and water. We derived the α value and estimated the temperature coefficient for water for sensors operating at 100 MHz. Both results are in good agreement with published data. By applying the CRIM equation with the temperature coefficients of glass and water, the moisture temperature coefficient of saturated glass beads has been reduced by more than an order of magnitude to a moisture temperature coefficient of
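
    The CRIM mixing rule referred to above can be written as eps_mix**alpha = sum_i theta_i * eps_i**alpha. The sketch below evaluates it with a linear temperature dependence for the water permittivity; the coefficient values are illustrative assumptions, not those derived in the study.

    ```python
    # Sketch of the three-phase alpha (CRIM) mixing model:
    #   eps_mix**alpha = theta_w*eps_w**alpha + theta_s*eps_s**alpha + theta_a*eps_a**alpha
    # with a linear temperature dependence for the water permittivity.
    # All coefficient values below are illustrative, not the study's results.
    def crim_permittivity(theta_water, theta_solid, temp_c,
                          eps_water_20=80.1, k_water=-0.37,   # assumed dK/dT for water
                          eps_solid=5.0, eps_air=1.0, alpha=0.5):
        theta_air = 1.0 - theta_water - theta_solid
        eps_water = eps_water_20 + k_water * (temp_c - 20.0)  # temperature-corrected
        mix = (theta_water * eps_water ** alpha
               + theta_solid * eps_solid ** alpha
               + theta_air * eps_air ** alpha)
        return mix ** (1.0 / alpha)

    for t in (7.0, 25.0):    # diurnal extremes reported for the coir bags
        print(t, round(crim_permittivity(0.35, 0.15, t), 2))
    ```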

  13. Multiple classifier system for remote sensing image classification: a review.

    PubMed

    Du, Peijun; Xia, Junshi; Zhang, Wei; Tan, Kun; Liu, Yi; Liu, Sicong

    2012-01-01

    Over the last two decades, multiple classifier system (MCS) or classifier ensemble approaches have shown great potential to improve the accuracy and reliability of remote sensing image classification. Although there is a large body of literature covering MCS approaches, there is a lack of a comprehensive literature review presenting an overall architecture of the basic principles and trends behind the design of remote sensing classifier ensembles. Therefore, in order to give a reference point for MCS approaches, this paper attempts to explicitly review the remote sensing implementations of MCS and proposes some modified approaches. The effectiveness of existing and improved algorithms is analyzed and evaluated on multi-source remotely sensed images, including a high spatial resolution image (QuickBird), a hyperspectral image (OMISII) and a multi-spectral image (Landsat ETM+). Experimental results demonstrate that MCS can effectively improve the accuracy and stability of remote sensing image classification, and that diversity measures play an active role in the combination of multiple classifiers. Furthermore, this survey provides a roadmap to guide future research, algorithm enhancement and knowledge accumulation of MCS in the remote sensing community.

  14. Multiple Classifier System for Remote Sensing Image Classification: A Review

    PubMed Central

    Du, Peijun; Xia, Junshi; Zhang, Wei; Tan, Kun; Liu, Yi; Liu, Sicong

    2012-01-01

    Over the last two decades, multiple classifier system (MCS) or classifier ensemble approaches have shown great potential to improve the accuracy and reliability of remote sensing image classification. Although there is a large body of literature covering MCS approaches, there is a lack of a comprehensive literature review presenting an overall architecture of the basic principles and trends behind the design of remote sensing classifier ensembles. Therefore, in order to give a reference point for MCS approaches, this paper attempts to explicitly review the remote sensing implementations of MCS and proposes some modified approaches. The effectiveness of existing and improved algorithms is analyzed and evaluated on multi-source remotely sensed images, including a high spatial resolution image (QuickBird), a hyperspectral image (OMISII) and a multi-spectral image (Landsat ETM+). Experimental results demonstrate that MCS can effectively improve the accuracy and stability of remote sensing image classification, and that diversity measures play an active role in the combination of multiple classifiers. Furthermore, this survey provides a roadmap to guide future research, algorithm enhancement and knowledge accumulation of MCS in the remote sensing community. PMID:22666057

  15. Comparison of artificial intelligence classifiers for SIP attack data

    NASA Astrophysics Data System (ADS)

    Safarik, Jakub; Slachta, Jiri

    2016-05-01

    A honeypot application is a source of valuable data about attacks on the network. We run several SIP honeypots in various computer networks, which are separated geographically and logically. Each honeypot runs on a public IP address and uses standard SIP PBX ports. All information gathered via the honeypots is periodically sent to a centralized server, which classifies all attack data with a neural network algorithm. The paper describes optimizations of a neural network classifier that lower the classification error. The article compares two neural network algorithms used for the classification of validation data. The first is the original implementation of the neural network described in recent work; the second uses further optimizations such as input normalization and a cross-entropy cost function. We also use other implementations of neural networks and machine learning classification algorithms, and the comparison tests their capabilities on validation data to find the optimal classifier. The results show promise for further development of an accurate SIP attack classification engine.
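
    A minimal sketch of the two optimizations highlighted above, input normalization and a cross-entropy cost, using a generic multilayer perceptron on placeholder data; the feature dimensions and labels are assumptions, not the honeypot attack data.

        # Sketch of input normalization plus a cross-entropy cost, on placeholder data.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 20))            # placeholder attack-feature vectors
        y = (X[:, 0] + X[:, 1] > 0).astype(int)   # placeholder labels

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        # StandardScaler performs the input normalization; MLPClassifier minimizes
        # log-loss (cross-entropy) by default.
        clf = make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(32,), max_iter=500))
        clf.fit(X_tr, y_tr)
        print("validation accuracy:", clf.score(X_te, y_te))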

  16. Accurate determination of imaging modality using an ensemble of text- and image-based classifiers.

    PubMed

    Kahn, Charles E; Kalpathy-Cramer, Jayashree; Lam, Cesar A; Eldredge, Christina E

    2012-02-01

    Imaging modality can aid retrieval of medical images for clinical practice, research, and education. We evaluated whether an ensemble classifier could outperform its constituent individual classifiers in determining the modality of figures from radiology journals. Seventeen automated classifiers analyzed 77,495 images from two radiology journals. Each classifier assigned one of eight imaging modalities (computed tomography, graphic, magnetic resonance imaging, nuclear medicine, positron emission tomography, photograph, ultrasound, or radiograph) to each image based on visual and/or textual information. Three physicians determined the modality of 5,000 randomly selected images as a reference standard. A "Simple Vote" ensemble classifier assigned each image to the modality that received the greatest number of individual classifiers' votes. A "Weighted Vote" classifier weighted each individual classifier's vote based on performance over a training set. For each image, this classifier's output was the imaging modality that received the greatest weighted vote score. We measured precision, recall, and F score (the harmonic mean of precision and recall) for each classifier. Individual classifiers' F scores ranged from 0.184 to 0.892. The simple vote and weighted vote classifiers correctly assigned 4,565 images (F score, 0.913; 95% confidence interval, 0.905-0.921) and 4,672 images (F score, 0.934; 95% confidence interval, 0.927-0.941), respectively. The weighted vote classifier performed significantly better than all individual classifiers. An ensemble classifier correctly determined the imaging modality of 93% of figures in our sample. The imaging modality of figures published in radiology journals can be determined with high accuracy, which will improve systems for image retrieval.
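
    The "Weighted Vote" rule described above can be sketched as follows; the classifier names, weights and votes are hypothetical.

        # Sketch of the "Weighted Vote" combination rule: each classifier's vote counts
        # in proportion to a weight learned from its performance on a training set.
        # The classifier names, weights, and votes below are hypothetical.
        from collections import defaultdict

        def weighted_vote(votes, weights):
            """votes: {classifier_name: predicted_modality}; weights: {classifier_name: weight}."""
            scores = defaultdict(float)
            for name, modality in votes.items():
                scores[modality] += weights.get(name, 0.0)
            return max(scores, key=scores.get)

        weights = {"text_clf": 0.89, "visual_clf": 0.72, "hybrid_clf": 0.81}   # e.g. training F scores
        votes = {"text_clf": "MRI", "visual_clf": "CT", "hybrid_clf": "MRI"}
        print(weighted_vote(votes, weights))   # -> "MRI"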

  17. Monocular precrash vehicle detection: features and classifiers.

    PubMed

    Sun, Zehang; Bebis, George; Miller, Ronald

    2006-07-01

    Robust and reliable vehicle detection from images acquired by a moving vehicle (i.e., on-road vehicle detection) is an important problem with applications to driver assistance systems and autonomous, self-guided vehicles. The focus of this work is on the issues of feature extraction and classification for rear-view vehicle detection. Specifically, by treating the problem of vehicle detection as a two-class classification problem, we have investigated several different feature extraction methods such as principal component analysis, wavelets, and Gabor filters. To evaluate the extracted features, we have experimented with two popular classifiers, neural networks and support vector machines (SVMs). Based on our evaluation results, we have developed an on-board real-time monocular vehicle detection system that is capable of acquiring grey-scale images, using Ford's proprietary low-light camera, achieving an average detection rate of 10 Hz. Our vehicle detection algorithm consists of two main steps: a multiscale driven hypothesis generation step and an appearance-based hypothesis verification step. During the hypothesis generation step, image locations where vehicles might be present are extracted. This step uses multiscale techniques not only to speed up detection, but also to improve system robustness. The appearance-based hypothesis verification step verifies the hypotheses using Gabor features and SVMs. The system has been tested in Ford's concept vehicle under different traffic conditions (e.g., structured highway, complex urban streets, and varying weather conditions), illustrating good performance. PMID:16830921

  18. Monocular precrash vehicle detection: features and classifiers.

    PubMed

    Sun, Zehang; Bebis, George; Miller, Ronald

    2006-07-01

    Robust and reliable vehicle detection from images acquired by a moving vehicle (i.e., on-road vehicle detection) is an important problem with applications to driver assistance systems and autonomous, self-guided vehicles. The focus of this work is on the issues of feature extraction and classification for rear-view vehicle detection. Specifically, by treating the problem of vehicle detection as a two-class classification problem, we have investigated several different feature extraction methods such as principal component analysis, wavelets, and Gabor filters. To evaluate the extracted features, we have experimented with two popular classifiers, neural networks and support vector machines (SVMs). Based on our evaluation results, we have developed an on-board real-time monocular vehicle detection system that is capable of acquiring grey-scale images, using Ford's proprietary low-light camera, achieving an average detection rate of 10 Hz. Our vehicle detection algorithm consists of two main steps: a multiscale driven hypothesis generation step and an appearance-based hypothesis verification step. During the hypothesis generation step, image locations where vehicles might be present are extracted. This step uses multiscale techniques not only to speed up detection, but also to improve system robustness. The appearance-based hypothesis verification step verifies the hypotheses using Gabor features and SVMs. The system has been tested in Ford's concept vehicle under different traffic conditions (e.g., structured highway, complex urban streets, and varying weather conditions), illustrating good performance.
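
    A rough sketch of the appearance-based verification idea (Gabor filter-bank statistics fed to an SVM); the filter parameters, window size and random "images" are placeholder assumptions, not the system described above.

        # Sketch of appearance-based hypothesis verification: Gabor filter-bank statistics
        # extracted from candidate windows and classified with an SVM.
        import numpy as np
        from scipy.signal import fftconvolve
        from sklearn.svm import SVC

        def gabor_kernel(freq, theta, size=15, sigma=3.0):
            half = size // 2
            y, x = np.mgrid[-half:half + 1, -half:half + 1]
            xr = x * np.cos(theta) + y * np.sin(theta)
            return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

        BANK = [gabor_kernel(f, t) for f in (0.1, 0.2)
                for t in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]

        def gabor_features(window):
            feats = []
            for k in BANK:
                resp = fftconvolve(window, k, mode="same")
                feats += [resp.mean(), resp.std()]      # simple filter-bank statistics
            return feats

        rng = np.random.default_rng(0)
        windows = rng.random((40, 32, 32))              # placeholder candidate windows
        labels = np.tile([0, 1], 20)                    # placeholder vehicle / non-vehicle labels
        X = np.array([gabor_features(w) for w in windows])
        SVC(kernel="rbf").fit(X, labels)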

  19. Classifying adolescent attention-deficit/hyperactivity disorder (ADHD) based on functional and structural imaging.

    PubMed

    Iannaccone, Reto; Hauser, Tobias U; Ball, Juliane; Brandeis, Daniel; Walitza, Susanne; Brem, Silvia

    2015-10-01

    Attention-deficit/hyperactivity disorder (ADHD) is a common disabling psychiatric disorder associated with consistent deficits in error processing and inhibition, and with regionally decreased grey matter volumes. The diagnosis is based on clinical presentation, interviews and questionnaires, which are to some degree subjective and would benefit from verification through biomarkers. Here, pattern recognition of multiple discriminative functional and structural brain patterns was applied to classify adolescents with ADHD and controls. Functional activation features in a Flanker/NoGo task probing error processing and inhibition, along with structural magnetic resonance imaging data, served to predict group membership using support vector machines (SVMs). The SVM pattern recognition algorithm correctly classified 77.78% of the subjects with a sensitivity and specificity of 77.78% based on error processing. Predictive regions for controls were mainly detected in core areas for error processing and attention, such as the medial and dorsolateral frontal areas, reflecting deficient processing in ADHD (Hart et al., in Hum Brain Mapp 35:3083-3094, 2014), and overlapped with decreased activations in patients in conventional group comparisons. Regions more predictive for ADHD patients were identified in the posterior cingulate, temporal and occipital cortex. Interestingly, despite pronounced univariate group differences in inhibition-related activation and grey matter volumes, the corresponding classifiers failed or yielded only poor discrimination. The present study corroborates the potential of task-related brain activation for classification shown in previous studies. It remains to be clarified whether error processing, which performed best here, also contributes to the discrimination of useful dimensions and subtypes, different psychiatric disorders, and prediction of treatment success across studies and sites.

  20. Evolution of a computer program for classifying protein segments as transmembrane domains using genetic programming.

    PubMed

    Koza, J R

    1994-01-01

    The recently-developed genetic programming paradigm is used to evolve a computer program to classify a given protein segment as being a transmembrane domain or a non-transmembrane area of the protein. Genetic programming starts with a primordial ooze of randomly generated computer programs composed of available programmatic ingredients and then genetically breeds the population of programs using the Darwinian principle of survival of the fittest and an analog of the naturally occurring genetic operation of crossover (sexual recombination). Automatic function definition enables genetic programming to create subroutines dynamically during the run. Genetic programming is given a training set of differently-sized protein segments and their correct classification (but no biochemical knowledge, such as hydrophobicity values). Correlation is used as the fitness measure to drive the evolutionary process. The best genetically-evolved program achieves an out-of-sample correlation of 0.968 and an out-of-sample error rate of 1.6%. This error rate is better than those reported for four other algorithms at the First International Conference on Intelligent Systems for Molecular Biology. Our genetically evolved program is an instance of an algorithm discovered by an automated learning paradigm that is superior to algorithms written by human investigators.
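
    Correlation as a fitness measure for a binary classifier can be sketched as below; this computes the Matthews correlation coefficient from confusion counts, which is assumed (not confirmed by the abstract) to match the correlation measure used.

        # Sketch of correlation as a fitness measure for a binary classifier
        # (transmembrane vs. non-transmembrane), via the Matthews correlation coefficient.
        import math

        def correlation_fitness(y_true, y_pred):
            tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
            tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
            fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
            fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
            denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
            return (tp * tn - fp * fn) / denom if denom else 0.0

        print(correlation_fitness([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1]))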

  1. 15 CFR 4.8 - Classified Information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Classified Information. 4.8 Section 4... INFORMATION Freedom of Information Act § 4.8 Classified Information. In processing a request for information..., the information shall be reviewed to determine whether it should remain classified. Ordinarily...

  2. 14 CFR 1216.317 - Classified information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Classified information. 1216.317 Section 1216.317 Aeronautics and Space NATIONAL AERONAUTICS AND SPACE ADMINISTRATION ENVIRONMENTAL QUALITY... Classified information. Environmental assessments and impact statements which contain classified...

  3. 32 CFR 1602.8 - Classifying authority.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 6 2014-07-01 2014-07-01 false Classifying authority. 1602.8 Section 1602.8 National Defense Other Regulations Relating to National Defense SELECTIVE SERVICE SYSTEM DEFINITIONS § 1602.8 Classifying authority. The term classifying authority refers to any official or board who...

  4. 32 CFR 1602.8 - Classifying authority.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 6 2013-07-01 2013-07-01 false Classifying authority. 1602.8 Section 1602.8 National Defense Other Regulations Relating to National Defense SELECTIVE SERVICE SYSTEM DEFINITIONS § 1602.8 Classifying authority. The term classifying authority refers to any official or board who...

  5. 32 CFR 1602.8 - Classifying authority.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Classifying authority. 1602.8 Section 1602.8 National Defense Other Regulations Relating to National Defense SELECTIVE SERVICE SYSTEM DEFINITIONS § 1602.8 Classifying authority. The term classifying authority refers to any official or board who...

  6. 32 CFR 1602.8 - Classifying authority.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 6 2012-07-01 2012-07-01 false Classifying authority. 1602.8 Section 1602.8 National Defense Other Regulations Relating to National Defense SELECTIVE SERVICE SYSTEM DEFINITIONS § 1602.8 Classifying authority. The term classifying authority refers to any official or board who...

  7. 32 CFR 1602.8 - Classifying authority.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 6 2011-07-01 2011-07-01 false Classifying authority. 1602.8 Section 1602.8 National Defense Other Regulations Relating to National Defense SELECTIVE SERVICE SYSTEM DEFINITIONS § 1602.8 Classifying authority. The term classifying authority refers to any official or board who...

  8. Cellular Phone Enabled Non-Invasive Tissue Classifier

    PubMed Central

    Laufer, Shlomi; Rubinsky, Boris

    2009-01-01

    Cellular phone technology is emerging as an important tool in the effort to provide advanced medical care to the majority of the world population currently without access to such care. In this study, we show that non-invasive electrical measurements and the use of classifier software can be combined with cellular phone technology to produce inexpensive tissue characterization. This concept was demonstrated by the use of a Support Vector Machine (SVM) classifier to distinguish through the cellular phone between heart and kidney tissue via the non-invasive multi-frequency electrical measurements acquired around the tissues. After the measurements were performed at a remote site, the raw data were transmitted through the cellular phone to a central computational site and the classifier was applied to the raw data. The results of the tissue analysis were returned to the remote data measurement site. The classifiers correctly determined the tissue type with a specificity of over 90%. When used for the detection of malignant tumors, classifiers can be designed to produce false positives in order to ensure that no tumors will be missed. This mode of operation has applications in remote non-invasive tissue diagnostics in situ in the body, in combination with medical imaging, as well as in remote diagnostics of biopsy samples in vitro. PMID:19365554

  9. Fibonacci Numbers and Computer Algorithms.

    ERIC Educational Resources Information Center

    Atkins, John; Geist, Robert

    1987-01-01

    The Fibonacci Sequence describes a vast array of phenomena from nature. Computer scientists have discovered and used many algorithms which can be classified as applications of Fibonacci's sequence. In this article, several of these applications are considered. (PK)

  10. K-D Decision Tree: An Accelerated and Memory Efficient Nearest Neighbor Classifier

    NASA Astrophysics Data System (ADS)

    Shibata, Tomoyuki; Wada, Toshikazu

    This paper presents a novel algorithm for the Nearest Neighbor (NN) classifier. NN classification is a well-known method of pattern classification with the following properties: it performs maximum-margin classification and achieves less than twice the ideal Bayesian error; it does not require knowledge of pattern distributions, kernel functions or base classifiers; and it can naturally be applied to multiclass classification problems. Among its drawbacks are (A) inefficient memory use and (B) slow pattern classification. This paper deals with problems A and B. In most cases, NN search algorithms, such as the k-d tree, are employed as the pattern search engine of the NN classifier. However, NN classification does not always require an NN search. Based on this idea, we propose a novel algorithm named the k-d decision tree (KDDT). Since KDDT uses Voronoi-condensed prototypes, it consumes less memory than naive NN classifiers. We have confirmed through a comparative experiment that KDDT is much faster than an NN search-based classifier (from 9 to 369 times faster). Furthermore, in order to extend the applicability of the KDDT algorithm to high-dimensional NN classification, we modified it by incorporating Gabriel editing or RNG editing instead of Voronoi condensing. Through experiments using simulated and real data, we have confirmed that the modified KDDT algorithms are superior to the original one.
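
    KDDT itself is not sketched here; for context, the following is the conventional k-d-tree-backed NN classifier that KDDT is compared against, using scipy's cKDTree.

        # Baseline for comparison with KDDT: a conventional NN classifier whose search
        # engine is a k-d tree (scipy's cKDTree).
        import numpy as np
        from scipy.spatial import cKDTree

        class KDTreeNNClassifier:
            def fit(self, X, y):
                self.tree = cKDTree(np.asarray(X))
                self.y = np.asarray(y)
                return self

            def predict(self, X):
                _, idx = self.tree.query(np.asarray(X), k=1)   # nearest prototype per query
                return self.y[idx]

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 3))
        y = (X[:, 0] > 0).astype(int)
        clf = KDTreeNNClassifier().fit(X, y)
        print(clf.predict(rng.normal(size=(5, 3))))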

  11. A configurable-hardware document-similarity classifier to detect web attacks.

    SciTech Connect

    Ulmer, Craig D.; Gokhale, Maya

    2010-04-01

    This paper describes our approach to adapting a text document similarity classifier based on the Term Frequency Inverse Document Frequency (TFIDF) metric to reconfigurable hardware. The TFIDF classifier is used to detect web attacks in HTTP data. In our reconfigurable hardware approach, we design a streaming, real-time classifier by simplifying an existing sequential algorithm and manipulating the classifier's model to allow decision information to be represented compactly. We have developed a set of software tools to help automate the process of converting training data to synthesizable hardware and to provide a means of trading off between accuracy and resource utilization. The Xilinx Virtex 5-LX implementation requires two orders of magnitude less memory than the original algorithm. At 166 MB/s (80× the software), the hardware implementation is able to achieve Gigabit network throughput at the same accuracy as the original algorithm.
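
    A software-side sketch of the underlying TFIDF document-similarity idea (before any hardware simplification); the example requests and the decision threshold are placeholders.

        # Software-side sketch of a TFIDF document-similarity detector: score each incoming
        # HTTP request by its cosine similarity to known-attack training documents.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        attack_docs = ["GET /index.php?id=1 UNION SELECT password FROM users",
                       "GET /search?q=<script>alert(1)</script>"]
        vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
        attack_matrix = vectorizer.fit_transform(attack_docs)

        def is_attack(request, threshold=0.3):
            sims = cosine_similarity(vectorizer.transform([request]), attack_matrix)
            return sims.max() >= threshold

        print(is_attack("GET /index.php?id=5 UNION SELECT * FROM users"))
        print(is_attack("GET /images/logo.png"))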

  12. An algorithm to discover gene signatures with predictive potential

    PubMed Central

    2010-01-01

    Background The advent of global gene expression profiling has generated unprecedented insight into our molecular understanding of cancer, including breast cancer. For example, human breast cancer patients display significant diversity in terms of their survival, recurrence, metastasis as well as response to treatment. These patient outcomes can be predicted by the transcriptional programs of their individual breast tumors. Predictive gene signatures allow us to correctly classify human breast tumors into various risk groups as well as to more accurately target therapy to ensure more durable cancer treatment. Results Here we present a novel algorithm to generate gene signatures with predictive potential. The method first classifies the expression intensity of each gene, as determined by global gene expression profiling, as low, average or high. The matrix containing the classified data for each gene is then used to score the expression of each gene based on its individual ability to predict the patient characteristic of interest. Finally, all examined genes are ranked based on their predictive ability and the most highly ranked genes are included in the master gene signature, which is then ready for use as a predictor. This method was used to accurately predict the survival outcomes in a cohort of human breast cancer patients. Conclusions We confirmed the capacity of our algorithm to generate gene signatures with bona fide predictive ability. The simplicity of our algorithm will enable biological researchers to quickly generate valuable gene signatures without specialized software or extensive bioinformatics training. PMID:20813028
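
    The discretize/score/rank recipe described above can be sketched as follows; the binning quantiles and the scoring rule (absolute correlation of the discretized profile with the outcome) are illustrative simplifications, not the authors' exact choices.

        # Sketch of the discretize / score / rank recipe: bin expression as low (-1),
        # average (0) or high (+1), score each gene by how well its discretized profile
        # tracks the outcome, and keep the top-ranked genes as the signature.
        import numpy as np

        def discretize(expr, low_q=0.25, high_q=0.75):
            lo, hi = np.quantile(expr, [low_q, high_q])
            return np.where(expr <= lo, -1, np.where(expr >= hi, 1, 0))

        def build_signature(expression, outcome, n_genes=5):
            """expression: genes x patients matrix; outcome: 0/1 vector per patient."""
            scores = []
            for g in range(expression.shape[0]):
                disc = discretize(expression[g])
                score = abs(np.corrcoef(disc, outcome)[0, 1]) if disc.std() > 0 else 0.0
                scores.append(score)
            return np.argsort(scores)[::-1][:n_genes]   # indices of the top-ranked genes

        rng = np.random.default_rng(0)
        outcome = rng.integers(0, 2, size=40)
        expression = rng.normal(size=(100, 40))
        expression[7] += 2.0 * outcome              # make gene 7 predictive
        print(build_signature(expression, outcome))  # gene 7 should rank highly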

  13. Signature extension through the application of cluster matching algorithms to determine appropriate signature transformations

    NASA Technical Reports Server (NTRS)

    Lambeck, P. F.; Rice, D. P.

    1976-01-01

    Signature extension is intended to increase the space-time range over which a set of training statistics can be used to classify data without significant loss of recognition accuracy. A first cluster matching algorithm MASC (Multiplicative and Additive Signature Correction) was developed at the Environmental Research Institute of Michigan to test the concept of using associations between training and recognition area cluster statistics to define an average signature transformation. A more recent signature extension module CROP-A (Cluster Regression Ordered on Principal Axis) has shown evidence of making significant associations between training and recognition area cluster statistics, with the clusters to be matched being selected automatically by the algorithm.

  14. Electroweak Corrections

    NASA Astrophysics Data System (ADS)

    Barbieri, Riccardo

    2016-10-01

    The test of the electroweak corrections has played a major role in providing evidence for the gauge and the Higgs sectors of the Standard Model. At the same time the consideration of the electroweak corrections has given significant indirect information on the masses of the top and the Higgs boson before their discoveries and important orientation/constraints on the searches for new physics, still highly valuable in the present situation. The progression of these contributions is reviewed.

  15. Image classifiers for the cell transformation assay: a progress report

    NASA Astrophysics Data System (ADS)

    Urani, Chiara; Crosta, Giovanni F.; Procaccianti, Claudio; Melchioretto, Pasquale; Stefanini, Federico M.

    2010-02-01

    The Cell Transformation Assay (CTA) is one of the promising in vitro methods used to predict human carcinogenicity. The neoplastic phenotype is monitored in suitable cells by the formation of foci and observed by light microscopy after staining. Foci exhibit three types of morphological alteration: Type I, characterized by partially transformed cells, and Types II and III, considered to have undergone neoplastic transformation. Foci recognition and scoring have always been carried out visually by a trained human expert. In order to classify foci images automatically, one needs to implement an image understanding algorithm. Here, two such algorithms are described and compared in terms of performance. The supervised classifier (as described in previous articles) relies on principal components analysis embedded in a training feedback loop to process the morphological descriptors extracted by "spectrum enhancement" (SE). The unsupervised classifier architecture is based on "partitioning around medoids" and is applied to image descriptors taken from histogram moments (HM). Preliminary results suggest the inadequacy of the HMs as image descriptors compared to those from SE. A justification derived from elementary arguments of real analysis is provided in the Appendix.

  16. Evolving a Bayesian Classifier for ECG-based Age Classification in Medical Applications.

    PubMed

    Wiggins, M; Saad, A; Litt, B; Vachtsevanos, G

    2008-01-01

    OBJECTIVE: To classify patients by age based upon information extracted from their electro-cardiograms (ECGs). To develop and compare the performance of Bayesian classifiers. METHODS AND MATERIAL: We present a methodology for classifying patients according to statistical features extracted from their ECG signals using a genetically evolved Bayesian network classifier. Continuous signal feature variables are converted to a discrete symbolic form by thresholding, to lower the dimensionality of the signal. This simplifies calculation of conditional probability tables for the classifier, and makes the tables smaller. Two methods of network discovery from data were developed and compared: the first used a greedy hill-climb search and the second employed evolutionary computing using a genetic algorithm (GA). RESULTS AND CONCLUSIONS: The evolved Bayesian network performed better (86.25% AUC) than both the one developed using the greedy algorithm (65% AUC) and the naïve Bayesian classifier (84.75% AUC). The methodology for evolving the Bayesian classifier can be used to evolve Bayesian networks in general, thereby identifying the dependencies among the variables of interest. Those dependencies are assumed to be non-existent by naïve Bayesian classifiers. Such a classifier can then be used for medical applications for diagnosis and prediction purposes.

  17. Evolving a Bayesian Classifier for ECG-based Age Classification in Medical Applications

    PubMed Central

    Wiggins, M.; Saad, A.; Litt, B.; Vachtsevanos, G.

    2010-01-01

    Objective To classify patients by age based upon information extracted from their electro-cardiograms (ECGs). To develop and compare the performance of Bayesian classifiers. Methods and Material We present a methodology for classifying patients according to statistical features extracted from their ECG signals using a genetically evolved Bayesian network classifier. Continuous signal feature variables are converted to a discrete symbolic form by thresholding, to lower the dimensionality of the signal. This simplifies calculation of conditional probability tables for the classifier, and makes the tables smaller. Two methods of network discovery from data were developed and compared: the first used a greedy hill-climb search and the second employed evolutionary computing using a genetic algorithm (GA). Results and Conclusions The evolved Bayesian network performed better (86.25% AUC) than both the one developed using the greedy algorithm (65% AUC) and the naïve Bayesian classifier (84.75% AUC). The methodology for evolving the Bayesian classifier can be used to evolve Bayesian networks in general, thereby identifying the dependencies among the variables of interest. Those dependencies are assumed to be non-existent by naïve Bayesian classifiers. Such a classifier can then be used for medical applications for diagnosis and prediction purposes. PMID:22010038

  18. Comparing different classifiers for automatic age estimation.

    PubMed

    Lanitis, Andreas; Draganova, Chrisina; Christodoulou, Chris

    2004-02-01

    We describe a quantitative evaluation of the performance of different classifiers in the task of automatic age estimation. In this context, we generate a statistical model of facial appearance, which is subsequently used as the basis for obtaining a compact parametric description of face images. The aim of our work is to design classifiers that accept the model-based representation of unseen images and produce an estimate of the age of the person in the corresponding face image. For this application, we have tested different classifiers: a classifier based on the use of quadratic functions for modeling the relationship between face model parameters and age, a shortest distance classifier, and artificial neural network based classifiers. We also describe variations to the basic method where we use age-specific and/or appearance specific age estimation methods. In this context, we use age estimation classifiers for each age group and/or classifiers for different clusters of subjects within our training set. In those cases, part of the classification procedure is devoted to choosing the most appropriate classifier for the subject/age range in question, so that more accurate age estimates can be obtained. We also present comparative results concerning the performance of humans and computers in the task of age estimation. Our results indicate that machines can estimate the age of a person almost as reliably as humans.

  19. Weighted Hybrid Decision Tree Model for Random Forest Classifier

    NASA Astrophysics Data System (ADS)

    Kulkarni, Vrushali Y.; Sinha, Pradeep K.; Petare, Manisha C.

    2016-06-01

    Random Forest is an ensemble, supervised machine learning algorithm. An ensemble generates many classifiers and combines their results by majority voting. Random Forest uses the decision tree as its base classifier. In decision tree induction, an attribute split/evaluation measure is used to decide the best split at each node of the decision tree. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation among them. The work presented in this paper relates to attribute split measures and is a two-step process. First, a theoretical study of the five selected split measures is carried out and a comparison matrix is generated to understand the pros and cons of each measure. These theoretical results are then verified by empirical analysis, in which a random forest is generated using each of the five selected split measures, chosen one at a time, i.e., a random forest using information gain, a random forest using gain ratio, and so on. Next, based on this theoretical and empirical analysis, a new hybrid decision tree model for the random forest classifier is proposed. In this model, the individual decision trees in the Random Forest are generated using different split measures, and the model is augmented by weighted voting based on the strength of each individual tree. The new approach shows a notable increase in the accuracy of the random forest.
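
    Two of the split measures compared above, information gain (entropy-based) and Gini impurity, can be evaluated for one candidate binary split as sketched below; the label counts are hypothetical.

        # Sketch of two attribute split measures: information gain and Gini impurity,
        # evaluated for one candidate binary split with hypothetical label counts.
        import math

        def entropy(counts):
            n = sum(counts)
            return -sum((c / n) * math.log2(c / n) for c in counts if c)

        def gini(counts):
            n = sum(counts)
            return 1.0 - sum((c / n) ** 2 for c in counts)

        def information_gain(parent, left, right):
            n = sum(parent)
            w_l, w_r = sum(left) / n, sum(right) / n
            return entropy(parent) - (w_l * entropy(left) + w_r * entropy(right))

        parent, left, right = [40, 60], [30, 10], [10, 50]
        print("information gain:", round(information_gain(parent, left, right), 3))
        print("gini parent/left/right:", [round(gini(c), 3) for c in (parent, left, right)])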

  20. Classifying Spectra Based on DLS and Rough Set

    NASA Astrophysics Data System (ADS)

    Qiu, Bo; Hu, Zhanyi; Zhao, Yongheng

    2003-01-01

    It remains difficult to identify different kinds of celestial bodies from their spectra, because doing so requires a great deal of astronomers' manual work in measuring, marking and identifying, which is hard and time-consuming. With the explosion of spectral data from all kinds of telescopes, it is becoming increasingly urgent to find a fully automatic way to deal with this problem. From another viewpoint, it is a traditional pattern recognition problem when the whole process of dealing with spectral signals is considered: filtering noise, extracting features, constructing classifiers, etc. The main purpose of automatic classification and recognition of spectra in the LAMOST (Large Sky Area Multi-Object Fibre Spectroscopic Telescope) project is to identify a celestial body's type based only on its spectrum. For this purpose, one of the key steps is to establish a good model to describe all kinds of spectra, so that effective classifiers can be constructed. In this paper, we present a novel description language to represent spectra. Based on this language, we use algorithms to extract classifying rules from raw spectral datasets and then construct classifiers to identify spectra using the rough set method. Compared with other methods, our technique is closer to human reasoning and, to some extent, efficient.

  1. Confidence measure and performance evaluation for HRRR-based classifiers

    NASA Astrophysics Data System (ADS)

    Rago, Constantino; Zajic, Tim; Huff, Melvyn; Mehra, Raman K.; Mahler, Ronald P. S.; Noviskey, Michael J.

    2002-07-01

    The work presented here is a continuation of research first reported in Mahler et al. Our earlier efforts included integrating the Statistical Features algorithm with a Bayesian nonlinear filter, allowing simultaneous determination of target position, velocity, pose and type via maximum a posteriori estimation. We then considered three alternative classifiers: the first based on a principal component decomposition, the second on a linear discriminant approach, and the third on a wavelet representation. In addition, preliminary results were given with regard to assigning a measure of confidence to the output of the wavelet-based classifier. In this paper we continue to address the problem of target classification based on high range resolution radar signatures. In particular, we examine the performance of a variant of the principal-component-based classifier as the number of principal components is varied. We have chosen to quantify the performance in terms of the Bhattacharyya distance. We also present further results regarding the assignment of confidence values to the output of the wavelet-based classifier.
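
    The Bhattacharyya distance used above to quantify performance has a closed form for two Gaussian class distributions; a sketch with arbitrary example means and covariances follows.

        # Bhattacharyya distance between two multivariate Gaussian class distributions.
        # The example means and covariances are arbitrary.
        import numpy as np

        def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
            cov = 0.5 * (cov1 + cov2)
            diff = mu1 - mu2
            term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
            term2 = 0.5 * np.log(np.linalg.det(cov) /
                                 np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
            return term1 + term2

        mu1, mu2 = np.array([0.0, 0.0]), np.array([2.0, 1.0])
        cov1, cov2 = np.eye(2), np.diag([1.5, 0.5])
        print(bhattacharyya_gaussian(mu1, cov1, mu2, cov2))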

  2. Developing collaborative classifiers using an expert-based model

    USGS Publications Warehouse

    Mountrakis, G.; Watts, R.; Luo, L.; Wang, Jingyuan

    2009-01-01

    This paper presents a hierarchical, multi-stage adaptive strategy for image classification. We iteratively apply various classification methods (e.g., decision trees, neural networks), identify regions of parametric and geographic space where accuracy is low, and in these regions, test and apply alternate methods repeating the process until the entire image is classified. Currently, classifiers are evaluated through human input using an expert-based system; therefore, this paper acts as the proof of concept for collaborative classifiers. Because we decompose the problem into smaller, more manageable sub-tasks, our classification exhibits increased flexibility compared to existing methods since classification methods are tailored to the idiosyncrasies of specific regions. A major benefit of our approach is its scalability and collaborative support since selected low-accuracy classifiers can be easily replaced with others without affecting classification accuracy in high accuracy areas. At each stage, we develop spatially explicit accuracy metrics that provide straightforward assessment of results by non-experts and point to areas that need algorithmic improvement or ancillary data. Our approach is demonstrated in the task of detecting impervious surface areas, an important indicator for human-induced alterations to the environment, using a 2001 Landsat scene from Las Vegas, Nevada. © 2009 American Society for Photogrammetry and Remote Sensing.

  3. Generating fuzzy rules for constructing interpretable classifier of diabetes disease.

    PubMed

    Settouti, Nesma; Chikh, M Amine; Saidi, Meryem

    2012-09-01

    Diabetes is a disease in which the body fails to regulate the amount of glucose it needs; it prevents the body from producing or properly using insulin. Diabetes has widespread fallout, with a large number of people affected worldwide. In this paper, we demonstrate that a fuzzy c-means/neuro-fuzzy rule-based classifier of diabetes with acceptable interpretability can be obtained. The accuracy of the classifier is measured by the number of correctly recognized diabetes records, while its complexity is measured by the number of fuzzy rules extracted. Experimental results show that the proposed fuzzy classifier achieves a good tradeoff between accuracy and interpretability. The basic structure of the fuzzy rules, which were automatically extracted from the UCI Machine Learning database, also shows strong similarities to the rules applied by human experts. Results are compared to other approaches in the literature. The proposed approach gives a more compact, interpretable and accurate classifier.

  4. Development of the Landsat Data Continuity Mission Cloud Cover Assessment Algorithms

    USGS Publications Warehouse

    Scaramuzza, Pat; Bouchard, M.A.; Dwyer, J.L.

    2012-01-01

    The upcoming launch of the Operational Land Imager (OLI) will start the next era of the Landsat program. However, the Automated Cloud-Cover Assessment (ACCA) algorithm used on Landsat 7 requires a thermal band and is thus not suited for OLI. There will be a thermal instrument on the Landsat Data Continuity Mission (LDCM), the Thermal Infrared Sensor, which may not be available during all OLI collections. This illustrates a need for cloud-cover assessment (CCA) on LDCM in the absence of thermal data. To research possibilities for full-resolution OLI cloud assessment, a global data set of 207 Landsat 7 scenes with manually generated cloud masks was created. It was used to evaluate the ACCA algorithm, showing that the algorithm correctly classified 79.9% of a standard test subset of 3.95 × 10^9 pixels. The data set was also used to develop and validate two successor algorithms for use with OLI data: one derived from an off-the-shelf machine learning package and one based on ACCA but enhanced by a simple neural network. These comprehensive CCA algorithms were shown to correctly classify pixels as cloudy or clear 88.5% and 89.7% of the time, respectively.

  5. Unsupervised Online Classifier in Sleep Scoring for Sleep Deprivation Studies

    PubMed Central

    Libourel, Paul-Antoine; Corneyllie, Alexandra; Luppi, Pierre-Hervé; Chouvet, Guy; Gervasoni, Damien

    2015-01-01

    Study Objective: This study was designed to evaluate an unsupervised adaptive algorithm for real-time detection of sleep and wake states in rodents. Design: We designed a Bayesian classifier that automatically extracts electroencephalogram (EEG) and electromyogram (EMG) features and categorizes non-overlapping 5-s epochs into one of the three major sleep and wake states without any human supervision. This sleep-scoring algorithm is coupled online with a new device to perform selective paradoxical sleep deprivation (PSD). Settings: Controlled laboratory settings for chronic polygraphic sleep recordings and selective PSD. Participants: Ten adult Sprague-Dawley rats instrumented for chronic polysomnographic recordings. Measurements: The performance of the algorithm is evaluated by comparison with the score obtained by a human expert reader. Online detection of PS is then validated with a PSD protocol with a duration of 72 hours. Results: Our algorithm gave a high concordance with human scoring with an average κ coefficient > 70%. Notably, the specificity to detect PS reached 92%. Selective PSD using real-time detection of PS strongly reduced PS amounts, leaving only brief PS bouts necessary for the detection of PS in EEG and EMG signals (4.7 ± 0.7% over 72 h, versus 8.9 ± 0.5% in baseline), and was followed by a significant PS rebound (23.3 ± 3.3% over 150 minutes). Conclusions: Our fully unsupervised data-driven algorithm overcomes some limitations of the other automated methods such as the selection of representative descriptors or threshold settings. When used online and coupled with our sleep deprivation device, it represents a better option for selective PSD than other methods such as tedious gentle handling or the platform method. Citation: Libourel PA, Corneyllie A, Luppi PH, Chouvet G, Gervasoni D. Unsupervised online classifier in sleep scoring for sleep deprivation studies. SLEEP 2015;38(5):815–828. PMID:25325478

  6. An Active Learning Classifier for Further Reducing Diabetic Retinopathy Screening System Cost

    PubMed Central

    An, Mingqiang

    2016-01-01

    Diabetic retinopathy (DR) screening systems raise a financial problem. To further reduce DR screening cost, an active learning classifier is proposed in this paper. Our approach identifies retinal images based on features extracted by anatomical part recognition and lesion detection algorithms. The kernel extreme learning machine (KELM) is a rapid classifier for solving classification problems in high-dimensional space. Both active learning and ensemble techniques elevate the performance of KELM when using a small training dataset. The committee proposes only the necessary manual work to the doctor, saving cost. On the publicly available Messidor database, our classifier is trained with 20%–35% of labeled retinal images while comparative classifiers are trained with 80% of labeled retinal images. Results show that our classifier can achieve better classification accuracy than Classification and Regression Tree, radial basis function SVM, Multilayer Perceptron SVM, Linear SVM, and K Nearest Neighbor. Empirical experiments suggest that our active learning classifier is efficient for further reducing DR screening cost. PMID:27660645

  7. Combining classifiers using their receiver operating characteristics and maximum likelihood estimation.

    PubMed

    Haker, Steven; Wells, William M; Warfield, Simon K; Talos, Ion-Florin; Bhagwat, Jui G; Goldberg-Zimring, Daniel; Mian, Asim; Ohno-Machado, Lucila; Zou, Kelly H

    2005-01-01

    In any medical domain, it is common to have more than one test (classifier) to diagnose a disease. In image analysis, for example, there is often more than one reader or more than one algorithm applied to a certain data set. Combining classifiers is often helpful, but determining the way in which they should be combined is not trivial. Standard strategies are based on learning classifier combination functions from data. We describe a simple strategy to combine results from classifiers that have not been applied to a common data set, and therefore cannot undergo this type of joint training. The strategy, which assumes conditional independence of classifiers, is based on the calculation of a combined Receiver Operating Characteristic (ROC) curve, using maximum likelihood analysis to determine a combination rule for each ROC operating point. We offer some insights into the use of ROC analysis in the field of medical imaging.
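
    Under the conditional-independence assumption described above, simple fusion rules have closed-form combined operating points; the sketch below shows the AND and OR rules for two hypothetical (TPR, FPR) pairs and does not reproduce the paper's full maximum-likelihood rule selection.

        # Combining two conditionally independent classifiers at fixed operating points.
        def combine_and(op1, op2):
            (tpr1, fpr1), (tpr2, fpr2) = op1, op2
            return tpr1 * tpr2, fpr1 * fpr2            # call positive only if both agree

        def combine_or(op1, op2):
            (tpr1, fpr1), (tpr2, fpr2) = op1, op2
            return 1 - (1 - tpr1) * (1 - tpr2), 1 - (1 - fpr1) * (1 - fpr2)

        reader = (0.85, 0.10)     # hypothetical (TPR, FPR) of a human reader
        algorithm = (0.80, 0.15)  # hypothetical (TPR, FPR) of an automated algorithm
        print("AND rule:", combine_and(reader, algorithm))
        print("OR rule: ", combine_or(reader, algorithm))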

  8. An Active Learning Classifier for Further Reducing Diabetic Retinopathy Screening System Cost

    PubMed Central

    An, Mingqiang

    2016-01-01

    Diabetic retinopathy (DR) screening system raises a financial problem. For further reducing DR screening cost, an active learning classifier is proposed in this paper. Our approach identifies retinal images based on features extracted by anatomical part recognition and lesion detection algorithms. Kernel extreme learning machine (KELM) is a rapid classifier for solving classification problems in high dimensional space. Both active learning and ensemble technique elevate performance of KELM when using small training dataset. The committee only proposes necessary manual work to doctor for saving cost. On the publicly available Messidor database, our classifier is trained with 20%–35% of labeled retinal images and comparative classifiers are trained with 80% of labeled retinal images. Results show that our classifier can achieve better classification accuracy than Classification and Regression Tree, radial basis function SVM, Multilayer Perceptron SVM, Linear SVM, and K Nearest Neighbor. Empirical experiments suggest that our active learning classifier is efficient for further reducing DR screening cost.

  9. Statistical and Machine-Learning Classifier Framework to Improve Pulse Shape Discrimination System Design

    SciTech Connect

    Wurtz, R.; Kaplan, A.

    2015-10-28

    Pulse shape discrimination (PSD) is a variety of statistical classifier. Fully realized statistical classifiers rely on a comprehensive set of tools for their design, construction, and implementation. Advances in PSD rely on improvements to the implemented algorithm and can draw on conventional statistical classifier or machine learning methods. This paper provides the reader with a glossary of classifier-building elements and their functions in a fully designed and operational classifier framework that can be used to discover opportunities for improving PSD classifier projects. This paper recommends reporting the PSD classifier's receiver operating characteristic (ROC) curve and its behavior at a gamma rejection rate (GRR) relevant for realistic applications.

  10. Combining classifiers using their receiver operating characteristics and maximum likelihood estimation.

    PubMed

    Haker, Steven; Wells, William M; Warfield, Simon K; Talos, Ion-Florin; Bhagwat, Jui G; Goldberg-Zimring, Daniel; Mian, Asim; Ohno-Machado, Lucila; Zou, Kelly H

    2005-01-01

    In any medical domain, it is common to have more than one test (classifier) to diagnose a disease. In image analysis, for example, there is often more than one reader or more than one algorithm applied to a certain data set. Combining classifiers is often helpful, but determining the way in which they should be combined is not trivial. Standard strategies are based on learning classifier combination functions from data. We describe a simple strategy to combine results from classifiers that have not been applied to a common data set, and therefore cannot undergo this type of joint training. The strategy, which assumes conditional independence of classifiers, is based on the calculation of a combined Receiver Operating Characteristic (ROC) curve, using maximum likelihood analysis to determine a combination rule for each ROC operating point. We offer some insights into the use of ROC analysis in the field of medical imaging. PMID:16685884

  11. Combining Classifiers Using Their Receiver Operating Characteristics and Maximum Likelihood Estimation*

    PubMed Central

    Haker, Steven; Wells, William M.; Warfield, Simon K.; Talos, Ion-Florin; Bhagwat, Jui G.; Goldberg-Zimring, Daniel; Mian, Asim; Ohno-Machado, Lucila; Zou, Kelly H.

    2010-01-01

    In any medical domain, it is common to have more than one test (classifier) to diagnose a disease. In image analysis, for example, there is often more than one reader or more than one algorithm applied to a certain data set. Combining classifiers is often helpful, but determining the way in which they should be combined is not trivial. Standard strategies are based on learning classifier combination functions from data. We describe a simple strategy to combine results from classifiers that have not been applied to a common data set, and therefore cannot undergo this type of joint training. The strategy, which assumes conditional independence of classifiers, is based on the calculation of a combined Receiver Operating Characteristic (ROC) curve, using maximum likelihood analysis to determine a combination rule for each ROC operating point. We offer some insights into the use of ROC analysis in the field of medical imaging. PMID:16685884

  12. Standardizing the Protocol for Hemispherical Photographs: Accuracy Assessment of Binarization Algorithms

    PubMed Central

    Glatthorn, Jonas; Beckschäfer, Philip

    2014-01-01

    Hemispherical photography is a well-established method to optically assess ecological parameters related to plant canopies; e.g. ground-level light regimes and the distribution of foliage within the crown space. Interpreting hemispherical photographs involves classifying pixels as either sky or vegetation. A wide range of automatic thresholding or binarization algorithms exists to classify the photographs. The variety in methodology hampers the ability to compare results across studies. To identify an optimal threshold selection method, this study assessed the accuracy of seven binarization methods implemented in software currently available for the processing of hemispherical photographs. Therefore, binarizations obtained by the algorithms were compared to reference data generated through a manual binarization of a stratified random selection of pixels. This approach was adopted from the accuracy assessment of map classifications known from remote sensing studies. Percentage correct (Pc) and kappa-statistics (K) were calculated. The accuracy of the algorithms was assessed for photographs taken with automatic exposure settings (auto-exposure) and photographs taken with settings which avoid overexposure (histogram-exposure). In addition, gap fraction values derived from hemispherical photographs were compared with estimates derived from the manually classified reference pixels. All tested algorithms were shown to be sensitive to overexposure. Three of the algorithms showed an accuracy which was high enough to be recommended for the processing of histogram-exposed hemispherical photographs: “Minimum” (Pc 98.8%; K 0.952), “Edge Detection” (Pc 98.1%; K 0.950), and “Minimum Histogram” (Pc 98.1%; K 0.947). The Minimum algorithm overestimated gap fraction least of all (11%). The overestimations by the algorithms Edge Detection (63%) and Minimum Histogram (67%) were considerably larger. For the remaining four evaluated algorithms (IsoData, Maximum Entropy, MinError, and Otsu) an

  13. Standardizing the protocol for hemispherical photographs: accuracy assessment of binarization algorithms.

    PubMed

    Glatthorn, Jonas; Beckschäfer, Philip

    2014-01-01

    Hemispherical photography is a well-established method to optically assess ecological parameters related to plant canopies; e.g. ground-level light regimes and the distribution of foliage within the crown space. Interpreting hemispherical photographs involves classifying pixels as either sky or vegetation. A wide range of automatic thresholding or binarization algorithms exists to classify the photographs. The variety in methodology hampers ability to compare results across studies. To identify an optimal threshold selection method, this study assessed the accuracy of seven binarization methods implemented in software currently available for the processing of hemispherical photographs. Therefore, binarizations obtained by the algorithms were compared to reference data generated through a manual binarization of a stratified random selection of pixels. This approach was adopted from the accuracy assessment of map classifications known from remote sensing studies. Percentage correct (Pc) and kappa-statistics (K) were calculated. The accuracy of the algorithms was assessed for photographs taken with automatic exposure settings (auto-exposure) and photographs taken with settings which avoid overexposure (histogram-exposure). In addition, gap fraction values derived from hemispherical photographs were compared with estimates derived from the manually classified reference pixels. All tested algorithms were shown to be sensitive to overexposure. Three of the algorithms showed an accuracy which was high enough to be recommended for the processing of histogram-exposed hemispherical photographs: "Minimum" (Pc 98.8%; K 0.952), "Edge Detection" (Pc 98.1%; K 0.950), and "Minimum Histogram" (Pc 98.1%; K 0.947). The Minimum algorithm overestimated gap fraction least of all (11%). The overestimation by the algorithms Edge Detection (63%) and Minimum Histogram (67%) were considerably larger. For the remaining four evaluated algorithms (IsoData, Maximum Entropy, MinError, and Otsu
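
    One of the evaluated binarization algorithms, Otsu's method, can be sketched on a synthetic stand-in for a grayscale hemispherical photograph, with gap fraction computed as the share of sky pixels; scikit-image is assumed to be available.

        # Binarize a grayscale image with Otsu's method and derive gap fraction as the
        # proportion of sky pixels.  A synthetic image stands in for a real photograph.
        import numpy as np
        from skimage.filters import threshold_otsu

        rng = np.random.default_rng(0)
        canopy = rng.normal(0.25, 0.05, size=(256, 256))    # dark vegetation pixels
        sky = rng.normal(0.85, 0.05, size=(256, 256))
        mask = rng.random((256, 256)) < 0.2                  # ~20% of pixels are sky
        image = np.clip(np.where(mask, sky, canopy), 0, 1)

        threshold = threshold_otsu(image)
        is_sky = image > threshold                           # binarization: sky vs. vegetation
        gap_fraction = is_sky.mean()
        print("Otsu threshold:", round(float(threshold), 3),
              "gap fraction:", round(float(gap_fraction), 3))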

  14. Walking Objectively Measured: Classifying Accelerometer Data with GPS and Travel Diaries

    PubMed Central

    Kang, Bumjoon; Moudon, Anne V.; Hurvitz, Philip M.; Reichley, Lucas; Saelens, Brian E.

    2013-01-01

    Purpose This study developed and tested an algorithm to classify accelerometer data as walking or non-walking using either GPS or travel diary data within a large sample of adults under free-living conditions. Methods Participants wore an accelerometer and a GPS unit, and concurrently completed a travel diary for 7 consecutive days. Physical activity (PA) bouts were identified using accelerometry count sequences. PA bouts were then classified as walking or non-walking based on a decision-tree algorithm consisting of 7 classification scenarios. Algorithm reliability was examined relative to two independent analysts’ classification of a 100-bout verification sample. The algorithm was then applied to the entire set of PA bouts. Results The 706 participants (mean age 51 years, 62% female, 80% non-Hispanic white, 70% college graduate or higher) yielded 4,702 person-days of data and a total of 13,971 PA bouts. The algorithm showed a mean agreement of 95% with the independent analysts. It classified physical activity into 8,170 (58.5%) walking bouts and 5,337 (38.2%) non-walking bouts; 464 (3.3%) bouts were not classified for lack of GPS and diary data. Nearly 70% of the walking bouts and 68% of the non-walking bouts were classified using only the objective accelerometer and GPS data. Travel diary data helped classify 30% of all bouts with no GPS data. The mean duration of PA bouts classified as walking was 15.2 min (SD=12.9). On average, participants had 1.7 walking bouts and 25.4 total walking minutes per day. Conclusions GPS and travel diary information can be helpful in classifying most accelerometer-derived PA bouts into walking or non-walking behavior. PMID:23439414

  15. Classifier Subset Selection for the Stacked Generalization Method Applied to Emotion Recognition in Speech.

    PubMed

    Álvarez, Aitor; Sierra, Basilio; Arruti, Andoni; López-Gil, Juan-Miguel; Garay-Vitoria, Nestor

    2015-01-01

    In this paper, a new supervised classification paradigm, called classifier subset selection for stacked generalization (CSS stacking), is presented to deal with speech emotion recognition. The new approach consists of an improvement of a bi-level multi-classifier system known as stacking generalization by means of an integration of an estimation of distribution algorithm (EDA) in the first layer to select the optimal subset from the standard base classifiers. The good performance of the proposed new paradigm was demonstrated over different configurations and datasets. First, several CSS stacking classifiers were constructed on the RekEmozio dataset, using some specific standard base classifiers and a total of 123 spectral, quality and prosodic features computed using in-house feature extraction algorithms. These initial CSS stacking classifiers were compared to other multi-classifier systems and the employed standard classifiers built on the same set of speech features. Then, new CSS stacking classifiers were built on RekEmozio using a different set of both acoustic parameters (extended version of the Geneva Minimalistic Acoustic Parameter Set (eGeMAPS)) and standard classifiers and employing the best meta-classifier of the initial experiments. The performance of these two CSS stacking classifiers was evaluated and compared. Finally, the new paradigm was tested on the well-known Berlin Emotional Speech database. We compared the performance of single, standard stacking and CSS stacking systems using the same parametrization of the second phase. All of the classifications were performed at the categorical level, including the six primary emotions plus the neutral one. PMID:26712757
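
    A minimal stacked-generalization baseline (without the EDA-based classifier subset selection proposed above), built with scikit-learn's StackingClassifier on toy data rather than the RekEmozio features.

        # Stacked generalization baseline: base classifiers in the first layer, a
        # meta-classifier trained on their predictions in the second.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import StackingClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.naive_bayes import GaussianNB
        from sklearn.svm import SVC
        from sklearn.tree import DecisionTreeClassifier

        X, y = make_classification(n_samples=500, n_features=20, n_informative=8,
                                   n_classes=3, n_clusters_per_class=1, random_state=0)
        stack = StackingClassifier(
            estimators=[("svm", SVC(probability=True)),
                        ("nb", GaussianNB()),
                        ("tree", DecisionTreeClassifier(max_depth=5))],
            final_estimator=LogisticRegression(max_iter=1000))   # the meta-classifier
        print("training accuracy:", stack.fit(X, y).score(X, y))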

  16. Classifier Subset Selection for the Stacked Generalization Method Applied to Emotion Recognition in Speech

    PubMed Central

    Álvarez, Aitor; Sierra, Basilio; Arruti, Andoni; López-Gil, Juan-Miguel; Garay-Vitoria, Nestor

    2015-01-01

    In this paper, a new supervised classification paradigm, called classifier subset selection for stacked generalization (CSS stacking), is presented to deal with speech emotion recognition. The new approach consists of an improvement of a bi-level multi-classifier system known as stacking generalization by means of an integration of an estimation of distribution algorithm (EDA) in the first layer to select the optimal subset from the standard base classifiers. The good performance of the proposed new paradigm was demonstrated over different configurations and datasets. First, several CSS stacking classifiers were constructed on the RekEmozio dataset, using some specific standard base classifiers and a total of 123 spectral, quality and prosodic features computed using in-house feature extraction algorithms. These initial CSS stacking classifiers were compared to other multi-classifier systems and the employed standard classifiers built on the same set of speech features. Then, new CSS stacking classifiers were built on RekEmozio using a different set of both acoustic parameters (extended version of the Geneva Minimalistic Acoustic Parameter Set (eGeMAPS)) and standard classifiers and employing the best meta-classifier of the initial experiments. The performance of these two CSS stacking classifiers was evaluated and compared. Finally, the new paradigm was tested on the well-known Berlin Emotional Speech database. We compared the performance of single, standard stacking and CSS stacking systems using the same parametrization of the second phase. All of the classifications were performed at the categorical level, including the six primary emotions plus the neutral one. PMID:26712757

  17. Automatic class labeling of classified imagery using a hyperspectral library

    NASA Astrophysics Data System (ADS)

    Parshakov, Ilia

    Image classification is a fundamental information extraction procedure in remote sensing that is used in land-cover and land-use mapping. Despite being considered a replacement for manual mapping, it still requires some degree of analyst intervention. This makes the process of image classification time-consuming, subjective, and error-prone. For example, in unsupervised classification, pixels are automatically grouped into classes, but the user has to manually label the classes as one land-cover type or another. As a general rule, the larger the number of classes, the more difficult it is to assign meaningful class labels. A fully automated post-classification procedure for class labeling was developed in an attempt to alleviate this problem. It labels spectral classes by matching their spectral characteristics with reference spectra. A Landsat TM image of an agricultural area was used for performance assessment. The algorithm was used to label a 20- and 100-class image generated by the ISODATA classifier. The 20-class image was used to compare the technique with the traditional manual labeling of classes, and the 100-class image was used to compare it with the Spectral Angle Mapper and Maximum Likelihood classifiers. The proposed technique produced a map that had an overall accuracy of 51%, outperforming the manual labeling (40% to 45% accuracy, depending on the analyst performing the labeling) and the Spectral Angle Mapper classifier (39%), but underperformed compared to the Maximum Likelihood technique (53% to 63%). The newly developed class-labeling algorithm provided better results for alfalfa, beans, corn, grass and sugar beet, whereas canola, corn, fallow, flax, potato, and wheat were identified with similar or lower accuracy, depending on the classifier it was compared with.
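
    The core matching step can be sketched as assigning each unlabeled class the library entry with the smallest spectral angle to the class mean spectrum; the band values below are hypothetical.

        # Label an unlabeled spectral class with the library entry whose reference spectrum
        # makes the smallest spectral angle with the class mean spectrum.
        import numpy as np

        def spectral_angle(a, b):
            cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            return np.arccos(np.clip(cos, -1.0, 1.0))

        library = {                       # hypothetical 4-band reference spectra
            "alfalfa": np.array([0.05, 0.09, 0.45, 0.30]),
            "wheat":   np.array([0.07, 0.12, 0.38, 0.35]),
            "fallow":  np.array([0.18, 0.22, 0.27, 0.30]),
        }

        def label_class(class_mean):
            return min(library, key=lambda name: spectral_angle(class_mean, library[name]))

        print(label_class(np.array([0.06, 0.10, 0.43, 0.31])))   # -> "alfalfa"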

  18. A candidate plasma protein classifier to identify Alzheimer's disease.

    PubMed

    Zhao, Xuemei; Lejnine, Serguei; Spond, Jeffrey; Zhang, Chunsheng; Ramaraj, T C; Holder, Daniel J; Dai, Hongyue; Weiner, Russell; Laterza, Omar F

    2015-01-01

    Biomarkers currently used to aid the diagnosis of Alzheimer's disease (AD) are cerebrospinal fluid (CSF) protein markers and brain neuroimaging markers. These biomarkers, however, either involve semi-invasive procedures or are costly to measure. Thus, AD biomarkers from more easily accessible body fluids, such as plasma, are very enticing. Using an aptamer-based proteomic technology, we profiled 1,129 plasma proteins of AD patients and non-demented control individuals. A 5-protein classifier for AD identification was constructed in the discovery study with excellent 10-fold cross-validation performance (90.1% sensitivity, 84.2% specificity, 87.9% accuracy, and an AUC of 0.94). In an independent validation study, the classifier was applied and correctly predicted AD with 100.0% sensitivity, 80.0% specificity, and 90.0% accuracy, matching or outperforming the CSF Aβ42 and tau biomarkers, whose performance was assessed in individually matched CSF samples obtained at the same visit as plasma sample collection. Moreover, the classifier also correctly predicted mild cognitive impairment, an early pre-dementia state of the disease, with 96.7% sensitivity, 80.0% specificity, and 92.5% accuracy. These studies demonstrate that plasma proteins could be used effectively and accurately to contribute to the clinical diagnosis of AD. Although additional and more diverse cohorts are needed for further validation of robustness, including support from postmortem diagnosis, the 5-protein classifier appears to be a promising blood test to contribute to the diagnosis of AD. PMID:25114072
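
    The sketch below shows, under stated assumptions, how cross-validated metrics like those reported for a small protein panel can be computed: synthetic data and a logistic-regression model stand in for the study's actual 5-protein panel and classifier.

      # Sketch: 10-fold cross-validated sensitivity, specificity, accuracy and AUC for
      # a 5-feature classifier; synthetic data and logistic regression are placeholders.
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import confusion_matrix, roc_auc_score
      from sklearn.model_selection import StratifiedKFold, cross_val_predict

      X, y = make_classification(n_samples=200, n_features=5, n_informative=5,
                                 n_redundant=0, random_state=0)
      cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
      proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                                cv=cv, method="predict_proba")[:, 1]
      pred = (proba >= 0.5).astype(int)
      tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
      print("sensitivity %.3f  specificity %.3f  accuracy %.3f  AUC %.3f" % (
          tp / (tp + fn), tn / (tn + fp), (tp + tn) / len(y), roc_auc_score(y, proba)))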

  20. 28 CFR 700.14 - Classified information.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Classified information. 700.14 Section... INFORMATION OF THE OFFICE OF INDEPENDENT COUNSEL Protection of Privacy and Access to Individual Records Under the Privacy Act of 1974 § 700.14 Classified information. In processing a request for access to...

  1. 28 CFR 700.14 - Classified information.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Classified information. 700.14 Section... INFORMATION OF THE OFFICE OF INDEPENDENT COUNSEL Protection of Privacy and Access to Individual Records Under the Privacy Act of 1974 § 700.14 Classified information. In processing a request for access to...

  2. 28 CFR 16.7 - Classified information.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... processing a request for information that is classified under Executive Order 12958 (3 CFR, 1996 Comp., p... 28 Judicial Administration 1 2013-07-01 2013-07-01 false Classified information. 16.7 Section 16.7 Judicial Administration DEPARTMENT OF JUSTICE PRODUCTION OR DISCLOSURE OF MATERIAL OR...

  3. 28 CFR 16.44 - Classified information.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 1 2013-07-01 2013-07-01 false Classified information. 16.44 Section 16.44 Judicial Administration DEPARTMENT OF JUSTICE PRODUCTION OR DISCLOSURE OF MATERIAL OR INFORMATION... information. In processing a request for access to a record containing information that is classified...

  4. 28 CFR 61.8 - Classified proposals.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Classified proposals. 61.8 Section 61.8 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) PROCEDURES FOR IMPLEMENTING THE NATIONAL ENVIRONMENTAL POLICY ACT Implementing Procedures § 61.8 Classified proposals. If an environmental...

  5. 28 CFR 61.8 - Classified proposals.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Classified proposals. 61.8 Section 61.8 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) PROCEDURES FOR IMPLEMENTING THE NATIONAL ENVIRONMENTAL POLICY ACT Implementing Procedures § 61.8 Classified proposals. If an environmental...

  6. 28 CFR 61.8 - Classified proposals.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Classified proposals. 61.8 Section 61.8 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) PROCEDURES FOR IMPLEMENTING THE NATIONAL ENVIRONMENTAL POLICY ACT Implementing Procedures § 61.8 Classified proposals. If an environmental...

  7. 28 CFR 61.8 - Classified proposals.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Classified proposals. 61.8 Section 61.8 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) PROCEDURES FOR IMPLEMENTING THE NATIONAL ENVIRONMENTAL POLICY ACT Implementing Procedures § 61.8 Classified proposals. If an environmental...

  8. 28 CFR 61.8 - Classified proposals.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Classified proposals. 61.8 Section 61.8 Judicial Administration DEPARTMENT OF JUSTICE (CONTINUED) PROCEDURES FOR IMPLEMENTING THE NATIONAL ENVIRONMENTAL POLICY ACT Implementing Procedures § 61.8 Classified proposals. If an environmental...

  9. 6 CFR 5.24 - Classified information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 6 Domestic Security 1 2010-01-01 2010-01-01 false Classified information. 5.24 Section 5.24 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY DISCLOSURE OF RECORDS AND INFORMATION Privacy Act § 5.24 Classified information. In processing a request for access to a...

  10. 6 CFR 5.7 - Classified information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... classified under Executive Order 12958 (3 CFR, 1996 Comp., p. 333) or any other executive order, the... 6 Domestic Security 1 2010-01-01 2010-01-01 false Classified information. 5.7 Section 5.7 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY DISCLOSURE OF RECORDS AND...

  11. Deconvolution When Classifying Noisy Data Involving Transformations

    PubMed Central

    Carroll, Raymond; Delaigle, Aurore; Hall, Peter

    2013-01-01

    In the present study, we consider the problem of classifying spatial data distorted by a linear transformation or convolution and contaminated by additive random noise. In this setting, we show that classifier performance can be improved if we carefully invert the data before the classifier is applied. However, the inverse transformation is not constructed so as to recover the original signal, and in fact, we show that taking the latter approach is generally inadvisable. We introduce a fully data-driven procedure based on cross-validation, and use several classifiers to illustrate numerical properties of our approach. Theoretical arguments are given in support of our claims. Our procedure is applied to data generated by light detection and ranging (Lidar) technology, where we improve on earlier approaches to classifying aerosols. This article has supplementary materials online. PMID:23606778

  12. Development and Testing of Data Mining Algorithms for Earth Observation

    NASA Technical Reports Server (NTRS)

    Glymour, Clark

    2005-01-01

    The new algorithms developed under this project included a principled procedure for classification of objects, events or circumstances according to a target variable when a very large number of potential predictor variables is available but the number of cases that can be used for training a classifier is relatively small. These "high dimensional" problems require finding a minimal set of variables, called the Markov Blanket, sufficient for predicting the value of the target variable. An algorithm, the Markov Blanket Fan Search, was developed, implemented and tested on both simulated and real data in conjunction with a graphical model classifier, which was also implemented. Another algorithm developed and implemented in TETRAD IV for time series elaborated on work by C. Granger and N. Swanson, which in turn exploited some of our earlier work. The algorithms in question learn a linear time series model from data. Given such a time series, the simultaneous residual covariances, after factoring out time dependencies, may provide information about causal processes that occur more rapidly than the time series representation allows, so-called simultaneous or contemporaneous causal processes. Working with A. Monetta, a graduate student from Italy, we produced the correct statistics for estimating the contemporaneous causal structure from time series data using the TETRAD IV suite of algorithms. Two economists, David Bessler and Kevin Hoover, have independently published applications using TETRAD-style algorithms for the same purpose. These implementations and algorithmic developments were separately used in two kinds of studies of climate data: short time series of geographically proximate climate variables predicting agricultural effects in California, and longer-duration climate measurements of temperature teleconnections.

  13. Cancer Classification in Microarray Data using a Hybrid Selective Independent Component Analysis and υ-Support Vector Machine Algorithm.

    PubMed

    Saberkari, Hamidreza; Shamsi, Mousa; Joroughi, Mahsa; Golabi, Faegheh; Sedaaghi, Mohammad Hossein

    2014-10-01

    Microarray data have an important role in the identification and classification of cancer tissues. Having only a few microarray samples is one of the main concerns in cancer research, and it leads to problems in designing classifiers. For this reason, gene selection techniques should be applied as preprocessing before classification to remove the noninformative genes from the microarray data. An appropriate gene selection method can significantly improve the performance of cancer classification. In this paper, we use selective independent component analysis (SICA) for decreasing the dimension of microarray data. Using this selective algorithm, we can solve the instability problem that occurs when conventional independent component analysis (ICA) methods are employed. First, the reconstruction error and the selective set are analyzed so that the independent components of each gene that contribute only a small part of the error in reconstructing a new sample are retained. Then, several modified support vector machine (υ-SVM) sub-classifiers are trained simultaneously. Eventually, the best sub-classifier with the highest recognition rate is selected. The proposed algorithm is applied to three cancer datasets (leukemia, breast cancer and lung cancer), and its results are compared with other existing methods. The results illustrate that the proposed algorithm (SICA + υ-SVM) has higher accuracy and validity in increasing the classification accuracy. In particular, our proposed algorithm exhibits a relative improvement of 3.3% in correct classification rate over the ICA + SVM and SVM algorithms on the lung cancer dataset.
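
    A hedged sketch of the general pipeline, with scikit-learn's standard FastICA standing in for the paper's selective ICA and a single ν-SVM in place of the pool of sub-classifiers; the synthetic matrix only mimics the few-samples/many-genes shape of microarray data.

      # Sketch: ICA-based dimensionality reduction followed by a nu-SVM; standard
      # FastICA stands in for the paper's selective ICA, synthetic data for microarrays.
      from sklearn.datasets import make_classification
      from sklearn.decomposition import FastICA
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import Pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import NuSVC

      X, y = make_classification(n_samples=120, n_features=500, n_informative=20,
                                 random_state=0)        # few samples, many "genes"
      pipe = Pipeline([
          ("scale", StandardScaler()),
          ("ica", FastICA(n_components=10, random_state=0, max_iter=1000)),
          ("svm", NuSVC(nu=0.3, kernel="rbf")),
      ])
      print("cv accuracy: %.3f" % cross_val_score(pipe, X, y, cv=5).mean())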

  14. Cancer Classification in Microarray Data using a Hybrid Selective Independent Component Analysis and υ-Support Vector Machine Algorithm

    PubMed Central

    Saberkari, Hamidreza; Shamsi, Mousa; Joroughi, Mahsa; Golabi, Faegheh; Sedaaghi, Mohammad Hossein

    2014-01-01

    Microarray data have an important role in the identification and classification of cancer tissues. Having only a few microarray samples is one of the main concerns in cancer research, and it leads to problems in designing classifiers. For this reason, gene selection techniques should be applied as preprocessing before classification to remove the noninformative genes from the microarray data. An appropriate gene selection method can significantly improve the performance of cancer classification. In this paper, we use selective independent component analysis (SICA) for decreasing the dimension of microarray data. Using this selective algorithm, we can solve the instability problem that occurs when conventional independent component analysis (ICA) methods are employed. First, the reconstruction error and the selective set are analyzed so that the independent components of each gene that contribute only a small part of the error in reconstructing a new sample are retained. Then, several modified support vector machine (υ-SVM) sub-classifiers are trained simultaneously. Eventually, the best sub-classifier with the highest recognition rate is selected. The proposed algorithm is applied to three cancer datasets (leukemia, breast cancer and lung cancer), and its results are compared with other existing methods. The results illustrate that the proposed algorithm (SICA + υ-SVM) has higher accuracy and validity in increasing the classification accuracy. In particular, our proposed algorithm exhibits a relative improvement of 3.3% in correct classification rate over the ICA + SVM and SVM algorithms on the lung cancer dataset. PMID:25426433

  15. Enhancing atlas based segmentation with multiclass linear classifiers

    SciTech Connect

    Sdika, Michaël

    2015-12-15

    Purpose: To present a method to enrich atlases for atlas based segmentation. Such enriched atlases can then be used as a single atlas or within a multiatlas framework. Methods: In this paper, machine learning techniques have been used to enhance the atlas based segmentation approach. The enhanced atlas defined in this work is a pair composed of a gray level image alongside an image of multiclass classifiers with one classifier per voxel. Each classifier embeds local information from the whole training dataset that allows for the correction of some systematic errors in the segmentation and accounts for the possible local registration errors. The authors also propose to use these images of classifiers within a multiatlas framework: results produced by a set of such local classifier atlases can be combined using a label fusion method. Results: Experiments have been made on the in vivo images of the IBSR dataset and a comparison has been made with several state-of-the-art methods such as FreeSurfer and the multiatlas nonlocal patch based method of Coupé or Rousseau. These experiments show that their method is competitive with state-of-the-art methods while having a low computational cost. Further enhancement has also been obtained with a multiatlas version of their method. It is also shown that, in this case, nonlocal fusion is unnecessary. The multiatlas fusion can therefore be done efficiently. Conclusions: The single atlas version has quality similar to that of state-of-the-art multiatlas methods but with the computational cost of a naive single atlas segmentation. The multiatlas version offers an improvement in quality and can be done efficiently without a nonlocal strategy.

  16. Quantitative Assessment of Magnetic Sensor Signal Processing Algorithms in a Wireless Tongue-Operated Assistive Technology

    PubMed Central

    Ayala-Acevedo, Abner; Ghovanloo, Maysam

    2015-01-01

    In this paper, we evaluate the overall performance of various magnetic-sensor signal processing (mSSP) algorithms for the Tongue Drive System based on a comprehensive dataset collected from trials with a total of eight able-bodied subjects. More specifically, we measure the performance of nine classifiers on the two-stage classification used by the mSSP algorithm, in order to learn how to improve the current algorithm. Results show that it is possible to reduce the misclassification error from 5.95% and 20.13% to 3.98% and 5.63% for the two assessed datasets, respectively, without sacrificing correctness. Furthermore, since the mSSP algorithm must run in real time, the results show where to focus the computational resources when the algorithm runs on platforms with limited resources, such as smartphones. PMID:23366729

  17. Block-classified motion compensation scheme for digital video

    SciTech Connect

    Zafar, S.; Zhang, Ya-Qin; Jabbari, B.

    1996-03-01

    A novel scheme for block-based motion compensation is introduced in which a block is classified according to the energy that is directly related to the motion activity it represents. This classification allows more flexibility in controlling the bit rate and the signal-to-noise ratio and results in a reduction in motion search complexity. The method introduced is not dependent on the particular type of motion search algorithm implemented and can thus be used with any method, assuming that the underlying matching criterion used is the minimum absolute difference. It has been shown that the method is superior to a simple motion compensation algorithm in which all blocks are motion compensated regardless of the energy remaining after the displaced difference.
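
    A small sketch of the classification step, assuming frame-difference energy as the block activity measure; the block size and threshold are illustrative, and the motion search itself is omitted.

      # Sketch: flag only the blocks whose frame-difference energy exceeds a threshold;
      # those would be passed to the motion search (omitted), the rest copied as-is.
      import numpy as np

      def classify_blocks(prev, curr, block=16, energy_thresh=500.0):
          h, w = curr.shape
          active = []
          for y in range(0, h - block + 1, block):
              for x in range(0, w - block + 1, block):
                  diff = curr[y:y+block, x:x+block].astype(float) \
                         - prev[y:y+block, x:x+block]
                  if np.sum(diff * diff) > energy_thresh:   # energy ~ motion activity
                      active.append((y, x))                 # needs motion compensation
          return active

      rng = np.random.default_rng(0)
      prev = rng.integers(0, 256, (64, 64)).astype(np.uint8)
      curr = prev.copy()
      curr[16:32, 16:32] = np.roll(curr[16:32, 16:32], 4, axis=1)  # simulate local motion
      print("blocks flagged for motion search:", classify_blocks(prev, curr))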

  18. Organizational coevolutionary classifiers with fuzzy logic used in intrusion detection

    NASA Astrophysics Data System (ADS)

    Chen, Zhenguo

    2009-07-01

    Intrusion detection is an important technique in the defense-in-depth network security framework and has been a hot topic in computer security in recent years. To address the intrusion detection problem, we introduce fuzzy logic into the Organizational Coevolutionary algorithm [1] and present an Organizational Coevolutionary Classification with Fuzzy Logic algorithm. In this paper, we give an intrusion detection model based on Organizational Coevolutionary Classification with Fuzzy Logic, illustrate the model with a representative dataset, and apply it to the real-world KDD Cup 1999 network dataset. The experimental results show that intrusion detection based on Organizational Coevolutionary Classifiers with Fuzzy Logic gives higher recognition accuracy than the general method.

  19. Logarithmic learning for generalized classifier neural network.

    PubMed

    Ozyildirim, Buse Melis; Avci, Mutlu

    2014-12-01

    The generalized classifier neural network has been introduced as an efficient classifier among others. Unless the initial smoothing parameter value is close to the optimal one, the generalized classifier neural network suffers from convergence problems and requires quite a long time to converge. In this work, to overcome this problem, a logarithmic learning approach is proposed. The proposed method uses a logarithmic cost function instead of the squared error. Minimization of this cost function reduces the number of iterations needed to reach the minimum. The proposed method is tested on 15 different data sets and the performance of the logarithmic-learning generalized classifier neural network is compared with that of the standard one. Thanks to the operating range of the radial basis function used by the generalized classifier neural network, the proposed logarithmic cost and its derivative take continuous values, which allows the learning method to exploit the fast convergence of the logarithmic loss. Due to this fast convergence, training time is reduced by up to 99.2%. In addition to the decrease in training time, classification performance may also be improved by up to 60%. According to the test results, while the proposed method provides a solution for the time requirement problem of the generalized classifier neural network, it may also improve the classification accuracy. The proposed method can be considered an efficient way of reducing the time requirement of the generalized classifier neural network.
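
    To illustrate the loss-function idea only (a plain logistic unit, not the paper's generalized classifier neural network), the sketch below trains the same model with a squared-error cost and with a logarithmic (cross-entropy) cost; the logarithmic gradient drops the vanishing p(1-p) factor, which is the usual intuition for its faster convergence.

      # Sketch: the same logistic unit trained with a squared-error cost and with a
      # logarithmic (cross-entropy) cost; not the paper's generalized-classifier update.
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.standard_normal((200, 2))
      y = (X[:, 0] + X[:, 1] > 0).astype(float)

      def train(loss, lr=0.5, iters=200):
          w = np.zeros(2)
          for _ in range(iters):
              p = 1.0 / (1.0 + np.exp(-X @ w))            # sigmoid output
              if loss == "squared":
                  grad = X.T @ ((p - y) * p * (1 - p)) / len(y)
              else:                                       # logarithmic cost
                  grad = X.T @ (p - y) / len(y)
              w -= lr * grad
          return ((p >= 0.5) == y).mean()                 # training accuracy

      print("squared-error cost, accuracy:", train("squared"))
      print("logarithmic cost,   accuracy:", train("log"))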

  20. Effects of cultural characteristics on building an emotion classifier through facial expression analysis

    NASA Astrophysics Data System (ADS)

    da Silva, Flávio Altinier Maximiano; Pedrini, Helio

    2015-03-01

    Facial expressions are an important demonstration of human moods and emotions. Algorithms capable of recognizing facial expressions and associating them with emotions were developed and employed to compare the expressions that different cultural groups use to show their emotions. Static pictures of predominantly occidental and oriental subjects from public datasets were used to train machine learning algorithms, whereas local binary patterns, histograms of oriented gradients (HOGs), and Gabor filters were employed to describe the facial expressions for six different basic emotions. The most consistent combination, formed by the association of the HOG descriptor and support vector machines, was then used to classify the other cultural group: there was a strong drop in accuracy, meaning that the subtle differences in the facial expressions of each culture affected the classifier performance. Finally, a classifier was trained with images from both occidental and oriental subjects and its accuracy was higher on multicultural data, evidencing the need for a multicultural training set to build an efficient classifier.
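
    A minimal sketch of the HOG-plus-linear-SVM combination reported as most consistent, assuming scikit-image and scikit-learn; random arrays stand in for aligned face crops, so the printed accuracy is only a placeholder.

      # Sketch of the HOG + linear-SVM combination; random arrays stand in for aligned
      # face crops, so the cross-validated accuracy printed here is near chance.
      import numpy as np
      from skimage.feature import hog
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(0)
      faces = rng.random((60, 64, 64))            # placeholder 64x64 grayscale "faces"
      labels = rng.integers(0, 6, 60)             # six basic emotions

      features = np.array([hog(img, orientations=9, pixels_per_cell=(8, 8),
                               cells_per_block=(2, 2)) for img in faces])
      print("cv accuracy: %.3f" %
            cross_val_score(LinearSVC(max_iter=5000), features, labels, cv=3).mean())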

  1. Comparison of approaches to classifier fusion for improving mine detection/classification performance

    NASA Astrophysics Data System (ADS)

    Bello, Martin G.

    2002-08-01

    We describe here the current form of Alphatech's image processing and neural network based algorithms for detection and classification of mines in side-scan sonar imagery, and results obtained from their application. In particular, drawing on the machine learning literature, we contrast results obtained from employing the bagging and boosting methods for classifier fusion, in an attempt to obtain more desirable performance characteristics than those achieved with single classifiers.
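
    For readers unfamiliar with the two fusion schemes, the sketch below contrasts bagging and boosting over the same weak learner on synthetic data (a recent scikit-learn is assumed for the estimator keyword); it is a generic illustration, not Alphatech's sonar pipeline.

      # Sketch: bagging versus boosting over the same shallow decision tree on
      # synthetic data (not sonar imagery).
      from sklearn.datasets import make_classification
      from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
      from sklearn.model_selection import cross_val_score
      from sklearn.tree import DecisionTreeClassifier

      X, y = make_classification(n_samples=500, n_features=20, random_state=0)
      base = DecisionTreeClassifier(max_depth=2, random_state=0)
      ensembles = {
          "bagging": BaggingClassifier(estimator=base, n_estimators=50, random_state=0),
          "boosting": AdaBoostClassifier(estimator=base, n_estimators=50, random_state=0),
      }
      for name, clf in ensembles.items():
          print(name, "cv accuracy: %.3f" % cross_val_score(clf, X, y, cv=5).mean())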

  2. Recognition of multiple imbalanced cancer types based on DNA microarray data using ensemble classifiers.

    PubMed

    Yu, Hualong; Hong, Shufang; Yang, Xibei; Ni, Jun; Dan, Yuanyuan; Qin, Bin

    2013-01-01

    DNA microarray technology can measure the activities of tens of thousands of genes simultaneously, which provides an efficient way to diagnose cancer at the molecular level. Although this strategy has attracted significant research attention, most studies neglect an important problem, namely, that most DNA microarray datasets are skewed, which causes traditional learning algorithms to produce inaccurate results. Some studies have considered this problem, yet they merely focus on the binary-class case. In this paper, we dealt with the multiclass imbalanced classification problem, as encountered in cancer DNA microarrays, by using ensemble learning. We utilized a one-against-all coding strategy to transform the multiclass problem into multiple binary-class problems, applying to each of them the feature subspace technique, an evolving version of the random subspace method that generates multiple diverse training subsets. Next, we introduced one of two different correction technologies, namely, decision threshold adjustment or random undersampling, into each training subset to alleviate the damage of class imbalance. Specifically, a support vector machine was used as the base classifier, and a novel voting rule called counter voting was presented for making the final decision. Experimental results on eight skewed multiclass cancer microarray datasets indicate that unlike many traditional classification approaches, our methods are insensitive to class imbalance. PMID:24078908
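
    A simplified sketch of the decomposition-plus-undersampling idea: each one-against-all binary task is trained on an undersampled subset and the per-class decision scores are combined by a plain arg-max vote; the paper's feature subspaces and counter-voting rule are not reproduced here.

      # Sketch: one-against-all decomposition with random undersampling in each binary
      # task; final decision by arg-max over the per-class SVM scores.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=600, n_features=30, n_classes=4,
                                 n_informative=10, weights=[0.55, 0.25, 0.15, 0.05],
                                 random_state=0)
      rng = np.random.default_rng(0)
      classes = np.unique(y)
      models = {}
      for c in classes:
          pos = np.where(y == c)[0]
          neg_pool = np.where(y != c)[0]
          neg = rng.choice(neg_pool, size=min(len(pos), len(neg_pool)), replace=False)
          idx = np.concatenate([pos, neg])            # balanced binary training subset
          models[c] = SVC().fit(X[idx], (y[idx] == c).astype(int))

      scores = np.column_stack([models[c].decision_function(X) for c in classes])
      pred = classes[np.argmax(scores, axis=1)]
      print("training-set accuracy of the undersampled one-vs-all ensemble: %.3f"
            % (pred == y).mean())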

  3. Receiver operating characteristic for a spectrogram correlator-based humpback whale detector-classifier.

    PubMed

    Abbot, Ted A; Premus, Vincent E; Abbot, Philip A; Mayer, Owen A

    2012-09-01

    This paper presents recent experimental results and a discussion of system enhancements made to the real-time autonomous humpback whale detector-classifier algorithm first presented by Abbot et al. [J. Acoust. Soc. Am. 127, 2894-2903 (2010)]. In February 2010, a second-generation system was deployed in an experiment conducted off of leeward Kauai during which 26 h of humpback vocalizations were recorded via sonobuoy and processed in real time. These data have been analyzed along with 40 h of humpbacks-absent data collected from the same location during July-August 2009. The extensive whales-absent data set in particular has enabled the quantification of system false alarm rates and the measurement of receiver operating characteristic curves. The performance impact of three enhancements incorporated into the second-generation system is discussed, including (1) a method to eliminate redundancy in the kernel library, (2) increased use of contextual analysis, and (3) the augmentation of the training data with more recent humpback vocalizations. It will be shown that the performance of the real-time system was improved to yield a probability of correct classification of 0.93 and a probability of false alarm of 0.004 over the 66 h of independent test data.

  4. Artificial neural networks for classifying olfactory signals.

    PubMed

    Linder, R; Pöppl, S J

    2000-01-01

    For practical applications, artificial neural networks have to meet several requirements: mainly, they should learn quickly, classify accurately and behave robustly. Programs should be user-friendly and should not need the presence of an expert for fine-tuning diverse learning parameters. The present paper demonstrates an approach using an oversized network topology, adaptive propagation (APROP), a modified error function, and averaging of the outputs of four networks, described here for the first time. As an example, signals from different semiconductor gas sensors of an electronic nose were classified. The electronic nose smelt different types of edible oil with extremely different a priori probabilities. The fully specified neural network classifier fulfilled the above-mentioned demands. The new approach will be helpful not only for classifying olfactory signals automatically but also in many other fields in medicine, e.g. in data mining from medical databases.

  5. How Is Acute Lymphocytic Leukemia Classified?

    MedlinePlus

    ... How is acute lymphocytic leukemia treated? How is acute lymphocytic leukemia classified? Most types of cancers are assigned numbered ... ALL are now named as follows: B-cell ALL Early pre-B ALL (also called pro-B ...

  6. 5 CFR 1312.4 - Classified designations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., (50 U.S.C. 401) Executive Order 12958 provides the only basis for classifying information. Information...) Top Secret. This classification shall be applied only to information the unauthorized disclosure...

  7. 5 CFR 1312.4 - Classified designations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., (50 U.S.C. 401) Executive Order 12958 provides the only basis for classifying information. Information...) Top Secret. This classification shall be applied only to information the unauthorized disclosure...

  8. 5 CFR 1312.4 - Classified designations.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., (50 U.S.C. 401) Executive Order 12958 provides the only basis for classifying information. Information...) Top Secret. This classification shall be applied only to information the unauthorized disclosure...

  9. 5 CFR 1312.4 - Classified designations.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ..., (50 U.S.C. 401) Executive Order 12958 provides the only basis for classifying information. Information...) Top Secret. This classification shall be applied only to information the unauthorized disclosure...

  10. 5 CFR 1312.4 - Classified designations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., (50 U.S.C. 401) Executive Order 12958 provides the only basis for classifying information. Information...) Top Secret. This classification shall be applied only to information the unauthorized disclosure...

  11. Correction of Facial Deformity in Sturge–Weber Syndrome

    PubMed Central

    Yamaguchi, Kazuaki; Lonic, Daniel; Chen, Chit

    2016-01-01

    Background: Although previous studies have reported soft-tissue management in surgical treatment of Sturge–Weber syndrome (SWS), there are few reports describing facial bone surgery in this patient group. The purpose of this study is to examine the validity of our multidisciplinary algorithm for correcting facial deformities associated with SWS. To the best of our knowledge, this is the first study on orthognathic surgery for SWS patients. Methods: A retrospective chart review included 2 SWS patients who completed the surgical treatment algorithm. Radiographic and clinical data were recorded, and a treatment algorithm was derived. Results: According to the Roach classification, the first patient was classified as type I presenting with both facial and leptomeningeal vascular anomalies without glaucoma and the second patient as type II presenting only with a hemifacial capillary malformation. Considering positive findings in seizure history and intracranial vascular anomalies in the first case, the anesthetic management was modified to omit hypotensive anesthesia because of the potential risk of intracranial pressure elevation. Primarily, both patients underwent 2-jaw orthognathic surgery and facial bone contouring including genioplasty, zygomatic reduction, buccal fat pad removal, and masseter reduction without major complications. In the second step, the volume and distribution of facial soft tissues were altered by surgical resection and reposition. Both patients were satisfied with the surgical result. Conclusions: Our multidisciplinary algorithm can systematically detect potential risk factors. Correction of the asymmetric face by successive bone and soft-tissue surgery enables the patients to reduce their psychosocial burden and increase their quality of life. PMID:27622111

  12. Design of partially supervised classifiers for multispectral image data

    NASA Technical Reports Server (NTRS)

    Jeon, Byeungwoo; Landgrebe, David

    1993-01-01

    A partially supervised classification problem is addressed, especially the case when the class definition and corresponding training samples are provided a priori for only one particular class. In practical applications of pattern classification techniques, a frequently observed characteristic is the heavy, often nearly impossible requirement of representative prior statistical characteristics for all classes in a given data set. Considering the effort in both time and man-power required to have a well-defined, exhaustive list of classes with a corresponding representative set of training samples, this 'partially' supervised capability would be very desirable, assuming adequate classifier performance can be obtained. Two different classification algorithms are developed to achieve simplicity in classifier design by reducing the requirement of prior statistical information without sacrificing significant classifying capability. The first one is based on optimal significance testing, where the optimal acceptance probability is estimated directly from the data set. In the second approach, the partially supervised classification is considered as a problem of unsupervised clustering with initially one known cluster or class. A weighted unsupervised clustering procedure is developed to automatically define other classes and estimate their class statistics. The operational simplicity thus realized should make these partially supervised classification schemes very viable tools in pattern classification.

  13. Evaluation of Bayesian network to classify clustered microcalcifications

    NASA Astrophysics Data System (ADS)

    Patrocinio, Ana C.; Schiabel, Homero; Romero, Roseli A. F.

    2004-05-01

    The purpose of this work is the evaluation and analysis of Bayesian network models for classifying clusters of microcalcifications, in order to supply a second opinion to specialists in the detection of breast diseases by mammography. Bayesian networks are statistical techniques which provide explanations about the inferences and influences among the features and classes of a given problem. Investigation of this technique will therefore aid in obtaining more detailed information for the diagnosis in a CAD scheme. From regions of interest (ROIs) containing clusters of microcalcifications, a detailed pixel-by-pixel image analysis was performed and shape descriptors were extracted, including geometric descriptors (Hu invariant moments, second- and third-order moments and radius of gyration), an irregularity measure, compactness, area and perimeter. Using software for Bayesian network model construction, different Bayesian network classifier models could be generated from the extracted features mentioned above in order to verify their behavior and probabilistic influences; these features were used as the input to the Bayesian network, and a series of tests was performed to build the classifier. The validation results for the generated network models correspond to an average of 10 tests made with 6 different database sub-groups. The first validation results showed 83.17% correct classifications.

  14. Comparable performance for classifier trained on real or synthetic IR-images

    NASA Astrophysics Data System (ADS)

    Weber, Bruce A.; Penn, Joseph A.

    2001-10-01

    We report results that demonstrate that an infrared (IR) target classifier, trained on synthetic images of targets and tested on real images, can perform as well as a classifier trained on real images alone. We also demonstrate that the sum of the real and synthetic image databases can be used to train a classifier whose performance exceeds that of classifiers trained on either database alone. After creating a large database of 80,000 synthetic images, two subset databases of 7,000 and 8,000 images were selected and used to train and test a classifier against two comparably sized, sequestered databases of real images. Synthetic image selection was accomplished using classifiers trained on real images from the sequestered real-image databases. The images were chosen if they were correctly identified for both target and target aspect. Results suggest that subsets of synthetic images can be chosen to selectively train target classifiers for specific locations and operational scenarios, and that it should be possible to train classifiers on synthetic images that outperform classifiers trained on real images alone.

  15. Construction of Pancreatic Cancer Classifier Based on SVM Optimized by Improved FOA

    PubMed Central

    Jiang, Huiyan; Zhao, Di; Zheng, Ruiping; Ma, Xiaoqi

    2015-01-01

    A novel method is proposed to establish a pancreatic cancer classifier. First, the concepts of quantum coding and the fruit fly optimization algorithm (FOA) are introduced. Then the FOA is improved by quantum coding and quantum operations, and a new smell concentration determination function is defined. Finally, the improved FOA is used to optimize the parameters of a support vector machine (SVM) and the classifier is established with the optimized SVM. In order to verify the effectiveness of the proposed method, SVM and other classification methods were chosen as comparison methods. The experimental results show that the proposed method can improve classifier performance at a lower computational cost. PMID:26543867
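
    As a stand-in for the improved FOA, the sketch below tunes the SVM's (C, gamma) with a plain random search whose objective (cross-validated accuracy) plays the role of the smell concentration function; the dataset and search ranges are illustrative.

      # Sketch: random search over (C, gamma) in place of the improved fruit fly
      # optimization; cross-validated accuracy is the "smell concentration" objective.
      import numpy as np
      from sklearn.datasets import load_breast_cancer
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      X, y = load_breast_cancer(return_X_y=True)
      rng = np.random.default_rng(0)
      best = (-1.0, None)
      for _ in range(30):                                  # 30 random "flies"
          C, gamma = 10 ** rng.uniform(-2, 3), 10 ** rng.uniform(-4, 0)
          model = make_pipeline(StandardScaler(), SVC(C=C, gamma=gamma))
          fitness = cross_val_score(model, X, y, cv=3).mean()
          if fitness > best[0]:
              best = (fitness, (C, gamma))
      print("best cv accuracy %.3f at (C, gamma) = (%.3g, %.3g)" % (best[0], *best[1]))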

  16. Construction of Pancreatic Cancer Classifier Based on SVM Optimized by Improved FOA.

    PubMed

    Jiang, Huiyan; Zhao, Di; Zheng, Ruiping; Ma, Xiaoqi

    2015-01-01

    A novel method is proposed to establish a pancreatic cancer classifier. First, the concepts of quantum coding and the fruit fly optimization algorithm (FOA) are introduced. Then the FOA is improved by quantum coding and quantum operations, and a new smell concentration determination function is defined. Finally, the improved FOA is used to optimize the parameters of a support vector machine (SVM) and the classifier is established with the optimized SVM. In order to verify the effectiveness of the proposed method, SVM and other classification methods were chosen as comparison methods. The experimental results show that the proposed method can improve classifier performance at a lower computational cost.

  17. PPCM: Combing Multiple Classifiers to Improve Protein-Protein Interaction Prediction

    DOE PAGES

    Yao, Jianzhuang; Guo, Hong; Yang, Xiaohan

    2015-01-01

    Determining protein-protein interaction (PPI) in biological systems is of considerable importance, and prediction of PPI has become a popular research area. Although different classifiers have been developed for PPI prediction, no single classifier seems to be able to predict PPI with high confidence. We postulated that by combining individual classifiers the accuracy of PPI prediction could be improved. We developed a method called protein-protein interaction prediction classifiers merger (PPCM), and this method combines output from two PPI prediction tools, GO2PPI and Phyloprof, using the Random Forests algorithm. The performance of PPCM was tested by area under the curve (AUC) using an assembled Gold Standard database that contains both positive and negative PPI pairs. Our AUC test showed that PPCM significantly improved the PPI prediction accuracy over the corresponding individual classifiers. We found that additional classifiers incorporated into PPCM could lead to further improvement in the PPI prediction accuracy. Furthermore, cross species PPCM could achieve competitive and even better prediction accuracy compared to the single species PPCM. This study established a robust pipeline for PPI prediction by integrating multiple classifiers using the Random Forests algorithm. This pipeline will be useful for predicting PPI in nonmodel species.
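
    A minimal sketch of the merging idea, assuming scikit-learn: two noisy score columns stand in for the GO2PPI and Phyloprof outputs, and a Random Forest learns the final call from them; AUCs are printed for the individual scores and the merged model.

      # Sketch: two noisy score columns (stand-ins for GO2PPI and Phyloprof outputs)
      # feed a Random Forest that makes the merged prediction.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(0)
      y = rng.integers(0, 2, 500)                          # gold-standard PPI labels
      score_a = y + rng.normal(0, 1.2, 500)                # imperfect predictor A
      score_b = y + rng.normal(0, 1.5, 500)                # imperfect predictor B
      X = np.column_stack([score_a, score_b])

      merged = RandomForestClassifier(n_estimators=200, random_state=0)
      proba = cross_val_predict(merged, X, y, cv=5, method="predict_proba")[:, 1]
      print("AUC, predictor A alone: %.3f" % roc_auc_score(y, score_a))
      print("AUC, predictor B alone: %.3f" % roc_auc_score(y, score_b))
      print("AUC, merged model:      %.3f" % roc_auc_score(y, proba))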

  18. A Modified Decision Tree Algorithm Based on Genetic Algorithm for Mobile User Classification Problem

    PubMed Central

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, we should first classify the mobile users. Aimed at the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as classification attributes for the mobile user, and we classify the context into public context and private context classes. Then we analyze the processes and operators of the algorithm. Finally, we run an experiment on mobile users with the algorithm: we can classify the mobile users into Basic service user, E-service user, Plus service user, and Total service user classes, and we can also derive some rules about the mobile users. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and more simplicity. PMID:24688389

  19. A modified decision tree algorithm based on genetic algorithm for mobile user classification problem.

    PubMed

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, we should first classify the mobile users. Aimed at the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as classification attributes for the mobile user, and we classify the context into public context and private context classes. Then we analyze the processes and operators of the algorithm. Finally, we run an experiment on mobile users with the algorithm: we can classify the mobile users into Basic service user, E-service user, Plus service user, and Total service user classes, and we can also derive some rules about the mobile users. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and more simplicity. PMID:24688389

  1. Localization Algorithms of Underwater Wireless Sensor Networks: A Survey

    PubMed Central

    Han, Guangjie; Jiang, Jinfang; Shu, Lei; Xu, Yongjun; Wang, Feng

    2012-01-01

    In Underwater Wireless Sensor Networks (UWSNs), localization is one of the most important technologies, since it plays a critical role in many applications. Motivated by the widespread adoption of localization, in this paper we present a comprehensive survey of localization algorithms. First, we classify localization algorithms into three categories based on sensor nodes' mobility: stationary localization algorithms, mobile localization algorithms and hybrid localization algorithms. Moreover, we compare the localization algorithms in detail and analyze future research directions of localization algorithms in UWSNs. PMID:22438752

  2. A new method for classifying different phenotypes of kidney transplantation.

    PubMed

    Zhu, Dong; Liu, Zexian; Pan, Zhicheng; Qian, Mengjia; Wang, Linyan; Zhu, Tongyu; Xue, Yu; Wu, Duojiao

    2016-08-01

    For end-stage renal diseases, kidney transplantation is the most efficient treatment. However, unexpected rejection caused by inflammation usually leads to allograft failure. Thus, a systems-level characterization of inflammation factors can provide potentially diagnostic biomarkers for predicting renal allograft rejection. Serum samples of kidney transplant patients with different immune statuses were collected and classified as transplant patients with stable renal function (ST), impaired renal function with negative biopsy pathology (UNST), acute rejection (AR), and chronic rejection (CR). The expression profiles of 40 inflammatory proteins were measured by quantitative protein microarrays and reduced to a lower-dimensional space by the partial least squares (PLS) model. The determined principal components (PCs) were then used to train a support vector machine (SVM) for classifying the different phenotypes of kidney transplantation. There were 30, 16, and 13 inflammation proteins that showed statistically significant differences between CR and ST, CR and AR, and CR and UNST patients. Further analysis revealed a protein-protein interaction (PPI) network among 33 inflammatory proteins and proposed a potential role of intracellular adhesion molecule-1 (ICAM-1) in CR. Based on the network analysis and protein expression information, two PCs were determined as the major contributors and trained by the PLS-SVM method, with a promising accuracy of 77.5% for classification of chronic rejection after kidney transplantation. For convenience, we also developed the software package GPS-CKT (Classification Phenotype of Kidney Transplantation predictor) for classifying phenotypes. By confirming a strong correlation between inflammation and kidney transplantation, our results suggest that network biomarkers, rather than single factors, can potentially classify the different phenotypes in kidney transplantation. PMID:27278387
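
    A hedged sketch of the PLS-then-SVM pipeline on synthetic data (binary instead of the study's four phenotypes): the 40-column matrix stands in for the inflammation-protein panel, two PLS components play the role of the selected components, and an SVM is trained on them.

      # Sketch: project a 40-protein panel onto two PLS components, then train an SVM
      # on those components; synthetic binary data instead of the study's four classes.
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=120, n_features=40, n_informative=8,
                                 random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      pls = PLSRegression(n_components=2).fit(X_tr, y_tr)   # supervised projection
      svm = SVC().fit(pls.transform(X_tr), y_tr)
      print("test accuracy: %.3f" % svm.score(pls.transform(X_te), y_te))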

  3. Influence of atmospheric correction on image classification for irrigated agriculture in the Lower Colorado River Basin

    NASA Astrophysics Data System (ADS)

    Wei, X.

    2012-12-01

    Atmospheric correction is essential for accurate quantitative information retrieval from satellite imagery. In this paper, we applied an atmospheric correction algorithm, the Second Simulation of a Satellite Signal in the Solar Spectrum (6S) radiative transfer code, to retrieve surface reflectance from Landsat 5 Thematic Mapper (TM) imagery for the Palo Verde Irrigation District (PVID) within the lower Colorado River basin. The 6S code was run with input data of visibility, aerosol optical depth, pressure, temperature, water vapour, and ozone from local measurements. The 6S-corrected image of PVID was classified into the irrigated agriculture classes of alfalfa, cotton, melons, corn, grass, and vegetables. We applied multiple classification methods: maximum likelihood, fuzzy means, and object-oriented classification. Using field crop-type data, we conducted an accuracy assessment of the results from the 6S-corrected image and the uncorrected image and found a consistent improvement in classification accuracy for the 6S-corrected image. The study shows that the 6S code is a robust atmospheric correction method, providing a better simulation of surface reflectance and improving image classification accuracy.

  4. A new approach to identify, classify and count drug-related events

    PubMed Central

    Bürkle, Thomas; Müller, Fabian; Patapovas, Andrius; Sonst, Anja; Pfistermeister, Barbara; Plank-Kiegele, Bettina; Dormann, Harald; Maas, Renke

    2013-01-01

    Aims The incidence of clinical events related to medication errors and/or adverse drug reactions reported in the literature varies by a degree that cannot solely be explained by the clinical setting, the varying scrutiny of investigators or varying definitions of drug-related events. Our hypothesis was that the individual complexity of many clinical cases may pose relevant limitations for current definitions and algorithms used to identify, classify and count adverse drug-related events. Methods Based on clinical cases derived from an observational study we identified and classified common clinical problems that cannot be adequately characterized by the currently used definitions and algorithms. Results It appears that some key models currently used to describe the relation of medication errors (MEs), adverse drug reactions (ADRs) and adverse drug events (ADEs) can easily be misinterpreted or contain logical inconsistencies that limit their accurate use to all but the simplest clinical cases. A key limitation of current models is the inability to deal with complex interactions such as one drug causing two clinically distinct side effects or multiple drugs contributing to a single clinical event. Using a large set of clinical cases we developed a revised model of the interdependence between MEs, ADEs and ADRs and extended current event definitions when multiple medications cause multiple types of problems. We propose algorithms that may help to improve the identification, classification and counting of drug-related events. Conclusions The new model may help to overcome some of the limitations that complex clinical cases pose to current paper- or software-based drug therapy safety. PMID:24007453

  5. A web-based neurological pain classifier tool utilizing Bayesian decision theory for pain classification in spinal cord injury patients

    NASA Astrophysics Data System (ADS)

    Verma, Sneha K.; Chun, Sophia; Liu, Brent J.

    2014-03-01

    Pain is a common complication after spinal cord injury, with prevalence estimates ranging from 77% to 81%, and it highly affects a patient's lifestyle and well-being. In the current clinical setting, paper-based forms are used to classify pain correctly; however, the accuracy of diagnosis and the optimal management of pain largely depend on the expert reviewer, which in many cases is not possible because there are very few experts in this field. The need for a clinical decision support system that can be used by expert and non-expert clinicians has been cited in the literature, but such a system has not been developed. We have designed and developed a stand-alone tool for correctly classifying pain type in spinal cord injury (SCI) patients, using Bayesian decision theory. Various machine learning simulation methods are used to verify the algorithm using a pilot-study data set of 48 patients. The data set consists of the paper-based forms collected at the Long Beach VA clinic, with pain classification done by an expert in the field. Using WEKA as the machine learning tool, we have tested on the 48-patient dataset the hypothesis that the attributes collected on the forms and the pain locations marked by patients have a very significant impact on pain type classification. This tool will be integrated with an imaging informatics system to support a clinical study that will test the effectiveness of using proton beam radiotherapy for treating spinal cord injury (SCI) related neuropathic pain as an alternative to invasive surgical lesioning.

  6. What are the differences between Bayesian classifiers and mutual-information classifiers?

    PubMed

    Hu, Bao-Gang

    2014-02-01

    In this paper, both Bayesian and mutual-information classifiers are examined for binary classifications with or without a reject option. The general decision rules are derived for Bayesian classifiers with distinctions on error types and reject types. A formal analysis is conducted to reveal the parameter redundancy of cost terms when abstaining classifications are enforced. The redundancy implies an intrinsic problem of nonconsistency for interpreting cost terms. If no data are given to the cost terms, we demonstrate the weakness of Bayesian classifiers in class-imbalanced classifications. On the contrary, mutual-information classifiers are able to provide an objective solution from the given data, which shows a reasonable balance among error types and reject types. Numerical examples of using two types of classifiers are given for confirming the differences, including the extremely class-imbalanced cases. Finally, we briefly summarize the Bayesian and mutual-information classifiers in terms of their application advantages and disadvantages, respectively.
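
    The following toy sketch illustrates one of the ingredients discussed here, a Bayesian decision rule with a reject option in a class-imbalanced setting; the Gaussian class models, the 0.1 prior and the reject band are illustrative choices, not taken from the paper.

      # Sketch: a binary Bayesian decision rule with a reject option; samples whose
      # posterior falls in the reject band are abstained on instead of classified.
      import numpy as np
      from scipy.stats import norm

      prior1 = 0.1                                  # minority class prior
      x = np.linspace(-4, 6, 1001)
      p1 = prior1 * norm.pdf(x, loc=2, scale=1)
      p0 = (1 - prior1) * norm.pdf(x, loc=0, scale=1)
      post1 = p1 / (p1 + p0)                        # posterior of class 1

      reject_band = (0.4, 0.6)                      # abstain when posterior is ambiguous
      decision = np.where(post1 >= reject_band[1], "class 1",
                          np.where(post1 <= reject_band[0], "class 0", "reject"))
      for label in ("class 0", "reject", "class 1"):
          xs = x[decision == label]
          print("%s region: x in [%.2f, %.2f]" % (label, xs.min(), xs.max()))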

  7. A CORRECTION.

    PubMed

    Johnson, D

    1940-03-22

    IN a recently published volume on "The Origin of Submarine Canyons" the writer inadvertently credited to A. C. Veatch an excerpt from a submarine chart actually contoured by P. A. Smith, of the U. S. Coast and Geodetic Survey. The chart in question is Chart IVB of Special Paper No. 7 of the Geological Society of America entitled "Atlantic Submarine Valleys of the United States and the Congo Submarine Valley, by A. C. Veatch and P. A. Smith," and the excerpt appears as Plate III of the volume first cited above. In view of the heavy labor involved in contouring the charts accompanying the paper by Veatch and Smith and the beauty of the finished product, it would be unfair to Mr. Smith to permit the error to go uncorrected. Excerpts from two other charts are correctly ascribed to Dr. Veatch. PMID:17839404

  9. Optimal classifier feedback improves cost-benefit but not base-rate decision criterion learning in perceptual categorization.

    PubMed

    Maddox, W Todd; Bohil, Corey J

    2005-03-01

    Unequal payoffs engender separate reward- and accuracy-maximizing decision criteria; unequal base rates do not. When payoffs are unequal, observers place greater emphasis on accuracy than is optimal. This study compares objective classifier feedback (the objectively correct response) with optimal classifier feedback (the optimal classifier's response) when payoffs or base rates are unequal. It provides a critical test of Maddox and Bohil's (1998) competition between reward and accuracy maximization (COBRA) hypothesis, comparing it with a competition between reward and probability matching (COBRM) hypothesis and a competition between reward and equal response frequencies (COBRE) hypothesis. The COBRA prediction that optimal classifier feedback leads to better decision criterion learning relative to objective classifier feedback when payoffs are unequal, but not when base rates are unequal, was supported. Model-based analyses suggested that the weight placed on accuracy was reduced for optimal classifier feedback relative to objective classifier feedback. In addition, delayed feedback affected learning of the reward-maximizing decision criterion.

  10. Pulmonary nodule detection using a cascaded SVM classifier

    NASA Astrophysics Data System (ADS)

    Bergtholdt, Martin; Wiemker, Rafael; Klinder, Tobias

    2016-03-01

    Automatic detection of lung nodules from chest CT has been researched intensively over the last decades, resulting also in several commercial products. However, solutions are adopted only slowly into daily clinical routine, as many current CAD systems still potentially miss true nodules while at the same time generating too many false positives (FP). While many earlier approaches had to rely on rather few cases for development, larger databases have now become available and can be used for algorithmic development. In this paper, we address the problem of lung nodule detection via a cascaded SVM classifier. The idea is to sequentially perform two classification tasks in order to select, from an extremely large pool of potential candidates, the few most likely ones. As the initial pool is allowed to contain thousands of candidates, very loose criteria can be applied during this pre-selection. In this way, the chances that a true nodule is falsely rejected as a candidate are reduced significantly. The final algorithm is trained and tested on the full LIDC/IDRI database. Comparison is done against two previously published CAD systems. Overall, the algorithm achieved a sensitivity of 0.859 at 2.5 FP/volume, whereas the other two achieved sensitivity values of 0.321 and 0.625, respectively. On low-dose data sets, only a slight increase in the number of FP/volume was observed, while the sensitivity was not affected.
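
    A rough sketch of the cascade idea under stated assumptions: stage 1 uses a subset of features and a deliberately loose threshold so that few true nodules are rejected, and stage 2 re-scores only the survivors; the features, thresholds and class balance are synthetic placeholders, not the paper's descriptors.

      # Sketch: two-stage SVM cascade; stage 1 sees only a cheap feature subset and
      # uses a loose threshold, stage 2 re-scores the survivors with all features.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=2000, n_features=15, n_informative=5,
                                 n_redundant=2, weights=[0.95, 0.05],
                                 shuffle=False, random_state=0)   # few "nodules"
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

      stage1 = SVC(class_weight="balanced").fit(X_tr[:, :5], y_tr)  # cheap features
      stage2 = SVC(class_weight="balanced").fit(X_tr, y_tr)         # full feature set

      keep = stage1.decision_function(X_te[:, :5]) > -0.5           # loose pre-selection
      final = np.zeros_like(y_te)
      final[keep] = (stage2.decision_function(X_te[keep]) > 0).astype(int)

      tp = np.sum((final == 1) & (y_te == 1))
      print("candidates kept by stage 1: %d / %d" % (keep.sum(), len(y_te)))
      print("sensitivity %.3f, false positives %d"
            % (tp / max(y_te.sum(), 1), np.sum((final == 1) & (y_te == 0))))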

  11. A nonparametric classifier for unsegmented text

    NASA Astrophysics Data System (ADS)

    Nagy, George; Joshi, Ashutosh; Krishnamoorthy, Mukkai; Lin, Yu; Lopresti, Daniel P.; Mehta, Shashank; Seth, Sharad

    2003-12-01

    Symbolic Indirect Correlation (SIC) is a new classification method for unsegmented patterns. SIC requires two levels of comparisons. First, the feature sequences from an unknown query signal and a known multi-pattern reference signal are matched. Then, the order of the matched features is compared with the order of matches between every lexicon symbol-string and the reference string in the lexical domain. The query is classified according to the best matching lexicon string in the second comparison. Accuracy increases as classified feature-and-symbol strings are added to the reference string.

  12. A survey of decision tree classifier methodology

    NASA Technical Reports Server (NTRS)

    Safavian, S. Rasoul; Landgrebe, David

    1990-01-01

    Decision Tree Classifiers (DTC's) are used successfully in many diverse areas such as radar signal classification, character recognition, remote sensing, medical diagnosis, expert systems, and speech recognition. Perhaps the most important feature of DTC's is their capability to break down a complex decision-making process into a collection of simpler decisions, thus providing a solution which is often easier to interpret. A survey of current methods for DTC design and the various related issues is presented. After considering potential advantages of DTC's over single-stage classifiers, the subjects of tree structure design, feature selection at each internal node, and decision and search strategies are discussed.

  13. Spectral classifier design with ensemble classifiers and misclassification-rejection: application to elastic-scattering spectroscopy for detection of colonic neoplasia

    PubMed Central

    Rodriguez-Diaz, Eladio; Castanon, David A.; Singh, Satish K.; Bigio, Irving J.

    2011-01-01

    Optical spectroscopy has shown potential as a real-time, in vivo, diagnostic tool for identifying neoplasia during endoscopy. We present the development of a diagnostic algorithm to classify elastic-scattering spectroscopy (ESS) spectra as either neoplastic or non-neoplastic. The algorithm is based on pattern recognition methods, including ensemble classifiers, in which members of the ensemble are trained on different regions of the ESS spectrum, and misclassification-rejection, where the algorithm identifies and refrains from classifying samples that are at higher risk of being misclassified. These “rejected” samples can be reexamined by simply repositioning the probe to obtain additional optical readings or ultimately by sending the polyp for histopathological assessment, as per standard practice. Prospective validation using separate training and testing sets results in a baseline performance of sensitivity = .83, specificity = .79, using the standard framework of feature extraction (principal component analysis) followed by classification (with linear support vector machines). With the developed algorithm, performance improves to Se ∼ 0.90, Sp ∼ 0.90, at a cost of rejecting 20–33% of the samples. These results are on par with a panel of expert pathologists. For colonoscopic prevention of colorectal cancer, our system could reduce biopsy risk and cost, obviate retrieval of non-neoplastic polyps, decrease procedure time, and improve assessment of cancer risk. PMID:21721830

  14. Spectral classifier design with ensemble classifiers and misclassification-rejection: application to elastic-scattering spectroscopy for detection of colonic neoplasia

    NASA Astrophysics Data System (ADS)

    Rodriguez-Diaz, Eladio; Castanon, David A.; Singh, Satish K.; Bigio, Irving J.

    2011-06-01

    Optical spectroscopy has shown potential as a real-time, in vivo, diagnostic tool for identifying neoplasia during endoscopy. We present the development of a diagnostic algorithm to classify elastic-scattering spectroscopy (ESS) spectra as either neoplastic or non-neoplastic. The algorithm is based on pattern recognition methods, including ensemble classifiers, in which members of the ensemble are trained on different regions of the ESS spectrum, and misclassification-rejection, where the algorithm identifies and refrains from classifying samples that are at higher risk of being misclassified. These ``rejected'' samples can be reexamined by simply repositioning the probe to obtain additional optical readings or ultimately by sending the polyp for histopathological assessment, as per standard practice. Prospective validation using separate training and testing sets results in a baseline performance of sensitivity = .83, specificity = .79, using the standard framework of feature extraction (principal component analysis) followed by classification (with linear support vector machines). With the developed algorithm, performance improves to Se ~ 0.90, Sp ~ 0.90, at a cost of rejecting 20-33% of the samples. These results are on par with a panel of expert pathologists. For colonoscopic prevention of colorectal cancer, our system could reduce biopsy risk and cost, obviate retrieval of non-neoplastic polyps, decrease procedure time, and improve assessment of cancer risk.
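
    A minimal sketch of the misclassification-rejection idea described in the two records above, assuming synthetic two-class data and an arbitrary rejection band (the published ensemble over spectral regions is not reproduced): samples whose decision score falls close to the boundary are withheld from classification, which typically raises accuracy on the accepted samples.

        # Reject-option sketch: abstain when the linear SVM score is near the boundary.
        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        X = rng.normal(size=(600, 20))                                 # stand-in "spectra"
        y = (X[:, :5].sum(axis=1) + rng.normal(scale=2.0, size=600) > 0).astype(int)
        Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

        clf = LinearSVC(C=1.0, dual=False).fit(Xtr, ytr)
        margin = clf.decision_function(Xte)

        reject = np.abs(margin) < 0.25                                 # assumed rejection band
        kept = ~reject
        acc_all = (clf.predict(Xte) == yte).mean()
        acc_kept = (clf.predict(Xte[kept]) == yte[kept]).mean()
        print(f"rejected {reject.mean():.0%}; accuracy {acc_all:.2f} -> {acc_kept:.2f} on accepted samples")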

  15. X-ray scatter correction in breast tomosynthesis with a precomputed scatter map library

    PubMed Central

    Feng, Steve Si Jia; D’Orsi, Carl J.; Newell, Mary S.; Seidel, Rebecca L.; Patel, Bhavika; Sechopoulos, Ioannis

    2014-01-01

    Purpose: To develop and evaluate the impact on lesion conspicuity of a software-based x-ray scatter correction algorithm for digital breast tomosynthesis (DBT) imaging into which a precomputed library of x-ray scatter maps is incorporated. Methods: A previously developed model of compressed breast shapes undergoing mammography based on principal component analysis (PCA) was used to assemble 540 simulated breast volumes, of different shapes and sizes, undergoing DBT. A Monte Carlo (MC) simulation was used to generate the cranio-caudal (CC) view DBT x-ray scatter maps of these volumes, which were then assembled into a library. This library was incorporated into a previously developed software-based x-ray scatter correction, and the performance of this improved algorithm was evaluated with an observer study of 40 patient cases previously classified as BI-RADS® 4 or 5, evenly divided between mass and microcalcification cases. Observers were presented with both the original images and the scatter corrected (SC) images side by side and asked to indicate their preference, on a scale from −5 to +5, in terms of lesion conspicuity and quality of diagnostic features. Scores were normalized such that a negative score indicates a preference for the original images, and a positive score indicates a preference for the SC images. Results: The scatter map library removes the time-intensive MC simulation from the application of the scatter correction algorithm. While only one in four observers preferred the SC DBT images as a whole (combined mean score = 0.169 ± 0.37, p > 0.39), all observers exhibited a preference for the SC images when the lesion examined was a mass (1.06 ± 0.45, p < 0.0001). When the lesion examined consisted of microcalcification clusters, the observers exhibited a preference for the uncorrected images (−0.725 ± 0.51, p < 0.009). Conclusions: The incorporation of the x-ray scatter map library into the scatter correction algorithm improves the efficiency

  16. A hybrid classifier for automated radiologic diagnosis: preliminary results and clinical applications.

    PubMed

    Herskovits, E

    1990-05-01

    We describe the design, implementation, and preliminary evaluation of a computer system to aid clinicians in the interpretation of cranial magnetic-resonance (MR) images. The system classifies normal and pathologic tissues in a test set of MR scans with high accuracy. It also provides a simple, rapid means whereby an unassisted expert may reliably label an image with his best judgment of its histologic composition, yielding a gold-standard image; this step facilitates objective evaluation of classifier performance. This system consists of a preprocessing module; a semiautomatic, reliable procedure for obtaining objective estimates of an expert's opinion of an image's tissue composition; a classification module based on a combination of the maximum-likelihood (ML) classifier and the isodata unsupervised-clustering algorithm; and an evaluation module based on confusion-matrix generation. The algorithms for classifier evaluation and gold-standard acquisition are advances over previous methods. Furthermore, the combination of a clustering algorithm and a statistical classifier provides advantages not found in systems using either method alone.

  17. Shape and Function in Hmong Classifier Choices

    ERIC Educational Resources Information Center

    Sakuragi, Toshiyuki; Fuller, Judith W.

    2013-01-01

    This study examined classifiers in the Hmong language with a particular focus on gaining insights into the underlying cognitive process of categorization. Forty-three Hmong speakers participated in three experiments. In the first experiment, designed to verify the previously postulated configurational (saliently one-dimensional, saliently…

  18. Classifying and quantifying basins of attraction

    SciTech Connect

    Sprott, J. C.; Xiong, Anda

    2015-08-15

    A scheme is proposed to classify the basins for attractors of dynamical systems in arbitrary dimensions. There are four basic classes depending on their size and extent, and each class can be further quantified to facilitate comparisons. The calculation uses a Monte Carlo method and is applied to numerous common dissipative chaotic maps and flows in various dimensions.

  19. The Community; A Classified, Annotated Bibliography.

    ERIC Educational Resources Information Center

    Payne, Raymond, Comp.; Bailey, Wilfrid C., Comp.

    This is a classified retrospective bibliography of 839 items on the community (about 140 are annotated) from rural sociology and agricultural economics departments and sections, agricultural experiment stations, extension services, and related agencies. Items are categorized as follows: bibliography and reference lists; location and delineation of…

  20. Classifying the Context Clues in Children's Text

    ERIC Educational Resources Information Center

    Dowds, Susan J. Parault; Haverback, Heather Rogers; Parkinson, Meghan M.

    2016-01-01

    This study aimed to determine which types of context clues exist in children's texts and whether it is possible for experts to identify reliably those clues. Three experienced coders used Ames' clue set as a foundation for a system to classify context clues in children's text. Findings showed that the adjustments to Ames' system resulted in 15…

  1. 32 CFR 651.13 - Classified actions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... ENVIRONMENTAL ANALYSIS OF ARMY ACTIONS (AR 200-2) National Environmental Policy Act and the Decision Process..., AR 380-5 (Department of the Army Information Security Program) will be followed. (b) Classification... makers in accordance with AR 380-5. (d) When classified information is such an integral part of...

  2. 32 CFR 651.13 - Classified actions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... ENVIRONMENTAL ANALYSIS OF ARMY ACTIONS (AR 200-2) National Environmental Policy Act and the Decision Process..., AR 380-5 (Department of the Army Information Security Program) will be followed. (b) Classification... makers in accordance with AR 380-5. (d) When classified information is such an integral part of...

  3. 32 CFR 651.13 - Classified actions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... ENVIRONMENTAL ANALYSIS OF ARMY ACTIONS (AR 200-2) National Environmental Policy Act and the Decision Process..., AR 380-5 (Department of the Army Information Security Program) will be followed. (b) Classification... makers in accordance with AR 380-5. (d) When classified information is such an integral part of...

  4. 32 CFR 651.13 - Classified actions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... ENVIRONMENTAL ANALYSIS OF ARMY ACTIONS (AR 200-2) National Environmental Policy Act and the Decision Process..., AR 380-5 (Department of the Army Information Security Program) will be followed. (b) Classification... makers in accordance with AR 380-5. (d) When classified information is such an integral part of...

  5. 32 CFR 651.13 - Classified actions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... ENVIRONMENTAL ANALYSIS OF ARMY ACTIONS (AR 200-2) National Environmental Policy Act and the Decision Process..., AR 380-5 (Department of the Army Information Security Program) will be followed. (b) Classification... makers in accordance with AR 380-5. (d) When classified information is such an integral part of...

  6. A Proposed System for Classifying Research Universities.

    ERIC Educational Resources Information Center

    Anderson, Robert C.

    A system of classifying research universities is proposed based on quantitative criteria. Data from several studies were used to develop a list of 57 leading U.S. research universities. The Carnegie Commission's 1973 and 1976 classification of "Research Universities I" and the Academy for Educational Development's listing are presented, along with…

  7. Supervised segmentation of MRI brain images using combination of multiple classifiers.

    PubMed

    Ahmadvand, Ali; Sharififar, Mohammad; Daliri, Mohammad Reza

    2015-06-01

    Segmentation of different tissues is one of the initial and most critical tasks in many areas of medical image processing. Manual segmentation of brain images obtained from magnetic resonance imaging is time consuming, so automatic image segmentation is widely used in this area. Ensemble-based algorithms are reliable and well-generalized methods for classification. In this paper, a supervised method named dynamic classifier selection-dynamic local training local Tanimoto index, a member of the combination of multiple classifiers (CMC) family of methods, is proposed. The proposed method uses dynamic local training sets instead of a single static one and adapts the classifier ranking criterion for brain tissue classification. A selection policy for combining the different decisions is implemented, and the K-nearest neighbor algorithm is used to find the best local classifier. Experimental results show that the proposed method classifies the real datasets of the Internet Brain Segmentation Repository better than every single classifier in the ensemble and yields a significant improvement over other CMC methods. PMID:26130310
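
    A minimal sketch of the generic dynamic-classifier-selection idea this record builds on, assuming synthetic data rather than MRI tissue maps and a plain local-accuracy criterion rather than the published Tanimoto-index ranking: each test sample is labeled by whichever base classifier is most accurate on its k nearest training neighbors.

        # Dynamic classifier selection (local-accuracy flavor) sketch.
        import numpy as np
        from sklearn.neighbors import NearestNeighbors
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.naive_bayes import GaussianNB
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.datasets import make_classification

        X, y = make_classification(n_samples=800, n_features=10, random_state=0)
        Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

        bases = [DecisionTreeClassifier(max_depth=5, random_state=0),
                 GaussianNB(),
                 LogisticRegression(max_iter=1000)]
        for b in bases:
            b.fit(Xtr, ytr)

        nn = NearestNeighbors(n_neighbors=7).fit(Xtr)
        _, idx = nn.kneighbors(Xte)

        preds = np.empty(len(Xte), dtype=int)
        for i, neigh in enumerate(idx):
            # local accuracy of each base classifier on the neighborhood
            local_acc = [(b.predict(Xtr[neigh]) == ytr[neigh]).mean() for b in bases]
            best = bases[int(np.argmax(local_acc))]
            preds[i] = best.predict(Xte[i:i + 1])[0]

        print("dynamic-selection accuracy:", (preds == yte).mean())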

  8. Visual Empirical Region of Influence (VERI) Pattern Recognition Algorithms

    2002-05-01

    We developed new pattern recognition (PR) algorithms based on a human visual perception model. We named these algorithms Visual Empirical Region of Influence (VERI) algorithms. To compare the new algorithm's effectiveness against other PR algorithms, we benchmarked their clustering capabilities with a standard set of two-dimensional data that is well known in the PR community. The VERI algorithm succeeded in clustering all the data correctly. No existing algorithm had previously clustered all the patterns in the data set successfully. The commands to execute VERI algorithms are quite difficult to master when executed from a DOS command line. The algorithm requires several parameters to operate correctly. From our own experience we realized that if we wanted to provide a new data analysis tool to the PR community, we would have to make the tool powerful, yet easy and intuitive to use. That was our motivation for developing graphical user interfaces (GUI's) to the VERI algorithms. We developed GUI's to control the VERI algorithm in a single-pass mode and in an optimization mode. We also developed a visualization technique that allows users to graphically animate and visually inspect multi-dimensional data after it has been classified by the VERI algorithms. The visualization package is integrated into the single-pass interface. Both the single-pass interface and the optimization interface are part of the PR software package we have developed and make available to other users. The single-pass mode only finds PR results for the sets of features in the data set that are manually requested by the user. The optimization mode uses a brute-force method of searching through the combinations of features in a data set for features that produce

  9. Comparison of wheat classification accuracy using different classifiers of the image-100 system

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Chen, S. C.; Moreira, M. A.; Delima, A. M.

    1981-01-01

    Classification results using single-cell and multi-cell signature acquisition options, a point-by-point Gaussian maximum-likelihood classifier, and K-means clustering of the Image-100 system are presented. Conclusions reached are that: a better indication of correct classification can be provided by using a test area which contains various cover types of the study area; classification accuracy should be evaluated considering both the percentages of correct classification and error of commission; supervised classification approaches are better than K-means clustering; Gaussian distribution maximum likelihood classifier is better than Single-cell and Multi-cell Signature Acquisition Options of the Image-100 system; and in order to obtain a high classification accuracy in a large and heterogeneous crop area, using Gaussian maximum-likelihood classifier, homogeneous spectral subclasses of the study crop should be created to derive training statistics.
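
    A minimal sketch of a point-by-point Gaussian maximum-likelihood classifier of the kind compared above, assuming made-up per-class training statistics for a few spectral bands rather than Image-100 output: each pixel vector is assigned to the class whose Gaussian model gives it the highest log-likelihood.

        # Gaussian maximum-likelihood classification sketch.
        import numpy as np
        from scipy.stats import multivariate_normal

        rng = np.random.default_rng(2)
        # Hypothetical "training statistics" for three cover classes in 4 spectral bands.
        means = [rng.normal(loc=m, size=4) for m in (0.0, 2.0, 4.0)]
        covs = [np.eye(4) * s for s in (0.5, 1.0, 0.8)]

        def classify(pixels, means, covs):
            # log-likelihood of every pixel under every class model
            loglik = np.column_stack([
                multivariate_normal(mean=m, cov=c).logpdf(pixels)
                for m, c in zip(means, covs)
            ])
            return loglik.argmax(axis=1)

        pixels = rng.normal(loc=2.0, scale=1.5, size=(10, 4))   # pixel vectors to label
        print(classify(pixels, means, covs))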

  10. Use of genetic algorithm for the selection of EEG features

    NASA Astrophysics Data System (ADS)

    Asvestas, P.; Korda, A.; Kostopoulos, S.; Karanasiou, I.; Ouzounoglou, A.; Sidiropoulos, K.; Ventouras, E.; Matsopoulos, G.

    2015-09-01

    Genetic Algorithm (GA) is a popular optimization technique that can detect the global optimum of a multivariable function containing several local optima. GA has been widely used in the field of biomedical informatics, especially in the context of designing decision support systems that classify biomedical signals or images into classes of interest. The aim of this paper is to present a methodology, based on GA, for the selection of the optimal subset of features that can be used for the efficient classification of Event Related Potentials (ERPs), which are recorded during the observation of correct or incorrect actions. In our experiment, ERP recordings were acquired from sixteen (16) healthy volunteers who observed correct or incorrect actions of other subjects. The brain electrical activity was recorded at 47 locations on the scalp. The GA was formulated as a combinatorial optimizer for the selection of the combination of electrodes that maximizes the performance of the Fuzzy C Means (FCM) classification algorithm. In particular, during the evolution of the GA, for each candidate combination of electrodes, the well-known (Σ, Φ, Ω) features were calculated and were evaluated by means of the FCM method. The proposed methodology provided a combination of 8 electrodes, with classification accuracy 93.8%. Thus, GA can be the basis for the selection of features that discriminate ERP recordings of observations of correct or incorrect actions.
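
    A minimal sketch of GA-based feature-subset selection in the spirit of the record above. The fitness function here is a stand-in (KMeans clustering quality on the selected columns) rather than the paper's FCM classification accuracy on ERP features, and the data, population size, and operators are all illustrative assumptions.

        # Genetic-algorithm feature-subset selection sketch with a surrogate fitness.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics import silhouette_score
        from sklearn.datasets import make_blobs

        X, _ = make_blobs(n_samples=200, n_features=12, centers=2, random_state=0)
        rng = np.random.default_rng(0)
        n_feat, pop_size, n_gen = X.shape[1], 20, 15

        def fitness(mask):
            if mask.sum() == 0:
                return -1.0
            labels = KMeans(n_clusters=2, n_init=5, random_state=0).fit_predict(X[:, mask])
            return silhouette_score(X[:, mask], labels)   # stand-in for FCM accuracy

        pop = rng.integers(0, 2, size=(pop_size, n_feat)).astype(bool)
        for _ in range(n_gen):
            scores = np.array([fitness(m) for m in pop])
            parents = pop[np.argsort(scores)[-pop_size // 2:]]       # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = parents[rng.integers(len(parents), size=2)]
                cut = rng.integers(1, n_feat)
                child = np.concatenate([a[:cut], b[cut:]])           # one-point crossover
                child ^= rng.random(n_feat) < 0.05                   # bit-flip mutation
                children.append(child)
            pop = np.vstack([parents, np.array(children)])

        best = pop[np.argmax([fitness(m) for m in pop])]
        print("selected features:", np.flatnonzero(best))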

  11. Disassembly and Sanitization of Classified Matter

    SciTech Connect

    Stockham, Dwight J.; Saad, Max P.

    2008-01-15

    The Disassembly Sanitization Operation (DSO) process was implemented to support weapon disassembly and disposition by using recycling and waste minimization measures. This process was initiated by treaty agreements and reconfigurations within both the DOD and DOE Complexes. The DOE is faced with disassembling and disposing of a huge inventory of retired weapons, components, training equipment, spare parts, weapon maintenance equipment, and associated material. In addition, regulations have caused a dramatic increase in the need for information required to support the handling and disposition of these parts and materials. In the past, huge inventories of classified weapon components were required to have long-term storage at Sandia and at many other locations throughout the DOE Complex. These materials are placed in onsite storage units due to classification issues, and they may also contain radiological and/or hazardous components. Since no disposal options exist for this material, the only choice was long-term storage. Long-term storage is costly and somewhat problematic, requiring a secured storage area, monitoring, auditing, and presenting the potential for loss or theft of the material. Overall recycling rates for materials sent through the DSO process have enabled 70 to 80% of these components to be recycled. These components are made of high-quality materials, and once this material has been sanitized, the demand for the component metals for recycling efforts is very high. The DSO process for NGPF classified components established the credibility of this technique for addressing the long-term storage requirements of the classified weapons component inventory. The success of this application has generated interest from other Sandia organizations and other locations throughout the complex. Other organizations are requesting the help of the DSO team, and the DSO is responding to these requests by expanding its scope to include Work-for-Others projects. For example

  12. A New Qualitative Typology to Classify Treading Water Movement Patterns

    PubMed Central

    Schnitzler, Christophe; Button, Chris; Croft, James L.

    2015-01-01

    This study proposes a new qualitative typology that can be used to classify learners treading water into different skill-based categories. To establish the typology, 38 participants were videotaped while treading water and their movement patterns were qualitatively analyzed by two experienced biomechanists. 13 sport science students were then asked to classify eight of the original participants after watching a brief tutorial video about how to use the typology. To examine intra-rater consistency, each participant was presented in a random order three times. Generalizability (G) and Decision (D) studies were performed to estimate the variance attributable to rater, occasion, video, and the interactions between them, and to determine the reliability of the raters' answers. A typology of five general classes of coordination was defined amongst the original 38 participants. The G-study showed an accurate and reliable assessment of different pattern types, with a percentage of correct classification of 80.1%, an overall Fleiss' Kappa coefficient K = 0.6, and an overall generalizability φ coefficient of 0.99. This study showed that the new typology proposed to characterize the behaviour of individuals treading water was both accurate and highly reliable. Movement pattern classification using the typology might help practitioners distinguish between different skill-based behaviours and potentially guide instruction of key aquatic survival skills. Key points: Treading water behavioral adaptation can be classified along two dimensions: the type of force created (drag vs lift) and the frequency of the force impulses. Based on these concepts, 9 behavioral types can be identified, providing the basis for a typology. Provided with macroscopic descriptors (movements of the limb relative to the water, and synchronous vs asynchronous movements), analysts can characterize behavioral type accurately and reliably. PMID:26336339

  13. Novel hybrid classified vector quantization using discrete cosine transform for image compression

    NASA Astrophysics Data System (ADS)

    Al-Fayadh, Ali; Hussain, Abir Jaafar; Lisboa, Paulo; Al-Jumeily, Dhiya

    2009-04-01

    We present a novel image compression technique using a classified vector quantizer and singular value decomposition for the efficient representation of still images. The proposed method is called hybrid classified vector quantization. It involves a simple but efficient classifier-based gradient method in the spatial domain, which employs only one threshold to determine the class of the input image block, and uses three AC coefficients of discrete cosine transform coefficients to determine the orientation of the block without employing any threshold. The proposed technique is benchmarked with each of the standard vector quantizers generated using the k-means algorithm, standard classified vector quantizer schemes, and JPEG-2000. Simulation results indicate that the proposed approach alleviates edge degradation and can reconstruct good visual quality images with higher peak signal-to-noise ratio than the benchmarked techniques, or be competitive with them.
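
    A minimal sketch of the block-orientation step mentioned above, assuming an 8x8 block, three low-frequency AC coefficients of its 2-D DCT, and a simplified comparison rule that stands in for the published one.

        # DCT-based block orientation sketch.
        import numpy as np
        from scipy.fftpack import dct

        def dct2(block):
            # 2-D type-II DCT applied along both axes
            return dct(dct(block.T, norm="ortho").T, norm="ortho")

        def classify_block(block):
            c = dct2(block.astype(float))
            ac_col = abs(c[0, 1])    # variation across columns -> vertical edge content
            ac_row = abs(c[1, 0])    # variation across rows    -> horizontal edge content
            ac_diag = abs(c[1, 1])   # mixed variation          -> diagonal content
            if max(ac_col, ac_row, ac_diag) < 1e-3:
                return "smooth"
            if ac_col >= ac_row and ac_col >= ac_diag:
                return "vertical edge"
            if ac_row >= ac_col and ac_row >= ac_diag:
                return "horizontal edge"
            return "diagonal"

        block = np.tile(np.linspace(0, 255, 8), (8, 1))   # intensity ramp from left to right
        print(classify_block(block))                      # -> "vertical edge"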

  14. Learning accurate and concise naïve Bayes classifiers from attribute value taxonomies and data

    PubMed Central

    Kang, D.-K.; Silvescu, A.; Honavar, V.

    2009-01-01

    In many application domains, there is a need for learning algorithms that can effectively exploit attribute value taxonomies (AVT)—hierarchical groupings of attribute values—to learn compact, comprehensible and accurate classifiers from data—including data that are partially specified. This paper describes AVT-NBL, a natural generalization of the naïve Bayes learner (NBL), for learning classifiers from AVT and data. Our experimental results show that AVT-NBL is able to generate classifiers that are substantially more compact and more accurate than those produced by NBL on a broad range of data sets with different percentages of partially specified values. We also show that AVT-NBL is more efficient in its use of training data: AVT-NBL produces classifiers that outperform those produced by NBL using substantially fewer training examples. PMID:20351793

  15. Motion correction in MRI of the brain

    NASA Astrophysics Data System (ADS)

    Godenschweger, F.; Kägebein, U.; Stucht, D.; Yarach, U.; Sciarra, A.; Yakupov, R.; Lüsebrink, F.; Schulze, P.; Speck, O.

    2016-03-01

    Subject motion in MRI is a relevant problem in the daily clinical routine as well as in scientific studies. Since the beginning of clinical use of MRI, many research groups have developed methods to suppress or correct motion artefacts. This review focuses on rigid body motion correction of head and brain MRI and its application in diagnosis and research. It explains the sources and types of motion and related artefacts, classifies and describes existing techniques for motion detection, compensation and correction and lists established and experimental approaches. Retrospective motion correction modifies the MR image data during the reconstruction, while prospective motion correction performs an adaptive update of the data acquisition. Differences, benefits and drawbacks of different motion correction methods are discussed.

  16. Motion correction in MRI of the brain.

    PubMed

    Godenschweger, F; Kägebein, U; Stucht, D; Yarach, U; Sciarra, A; Yakupov, R; Lüsebrink, F; Schulze, P; Speck, O

    2016-03-01

    Subject motion in MRI is a relevant problem in the daily clinical routine as well as in scientific studies. Since the beginning of clinical use of MRI, many research groups have developed methods to suppress or correct motion artefacts. This review focuses on rigid body motion correction of head and brain MRI and its application in diagnosis and research. It explains the sources and types of motion and related artefacts, classifies and describes existing techniques for motion detection, compensation and correction and lists established and experimental approaches. Retrospective motion correction modifies the MR image data during the reconstruction, while prospective motion correction performs an adaptive update of the data acquisition. Differences, benefits and drawbacks of different motion correction methods are discussed.

  17. Integrating language models into classifiers for BCI communication: a review

    NASA Astrophysics Data System (ADS)

    Speier, W.; Arnold, C.; Pouratian, N.

    2016-06-01

    Objective. The present review systematically examines the integration of language models to improve classifier performance in brain–computer interface (BCI) communication systems. Approach. The domain of natural language has been studied extensively in linguistics and has been used in the natural language processing field in applications including information extraction, machine translation, and speech recognition. While these methods have been used for years in traditional augmentative and assistive communication devices, information about the output domain has largely been ignored in BCI communication systems. Over the last few years, BCI communication systems have started to leverage this information through the inclusion of language models. Main results. Although this movement began only recently, studies have already shown the potential of language integration in BCI communication and it has become a growing field in BCI research. BCI communication systems using language models in their classifiers have progressed down several parallel paths, including: word completion; signal classification; integration of process models; dynamic stopping; unsupervised learning; error correction; and evaluation. Significance. Each of these methods has shown significant progress, but they have largely been addressed separately. Combining these methods could exploit the full potential of language models, yielding further performance improvements. This integration should be a priority as the field works to create a BCI system that meets the needs of the amyotrophic lateral sclerosis population.

  18. Integrating language models into classifiers for BCI communication: a review

    NASA Astrophysics Data System (ADS)

    Speier, W.; Arnold, C.; Pouratian, N.

    2016-06-01

    Objective. The present review systematically examines the integration of language models to improve classifier performance in brain-computer interface (BCI) communication systems. Approach. The domain of natural language has been studied extensively in linguistics and has been used in the natural language processing field in applications including information extraction, machine translation, and speech recognition. While these methods have been used for years in traditional augmentative and assistive communication devices, information about the output domain has largely been ignored in BCI communication systems. Over the last few years, BCI communication systems have started to leverage this information through the inclusion of language models. Main results. Although this movement began only recently, studies have already shown the potential of language integration in BCI communication and it has become a growing field in BCI research. BCI communication systems using language models in their classifiers have progressed down several parallel paths, including: word completion; signal classification; integration of process models; dynamic stopping; unsupervised learning; error correction; and evaluation. Significance. Each of these methods has shown significant progress, but they have largely been addressed separately. Combining these methods could exploit the full potential of language models, yielding further performance improvements. This integration should be a priority as the field works to create a BCI system that meets the needs of the amyotrophic lateral sclerosis population.

  19. Application of fusion algorithms for computer aided detection and classification of bottom mines to synthetic aperture sonar test data

    NASA Astrophysics Data System (ADS)

    Ciany, Charles M.; Zurawski, William C.

    2006-05-01

    Over the past several years, Raytheon Company has adapted its Computer Aided Detection/Computer-Aided Classification (CAD/CAC) algorithm to process side-scan sonar imagery taken in both the Very Shallow Water (VSW) and Shallow Water (SW) operating environments. This paper describes the further adaptation of this CAD/CAC algorithm to process Synthetic Aperture Sonar (SAS) image data taken by an Autonomous Underwater Vehicle (AUV). The tuning of the CAD/CAC algorithm for the vehicle's sonar is described, the resulting classifier performance is presented, and the fusion of the classifier outputs with those of another CAD/CAC processor is evaluated. The fusion algorithm accepts the classification confidence levels and associated contact locations from the different CAD/CAC algorithms, clusters the contacts based on the distance between their locations, and then declares a valid target when a clustered contact passes a prescribed fusion criterion. Three different fusion criteria are evaluated: the first based on thresholding the sum of the confidence factors for the clustered contacts, the second based on simple binary combinations of the multiple CAD/CAC processor outputs, and the third based on the Fisher Discriminant. The resulting performance of the three fusion algorithms is compared, and the overall performance benefit of a significant reduction of false alarms at high correct classification probabilities is quantified.
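
    A minimal sketch of the first fusion criterion described above (thresholding the summed confidence of clustered contacts), assuming made-up contact lists, a 2 m clustering distance, and an arbitrary threshold; the binary-combination and Fisher-discriminant criteria are not shown.

        # Fusion sketch: cluster contacts from two CAD/CAC processors by location,
        # then declare a target when the cluster's summed confidence exceeds a threshold.
        import numpy as np
        from scipy.spatial.distance import cdist

        # (x, y, confidence) contacts from two independent classifiers (hypothetical)
        contacts_a = np.array([[10.0, 12.0, 0.7], [40.0, 41.0, 0.4]])
        contacts_b = np.array([[10.5, 11.6, 0.6], [80.0, 15.0, 0.3]])

        all_contacts = np.vstack([contacts_a, contacts_b])
        dist = cdist(all_contacts[:, :2], all_contacts[:, :2])

        # Greedy clustering: unassigned contacts within 2 m of a seed share its cluster.
        cluster_id = -np.ones(len(all_contacts), dtype=int)
        next_id = 0
        for i in range(len(all_contacts)):
            if cluster_id[i] == -1:
                members = (dist[i] < 2.0) & (cluster_id == -1)
                cluster_id[members] = next_id
                next_id += 1

        for c in range(next_id):
            conf_sum = all_contacts[cluster_id == c, 2].sum()
            if conf_sum > 1.0:                        # assumed fusion threshold
                x, y = all_contacts[cluster_id == c, :2].mean(axis=0)
                print(f"declared target near ({x:.1f}, {y:.1f}), summed confidence {conf_sum:.2f}")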

  20. Semantic Features for Classifying Referring Search Terms

    SciTech Connect

    May, Chandler J.; Henry, Michael J.; McGrath, Liam R.; Bell, Eric B.; Marshall, Eric J.; Gregory, Michelle L.

    2012-05-11

    When an internet user clicks on a result in a search engine, a request is submitted to the destination web server that includes a referrer field containing the search terms given by the user. Using this information, website owners can analyze the search terms leading to their websites to better understand their visitors' needs. This work explores some of the features that can be used for classification-based analysis of such referring search terms. We present initial results for the example task of classifying HTTP requests by country of origin. A system that can accurately predict the country of origin from query text may be a valuable complement to IP lookup methods, which are susceptible to the obfuscation of dereferrers or proxies. We suggest that the addition of semantic features improves classifier performance in this example application. We begin by looking at related work and presenting our approach. After describing initial experiments and results, we discuss paths forward for this work.
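
    A minimal sketch of classification-based analysis of referring search terms, assuming a handful of made-up query strings and country labels and plain character n-gram features (the paper's semantic features are not reproduced here).

        # Toy country-of-origin classification of search terms.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        queries = ["cheap flights to paris", "vols pas chers paris",
                   "wetterbericht berlin", "weather forecast seattle"]
        countries = ["US", "FR", "DE", "US"]

        clf = make_pipeline(
            TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # character n-grams
            LogisticRegression(max_iter=1000),
        )
        clf.fit(queries, countries)
        print(clf.predict(["vol paris pas cher"]))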

  1. Detection of Fundus Lesions Using Classifier Selection

    NASA Astrophysics Data System (ADS)

    Nagayoshi, Hiroto; Hiramatsu, Yoshitaka; Sako, Hiroshi; Himaga, Mitsutoshi; Kato, Satoshi

    A system for detecting fundus lesions caused by diabetic retinopathy from fundus images is being developed. The system can screen the images in advance in order to reduce the inspection workload on doctors. One of the difficulties that must be addressed in completing this system is how to remove false positives (which tend to arise near blood vessels) without decreasing the detection rate of lesions in other areas. To overcome this difficulty, we developed classifier selection according to the position of a candidate lesion, and we introduced new features that can distinguish true lesions from false positives. A system incorporating classifier selection and these new features was tested in experiments using 55 fundus images with some lesions and 223 images without lesions. The results of the experiments confirm the effectiveness of the proposed system, namely, degrees of sensitivity and specificity of 98% and 81%, respectively.

  2. Comparing cosmic web classifiers using information theory

    NASA Astrophysics Data System (ADS)

    Leclercq, Florent; Lavaux, Guilhem; Jasche, Jens; Wandelt, Benjamin

    2016-08-01

    We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative performance of the classifiers T-WEB, DIVA and ORIGAMI for: (i) analyzing the morphology of the cosmic web, (ii) discriminating dark energy models, and (iii) predicting galaxy colors. Our study substantiates a data-supported connection between cosmic web analysis and information theory, and paves the path towards principled design of analysis procedures for the next generation of galaxy surveys. We have made the cosmic web maps, galaxy catalog, and analysis scripts used in this work publicly available.

  3. Classifying Land Cover Using Spectral Signature

    NASA Astrophysics Data System (ADS)

    Alawiye, F. S.

    2012-12-01

    Studying land cover has become increasingly important as countries try to overcome the destruction of wetlands and its impact on local climate, seasonal variation, radiation balance, and deteriorating environmental quality. In this investigation, we have been studying the spectral signatures of the Jamaica Bay wetland area based on remotely sensed satellite input data from LANDSAT TM and ASTER. We applied various remote sensing techniques to generate classified land cover output maps. Our classifiers relied on input from both the remote sensing and in-situ spectral field data. Based upon spectral separability and data collected in the field, supervised and unsupervised classifications were carried out. First results suggest good agreement between the land cover units mapped and those observed in the field.

  4. An Efficient Pattern Matching Algorithm

    NASA Astrophysics Data System (ADS)

    Sleit, Azzam; Almobaideen, Wesam; Baarah, Aladdin H.; Abusitta, Adel H.

    In this study, we present an efficient algorithm for pattern matching based on the combination of hashing and search trees. The proposed solution is classified as an offline algorithm. Although this study demonstrates the merits of the technique for text matching, it can be utilized for various forms of digital data, including images, audio, and video. The performance superiority of the proposed solution is validated analytically and experimentally.
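
    A minimal sketch of an offline hashing-based matcher in the spirit of the record above, assuming fixed-length patterns and a dictionary of position lists standing in for the search-tree component; the parameters and data are illustrative.

        # Offline pattern matching: pre-index every length-m window by a rolling hash.
        from collections import defaultdict

        BASE, MOD = 256, 1_000_000_007

        def build_index(text, m):
            index = defaultdict(list)
            h, power = 0, pow(BASE, m - 1, MOD)
            for i, ch in enumerate(text):
                h = (h * BASE + ord(ch)) % MOD
                if i >= m - 1:
                    index[h].append(i - m + 1)
                    # drop the leading character before sliding the window
                    h = (h - ord(text[i - m + 1]) * power) % MOD
            return index

        def search(text, index, pattern):
            h = 0
            for ch in pattern:
                h = (h * BASE + ord(ch)) % MOD
            # verify candidates to rule out hash collisions
            return [p for p in index.get(h, []) if text[p:p + len(pattern)] == pattern]

        text = "classified actions are classified by classifiers"
        index = build_index(text, m=10)
        print(search(text, index, "classified"))   # -> [0, 23]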

  5. Chromatin States Accurately Classify Cell Differentiation Stages

    PubMed Central

    Larson, Jessica L.; Yuan, Guo-Cheng

    2012-01-01

    Gene expression is controlled by the concerted interactions between transcription factors and chromatin regulators. While recent studies have identified global chromatin state changes across cell-types, it remains unclear to what extent these changes are co-regulated during cell-differentiation. Here we present a comprehensive computational analysis by assembling a large dataset containing genome-wide occupancy information of 5 histone modifications in 27 human cell lines (including 24 normal and 3 cancer cell lines) obtained from the public domain, followed by independent analysis at three different representations. We classified the differentiation stage of a cell-type based on its genome-wide pattern of chromatin states, and found that our method was able to identify normal cell lines with nearly 100% accuracy. We then applied our model to classify the cancer cell lines and found that each can be unequivocally classified as differentiated cells. The differences can be in part explained by the differential activities of three regulatory modules associated with embryonic stem cells. We also found that the “hotspot” genes, whose chromatin states change dynamically in accordance to the differentiation stage, are not randomly distributed across the genome but tend to be embedded in multi-gene chromatin domains, and that specialized gene clusters tend to be embedded in stably occupied domains. PMID:22363642

  6. Unsupervised Pattern Classifier for Abnormality-Scaling of Vibration Features for Helicopter Gearbox Fault Diagnosis

    NASA Technical Reports Server (NTRS)

    Jammu, Vinay B.; Danai, Kourosh; Lewicki, David G.

    1996-01-01

    A new unsupervised pattern classifier is introduced for on-line detection of abnormality in features of vibration that are used for fault diagnosis of helicopter gearboxes. This classifier compares vibration features with their respective normal values and assigns them a value in (0, 1) to reflect their degree of abnormality. Therefore, the salient feature of this classifier is that it does not require feature values associated with faulty cases to identify abnormality. In order to cope with noise and changes in the operating conditions, an adaptation algorithm is incorporated that continually updates the normal values of the features. The proposed classifier is tested using experimental vibration features obtained from an OH-58A main rotor gearbox. The overall performance of this classifier is then evaluated by integrating the abnormality-scaled features for detection of faults. The fault detection results indicate that the performance of this classifier is comparable to the leading unsupervised neural networks: Kohonen's Feature Mapping and Adaptive Resonance Theory (ART2). This is significant considering that the independence of this classifier from fault-related features makes it uniquely suited to abnormality-scaling of vibration features for fault diagnosis.

  7. Improving Classification Performance by Integrating Multiple Classifiers Based on Landsat TM Images: A Primary Study

    NASA Astrophysics Data System (ADS)

    Li, Xuecao; Liu, Xiaoping; Yu, Le; Gong, Peng

    2014-11-01

    Land use/cover change is crucial to many ecological and environmental issues. In this article, we presented a new approach to improve the classification performance of remotely sensed images based on a classifier ensemble scheme, which can be delineated as two procedures, namely ensemble learning and predictions combination. The Bagging algorithm, a widely used ensemble approach, was employed in the first procedure through a bootstrapped sampling scheme to stabilize and improve the performance of the single classifier. Then, in the second stage, predictions of different classifiers are combined through the scheme of Behaviour Knowledge Space (BKS). This classifier ensemble scheme was examined using a Landsat Thematic Mapper (TM) image acquired on 2 January 2009 in Dongguan (China). The experimental results illustrate that the final output (BKS, OA=90.83% and Kappa=0.881) outperformed not only the best single classifier (SVM, OA=88.83% and Kappa=0.8624) but also the Bagging CART classifier (OA=90.26% and Kappa=0.8808), although the improvements vary among them. We think the classifier ensemble scheme can mitigate the limitations of single models.
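
    A minimal sketch of the two-stage scheme described above, assuming synthetic features rather than Landsat TM bands: bagged decision trees provide diverse base decisions, and a Behaviour Knowledge Space table learned on held-out data maps each observed decision tuple to its most frequent true class (with a simple majority-vote fallback for unseen tuples).

        # Bagging + Behaviour Knowledge Space (BKS) combination sketch.
        import numpy as np
        from collections import Counter, defaultdict
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.datasets import make_classification

        X, y = make_classification(n_samples=1500, n_features=12, n_classes=3,
                                   n_informative=6, random_state=0)
        Xtr, Xrest, ytr, yrest = train_test_split(X, y, test_size=0.4, random_state=0)
        Xval, Xte, yval, yte = train_test_split(Xrest, yrest, test_size=0.5, random_state=0)

        rng = np.random.default_rng(0)
        ensemble = []
        for _ in range(5):                                   # bagging: bootstrap resamples
            boot = rng.integers(0, len(Xtr), size=len(Xtr))
            ensemble.append(DecisionTreeClassifier(max_depth=6, random_state=0)
                            .fit(Xtr[boot], ytr[boot]))

        # BKS table: decision tuple -> most frequent true class on the validation set
        table = defaultdict(Counter)
        val_votes = np.column_stack([m.predict(Xval) for m in ensemble])
        for votes, true in zip(map(tuple, val_votes), yval):
            table[votes][true] += 1

        te_votes = np.column_stack([m.predict(Xte) for m in ensemble])
        pred = np.array([
            table[v].most_common(1)[0][0] if v in table else Counter(v).most_common(1)[0][0]
            for v in map(tuple, te_votes)
        ])
        print("BKS ensemble accuracy:", (pred == yte).mean())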

  8. Preliminary investigation on CAD system update: effect of selection of new cases on classifier performance

    NASA Astrophysics Data System (ADS)

    Muramatsu, Chisako; Nishimura, Kohei; Hara, Takeshi; Fujita, Hiroshi

    2013-02-01

    When a computer-aided diagnosis (CAD) system is used in clinical practice, it is desirable that the system is constantly and automatically updated with new cases obtained for performance improvement. In this study, the effect of different case selection methods for the system updates was investigated. For the simulation, the data for classification of benign and malignant masses on mammograms were used. Six image features were used for training three classifiers: linear discriminant analysis (LDA), support vector machine (SVM), and k-nearest neighbors (kNN). Three datasets, including dataset I for initial training of the classifiers, dataset T for intermediate testing and retraining, and dataset E for evaluating the classifiers, were randomly sampled from the database. As a result of intermediate testing, some cases from dataset T were selected to be added to the previous training set in the classifier updates. In each update, cases were selected using 4 methods: selection of (a) correctly classified samples, (b) incorrectly classified samples, (c) marginally classified samples, and (d) random samples. For comparison, system updates using all samples in dataset T were also evaluated. In general, the average areas under the receiver operating characteristic curves (AUCs) were almost unchanged with method (a), whereas AUCs generally degraded with method (b). The AUCs were improved with method (c) and (d), although use of all available cases generally provided the best or nearly best AUCs. In conclusion, CAD systems may be improved by retraining with new cases accumulated during practice.

  9. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  10. Pour une methodologie de l'investigation en phonetique corrective (Towards a Methodology of Investigation into Corrective Phonetics)

    ERIC Educational Resources Information Center

    Intravaia, Pietro

    1976-01-01

    Proposes a scientific methodology for investigation into corrective phonetics, making use of a phonologic roster to classify foreign realizations according to a linguistic pertinence criterion. (Text is in French.) (AM)

  11. Autofocus correction of excessive migration in synthetic aperture radar images.

    SciTech Connect

    Doerry, Armin Walter

    2004-09-01

    When residual range migration due to either real or apparent motion errors exceeds the range resolution, conventional autofocus algorithms fail. A new migration-correction autofocus algorithm has been developed that estimates the migration and applies phase and frequency corrections to properly focus the image.

  12. Testing and Validating Machine Learning Classifiers by Metamorphic Testing.

    PubMed

    Xie, Xiaoyuan; Ho, Joshua W K; Murphy, Christian; Kaiser, Gail; Xu, Baowen; Chen, Tsong Yueh

    2011-04-01

    Machine Learning algorithms have provided core functionality to many application domains, such as bioinformatics, computational linguistics, etc. However, it is difficult to detect faults in such applications because often there is no "test oracle" to verify the correctness of the computed outputs. To help address software quality, in this paper we present a technique for testing the implementations of machine learning classification algorithms which support such applications. Our approach is based on the technique "metamorphic testing", which has been shown to be effective in alleviating the oracle problem. Also presented are a case study on a real-world machine learning application framework and a discussion of how programmers implementing machine learning algorithms can avoid the common pitfalls discovered in our study. We also conduct mutation analysis and cross-validation, which reveal that our method has high effectiveness in killing mutants, and that observing expected cross-validation results alone is not sufficiently effective to detect faults in a supervised classification program. The effectiveness of metamorphic testing is further confirmed by the detection of real faults in a popular open-source classification program.
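
    A minimal sketch of the metamorphic-testing idea applied to a classifier, assuming one simple relation (permuting the order of the training samples must not change any prediction of a kNN classifier); the data and the relation choice are illustrative and do not reproduce the paper's relation set.

        # Metamorphic test: training-set permutation should leave kNN predictions unchanged.
        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.datasets import make_classification

        X, y = make_classification(n_samples=300, n_features=8, random_state=0)
        Xte = X[:20]

        base = KNeighborsClassifier(n_neighbors=5).fit(X, y).predict(Xte)

        rng = np.random.default_rng(0)
        perm = rng.permutation(len(X))
        follow_up = KNeighborsClassifier(n_neighbors=5).fit(X[perm], y[perm]).predict(Xte)

        assert np.array_equal(base, follow_up), "metamorphic relation violated"
        print("permutation relation holds on this test set")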

  13. A new algorithm for attitude-independent magnetometer calibration

    NASA Technical Reports Server (NTRS)

    Alonso, Roberto; Shuster, Malcolm D.

    1994-01-01

    A new algorithm is developed for inflight magnetometer bias determination without knowledge of the attitude. This algorithm combines the fast convergence of a heuristic algorithm currently in use with the correct treatment of the statistics and without discarding data. The algorithm performance is examined using simulated data and compared with previous algorithms.

  14. Using color histograms and SPA-LDA to classify bacteria.

    PubMed

    de Almeida, Valber Elias; da Costa, Gean Bezerra; de Sousa Fernandes, David Douglas; Gonçalves Dias Diniz, Paulo Henrique; Brandão, Deysiane; de Medeiros, Ana Claudia Dantas; Véras, Germano

    2014-09-01

    In this work, a new approach is proposed to verify the differentiating characteristics of five bacteria (Escherichia coli, Enterococcus faecalis, Streptococcus salivarius, Streptococcus oralis, and Staphylococcus aureus) by using digital images obtained with a simple webcam and variable selection by the Successive Projections Algorithm associated with Linear Discriminant Analysis (SPA-LDA). In this sense, color histograms in the red-green-blue (RGB), hue-saturation-value (HSV), and grayscale channels and their combinations were used as input data, and statistically evaluated by using different multivariate classifiers (Soft Independent Modeling by Class Analogy (SIMCA), Principal Component Analysis-Linear Discriminant Analysis (PCA-LDA), Partial Least Squares Discriminant Analysis (PLS-DA) and Successive Projections Algorithm-Linear Discriminant Analysis (SPA-LDA)). The bacteria strains were cultivated in a nutritive blood agar base layer for 24 h by following the Brazilian Pharmacopoeia, maintaining the status of cell growth and the nature of nutrient solutions under the same conditions. The best result in classification was obtained by using RGB and SPA-LDA, which reached 94 and 100 % of classification accuracy in the training and test sets, respectively. This result is extremely positive from the viewpoint of routine clinical analyses, because it avoids bacterial identification based on phenotypic identification of the causative organism using Gram staining, culture, and biochemical proofs. Therefore, the proposed method presents inherent advantages, promoting a simpler, faster, and low-cost alternative for bacterial identification.

  15. Using color histograms and SPA-LDA to classify bacteria.

    PubMed

    de Almeida, Valber Elias; da Costa, Gean Bezerra; de Sousa Fernandes, David Douglas; Gonçalves Dias Diniz, Paulo Henrique; Brandão, Deysiane; de Medeiros, Ana Claudia Dantas; Véras, Germano

    2014-09-01

    In this work, a new approach is proposed to verify the differentiating characteristics of five bacteria (Escherichia coli, Enterococcus faecalis, Streptococcus salivarius, Streptococcus oralis, and Staphylococcus aureus) by using digital images obtained with a simple webcam and variable selection by the Successive Projections Algorithm associated with Linear Discriminant Analysis (SPA-LDA). In this sense, color histograms in the red-green-blue (RGB), hue-saturation-value (HSV), and grayscale channels and their combinations were used as input data, and statistically evaluated by using different multivariate classifiers (Soft Independent Modeling by Class Analogy (SIMCA), Principal Component Analysis-Linear Discriminant Analysis (PCA-LDA), Partial Least Squares Discriminant Analysis (PLS-DA) and Successive Projections Algorithm-Linear Discriminant Analysis (SPA-LDA)). The bacteria strains were cultivated in a nutritive blood agar base layer for 24 h by following the Brazilian Pharmacopoeia, maintaining the status of cell growth and the nature of nutrient solutions under the same conditions. The best result in classification was obtained by using RGB and SPA-LDA, which reached 94 and 100 % of classification accuracy in the training and test sets, respectively. This result is extremely positive from the viewpoint of routine clinical analyses, because it avoids bacterial identification based on phenotypic identification of the causative organism using Gram staining, culture, and biochemical proofs. Therefore, the proposed method presents inherent advantages, promoting a simpler, faster, and low-cost alternative for bacterial identification. PMID:25023972
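
    A minimal sketch of the pipeline idea in the two records above (colour histograms as features, then a discriminant classifier), assuming random arrays in place of webcam images and plain LDA without the SPA variable-selection step.

        # RGB histogram features + linear discriminant analysis sketch.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(0)

        def rgb_histogram(img, bins=8):
            # concatenate per-channel histograms into one feature vector
            return np.concatenate([
                np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
                for c in range(3)
            ])

        # two hypothetical image classes with different colour casts
        imgs_a = [rng.integers(0, 120, size=(64, 64, 3)) for _ in range(20)]
        imgs_b = [rng.integers(100, 256, size=(64, 64, 3)) for _ in range(20)]
        X = np.array([rgb_histogram(im) for im in imgs_a + imgs_b])
        y = np.array([0] * 20 + [1] * 20)

        clf = LinearDiscriminantAnalysis().fit(X, y)
        print("training accuracy:", clf.score(X, y))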

  16. Harmony Search Algorithm for Word Sense Disambiguation

    PubMed Central

    Abed, Saad Adnan; Tiun, Sabrina; Omar, Nazlia

    2015-01-01

    Word Sense Disambiguation (WSD) is the task of determining which sense of an ambiguous word (word with multiple meanings) is chosen in a particular use of that word, by considering its context. A sentence is considered ambiguous if it contains ambiguous word(s). Practically, any sentence that has been classified as ambiguous usually has multiple interpretations, but just one of them presents the correct interpretation. We propose an unsupervised method that exploits knowledge based approaches for word sense disambiguation using Harmony Search Algorithm (HSA) based on a Stanford dependencies generator (HSDG). The role of the dependency generator is to parse sentences to obtain their dependency relations. Whereas, the goal of using the HSA is to maximize the overall semantic similarity of the set of parsed words. HSA invokes a combination of semantic similarity and relatedness measurements, i.e., Jiang and Conrath (jcn) and an adapted Lesk algorithm, to perform the HSA fitness function. Our proposed method was experimented on benchmark datasets, which yielded results comparable to the state-of-the-art WSD methods. In order to evaluate the effectiveness of the dependency generator, we perform the same methodology without the parser, but with a window of words. The empirical results demonstrate that the proposed method is able to produce effective solutions for most instances of the datasets used. PMID:26422368

  17. Evaluating EMG Feature and Classifier Selection for Application to Partial-Hand Prosthesis Control

    PubMed Central

    Adewuyi, Adenike A.; Hargrove, Levi J.; Kuiken, Todd A.

    2016-01-01

    Pattern recognition-based myoelectric control of upper-limb prostheses has the potential to restore control of multiple degrees of freedom. Though this control method has been extensively studied in individuals with higher-level amputations, few studies have investigated its effectiveness for individuals with partial-hand amputations. Most partial-hand amputees retain a functional wrist and the ability of pattern recognition-based methods to correctly classify hand motions from different wrist positions is not well studied. In this study, focusing on partial-hand amputees, we evaluate (1) the performance of non-linear and linear pattern recognition algorithms and (2) the performance of optimal EMG feature subsets for classification of four hand motion classes in different wrist positions for 16 non-amputees and 4 amputees. Our results show that linear discriminant analysis and linear and non-linear artificial neural networks perform significantly better than the quadratic discriminant analysis for both non-amputees and partial-hand amputees. For amputees, including information from multiple wrist positions significantly decreased error (p < 0.001) but no further significant decrease in error occurred when more than 4, 2, or 3 positions were included for the extrinsic (p = 0.07), intrinsic (p = 0.06), or combined extrinsic and intrinsic muscle EMG (p = 0.08), respectively. Finally, we found that a feature set determined by selecting optimal features from each channel outperformed the commonly used time domain (p < 0.001) and time domain/autoregressive feature sets (p < 0.01). This method can be used as a screening filter to select the features from each channel that provide the best classification of hand postures across different wrist positions. PMID:27807418

  18. Classifier-guided sampling for discrete variable, discontinuous design space exploration: Convergence and computational performance

    SciTech Connect

    Backlund, Peter B.; Shahan, David W.; Seepersad, Carolyn Conner

    2014-04-22

    A classifier-guided sampling (CGS) method is introduced for solving engineering design optimization problems with discrete and/or continuous variables and continuous and/or discontinuous responses. The method merges concepts from metamodel-guided sampling and population-based optimization algorithms. The CGS method uses a Bayesian network classifier for predicting the performance of new designs based on a set of known observations or training points. Unlike most metamodeling techniques, however, the classifier assigns a categorical class label to a new design, rather than predicting the resulting response in continuous space, and thereby accommodates nondifferentiable and discontinuous functions of discrete or categorical variables. The CGS method uses these classifiers to guide a population-based sampling process towards combinations of discrete and/or continuous variable values with a high probability of yielding preferred performance. Accordingly, the CGS method is appropriate for discrete/discontinuous design problems that are ill-suited for conventional metamodeling techniques and too computationally expensive to be solved by population-based algorithms alone. In addition, the rates of convergence and computational properties of the CGS method are investigated when applied to a set of discrete variable optimization problems. Results show that the CGS method significantly improves the rate of convergence towards known global optima, on average, when compared to genetic algorithms.

  19. Classifier-guided sampling for discrete variable, discontinuous design space exploration: Convergence and computational performance

    NASA Astrophysics Data System (ADS)

    Backlund, Peter B.; Shahan, David W.; Conner Seepersad, Carolyn

    2015-05-01

    A classifier-guided sampling (CGS) method is introduced for solving engineering design optimization problems with discrete and/or continuous variables and continuous and/or discontinuous responses. The method merges concepts from metamodel-guided sampling and population-based optimization algorithms. The CGS method uses a Bayesian network classifier for predicting the performance of new designs based on a set of known observations or training points. Unlike most metamodelling techniques, however, the classifier assigns a categorical class label to a new design, rather than predicting the resulting response in continuous space, and thereby accommodates non-differentiable and discontinuous functions of discrete or categorical variables. The CGS method uses these classifiers to guide a population-based sampling process towards combinations of discrete and/or continuous variable values with a high probability of yielding preferred performance. Accordingly, the CGS method is appropriate for discrete/discontinuous design problems that are ill suited for conventional metamodelling techniques and too computationally expensive to be solved by population-based algorithms alone. The rates of convergence and computational properties of the CGS method are investigated when applied to a set of discrete variable optimization problems. Results show that the CGS method significantly improves the rate of convergence towards known global optima, on average, compared with genetic algorithms.

  20. Massively Multi-core Acceleration of a Document-Similarity Classifier to Detect Web Attacks

    SciTech Connect

    Ulmer, C; Gokhale, M; Top, P; Gallagher, B; Eliassi-Rad, T

    2010-01-14

    This paper describes our approach to adapting a text document similarity classifier based on the Term Frequency Inverse Document Frequency (TFIDF) metric to two massively multi-core hardware platforms. The TFIDF classifier is used to detect web attacks in HTTP data. In our parallel hardware approaches, we design streaming, real time classifiers by simplifying the sequential algorithm and manipulating the classifier's model to allow decision information to be represented compactly. Parallel implementations on the Tilera 64-core System on Chip and the Xilinx Virtex 5-LX FPGA are presented. For the Tilera, we employ a reduced state machine to recognize dictionary terms without requiring explicit tokenization, and achieve throughput of 37MB/s at slightly reduced accuracy. For the FPGA, we have developed a set of software tools to help automate the process of converting training data to synthesizable hardware and to provide a means of trading off between accuracy and resource utilization. The Xilinx Virtex 5-LX implementation requires 0.2% of the memory used by the original algorithm. At 166MB/s (80X the software) the hardware implementation is able to achieve Gigabit network throughput at the same accuracy as the original algorithm.
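
    A minimal sketch of a TFIDF similarity classifier applied to HTTP request strings is given below; it is not the paper's streaming or hardware implementation, and the example requests, labels, and nearest-centroid model are stand-ins.

```python
# Rough sketch of a TFIDF-based similarity classifier for HTTP request
# strings; the example requests and labels are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestCentroid

requests = [
    "GET /index.html HTTP/1.1",
    "GET /images/logo.png HTTP/1.1",
    "GET /login.php?user=admin'--&pass=x HTTP/1.1",
    "GET /search?q=<script>alert(1)</script> HTTP/1.1",
]
labels = ["benign", "benign", "attack", "attack"]

# Tokenize on character n-grams so punctuation-heavy payloads are kept.
vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
X = vec.fit_transform(requests)

# Similarity to per-class centroids gives a compact model, in the spirit of
# the simplified streaming classifier described above.
clf = NearestCentroid().fit(X.toarray(), labels)
print(clf.predict(vec.transform(["GET /a.php?id=1 OR 1=1 HTTP/1.1"]).toarray()))
```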

  1. Identifying organic-rich Marcellus Shale lithofacies by support vector machine classifier in the Appalachian basin

    NASA Astrophysics Data System (ADS)

    Wang, Guochang; Carr, Timothy R.; Ju, Yiwen; Li, Chaofeng

    2014-03-01

    Because unconventional shale reservoirs have extremely low matrix permeability, higher potential gas productivity requires not only sufficient gas-in-place but also a high concentration of brittle minerals (silica and/or carbonate) that is amenable to hydraulic fracturing. Shale lithofacies is primarily defined by mineral composition and organic matter richness, and its representation as a 3-D model has advantages in recognizing productive zones of shale-gas reservoirs, designing horizontal wells and stimulation strategy, and aiding in understanding the depositional process of organic-rich shale. A challenging and key step is to effectively recognize shale lithofacies from conventional well logs, where the relationship is very complex and nonlinear. In the recognition of shale lithofacies, the application of the support vector machine (SVM), which is grounded in statistical learning theory and the structural risk minimization principle, is superior to the traditional empirical risk minimization principle employed by artificial neural networks (ANN). We propose an SVM classifier, combined with learning algorithms such as grid search, genetic algorithm and particle swarm optimization, and various kernel functions, as the approach to identify Marcellus Shale lithofacies. Compared with ANN classifiers, the SVM classifiers showed higher cross-validation accuracy, better stability and lower computational cost. The SVM classifier with a radial basis function kernel worked best when trained by particle swarm optimization. The lithofacies predicted using the SVM classifier are used to build a 3-D Marcellus Shale lithofacies model, which assists in identifying more productive zones, especially in combination with thermal maturity and natural fractures.
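
    The sketch below illustrates one of the tuning strategies mentioned above (grid search over an RBF-kernel SVM); it is not the authors' workflow, and the log curves, lithofacies codes, and parameter grid are synthetic placeholders.

```python
# Hedged sketch: an RBF-kernel SVM tuned by grid search over (C, gamma);
# the "log" features and lithofacies labels are random stand-ins.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 5))          # e.g. 5 conventional log curves
y = rng.integers(0, 7, size=300)       # hypothetical lithofacies codes

pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
grid = {"svc__C": [1, 10, 100], "svc__gamma": ["scale", 0.1, 0.01]}
search = GridSearchCV(pipe, grid, cv=5).fit(X, y)
print(search.best_params_, search.best_score_)
```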

  2. The Online Vehicle Type Classifier Design for Road-Side Radar Detectors

    NASA Astrophysics Data System (ADS)

    Jou, Yow-Jen; Chen, Yu-Kuang

    2009-08-01

    This study presents an online vehicle type classifier for road-side radar detectors in multi-lane environments. An automatic learning framework comprising a parametric statistical model and learning algorithms is introduced. The parameters of the online vehicle type classifier are trained with vehicles passing in front of the detectors, and the classifier identifies the vehicle type in real time. The road-side radar detector is based on frequency-modulated continuous-wave (FMCW) radar with a carrier frequency in the X-band. Vehicles are classified into two major categories, large and small. The classification is based on (i) the average energy maximum and (ii) the average energy variance, which are extracted from the frequency-domain signatures caused by passing vehicles. A two-dimensional Gaussian mixture model (GMM) is employed as the learning model, and the expectation-maximization (EM) algorithm is implemented to obtain the GMM parameters. Numerical examples are demonstrated with real-world experiments. In the field tests, the automatic framework delivers an accuracy of at least 88%, even in extreme scenarios (including (i) small samples and (ii) large differences in sample size between vehicle types). The examples show satisfying results for the proposed online vehicle type classifier.
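
    A minimal sketch of the two-feature GMM/EM step is shown below; the feature values and the rule for naming the components are invented for illustration and do not reproduce the deployed system.

```python
# Minimal sketch of the two-feature Gaussian mixture idea: fit a 2-component
# GMM with EM and name the components large/small; data are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
small = rng.normal([1.0, 0.5], 0.2, size=(200, 2))   # (energy max, energy variance)
large = rng.normal([3.0, 1.5], 0.4, size=(60, 2))
X = np.vstack([small, large])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)   # EM under the hood
component = gmm.predict([[2.8, 1.4]])[0]
label = "large" if gmm.means_[component, 0] > gmm.means_[:, 0].mean() else "small"
print(label)
```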

  3. Distance weighted 'inside disc' classifier for computer-aided diagnosis of colonic polyps

    NASA Astrophysics Data System (ADS)

    Hu, Yifan; Song, Bowen; Pickhardt, Perry J.; Liang, Zhengrong

    2015-03-01

    Feature classification plays an important role in computer-aided diagnosis (CADx) of suspicious lesions or polyps, the concern of this study. As one of the simplest machine learning algorithms, the k-nearest neighbor (k-NN) classifier has been widely used in many classification problems. However, the k-NN classifier has a drawback: the majority classes will dominate the prediction of a new sample. To mitigate this drawback, efforts have been devoted to setting weights on the neighbors to limit the influence of the "majority" classes. As a result, various weighted or wk-NN strategies have been explored. In this paper, we explored an alternative strategy, called the "distance weighted inside disc" (DWID) classifier, which differs from the k-NN and wk-NN in that it classifies the test point by assigning a label (instead of a weight) while considering only those points inside the disc whose center is the test point, rather than the k-nearest points. We evaluated this new DWID classifier against the k-NN, wk-NN, support vector machine (SVM) and random forest (RF) classifiers by experiments on a database of 153 polyps, including 116 neoplastic (malignant) polyps and 37 hyperplastic (benign) polyps, in terms of CADx or differentiation of benign from malignant polyps. The evaluation outcomes were documented quantitatively by Receiver Operating Characteristic (ROC) analysis and the area under the ROC curve (AUC), a well-established evaluation criterion for classifiers. The results showed a noticeable gain in polyp differentiation by this new classifier according to the AUC values, as compared to the k-NN and wk-NN, as well as the SVM and RF. In the meantime, the new classifier also showed a noticeable reduction in computing time.
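
    A fixed-radius, distance-weighted neighborhood model is close in spirit to the DWID idea; the sketch below uses scikit-learn's radius-based classifier as a stand-in on synthetic two-class data, not the authors' implementation.

```python
# Sketch of a fixed-radius, distance-weighted neighborhood classifier on
# synthetic "polyp feature" data; radius and cluster parameters are invented.
import numpy as np
from sklearn.neighbors import RadiusNeighborsClassifier

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0.0, 1.0, size=(116, 3)),    # neoplastic-like cluster
               rng.normal(2.5, 1.0, size=(37, 3))])    # hyperplastic-like cluster
y = np.array([1] * 116 + [0] * 37)

# Only training points inside the disc of radius 1.5 around the test point
# vote, weighted by inverse distance.
clf = RadiusNeighborsClassifier(radius=1.5, weights="distance",
                                outlier_label="most_frequent").fit(X, y)
print(clf.predict([[2.0, 2.0, 2.0]]))
```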

  4. Classifiers as Count Syntax: Individuation and Measurement in the Acquisition of Mandarin Chinese

    PubMed Central

    Li, Peggy; Barner, David; Huang, Becky H.

    2009-01-01

    The distinction between mass nouns (e.g., butter) and count nouns (e.g., table) offers a test case for asking how the syntax and semantics of natural language are related, and how children exploit syntax-semantics mappings when acquiring language. Virtually no studies have examined this distinction in classifier languages (e.g., Mandarin Chinese) due to the widespread assumption that such languages lack mass-count syntax. However, Cheng and Sybesma (1998) argue that Mandarin encodes the mass-count distinction at the classifier level: classifiers can be categorized as “mass-classifiers” or “count-classifiers.” Mass and count classifiers differ in semantic interpretation and occur in different syntactic constructions. The current study is first an empirical test of Cheng and Sybesma’s hypothesis, and second, a test of the acquisition of putative mass and count classifiers by children learning Mandarin. Experiments 1 and 2 asked whether count-classifiers select individuals and mass classifiers select nonindividuals and sets of individuals. Adult Mandarin-speakers indeed showed this pattern of interpretation, while 4- to 6-year-olds had not fully mastered the distinction. Experiment 3 tested participants’ syntactic sensitivity by asking them to match two syntactic constructions (one that supported the mass or portion reading and one that did not) to two contrasting choices (a portion of an object and a whole object). A developmental trend was observed for the syntactic knowledge from 4-year-old children into adulthood: adults were near perfect and the older children were more likely than the younger children to correctly match the contrasting phrases to the objects. Thus, in three experiments we find support for Cheng and Sybesma’s analysis, but also find that children master the syntax and semantics of Mandarin classifiers much later than English-speaking children acquire knowledge of the English mass-count distinction. PMID:20151047

  5. 70. PRIMARY MILL AND CLASSIFIER No. 2 FROM NORTHWEST. MILL ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    70. PRIMARY MILL AND CLASSIFIER No. 2 FROM NORTHWEST. MILL DISCHARGED INTO LAUNDER WHICH PIERCED THE SIDE OF THE CLASSIFIER PAN. WOOD LAUNDER WITHIN CLASSIFIER VISIBLE (FILLED WITH DEBRIS). HORIZONTAL WOOD PLANKING BEHIND MILL IS FEED BOX. MILL SOLUTION PIPING RUNS ALONG BASE OF WEST SIDE OF CLASSIFIER. - Bald Mountain Gold Mill, Nevada Gulch at head of False Bottom Creek, Lead, Lawrence County, SD

  6. Cross-classified occupational exposure data.

    PubMed

    Jones, Rachael M; Burstyn, Igor

    2016-09-01

    We demonstrate the regression analysis of exposure determinants using cross-classified random effects in the context of lead exposures resulting from blasting surfaces in advance of painting. We had three specific objectives for analysis of the lead data, and observed: (1) high within-worker variability in personal lead exposures, explaining 79% of variability; (2) that the lead concentration outside of half-mask respirators was 2.4-fold higher than inside supplied-air blasting helmets, suggesting that the exposure reduction by blasting helmets may be lower than expected by the Assigned Protection Factor; and (3) that lead concentrations at fixed area locations in containment were not associated with personal lead exposures. In addition, we found that, on average, lead exposures among workers performing blasting and other activities were 40% lower than among workers performing only blasting. In the process of meeting these analysis objectives, we determined that the data were non-hierarchical: repeated exposure measurements were collected for a worker while the worker was a member of several groups, or cross-classified among groups. Since the worker is a member of multiple groups, the exposure data do not adhere to the traditionally assumed hierarchical structure. Forcing a hierarchical structure on these data led to similar within-group and between-group variability, but decreased precision in the estimate of the effect of work activity on lead exposure. We hope hygienists and exposure assessors will consider non-hierarchical models in the design and analysis of exposure assessments. PMID:27029937

  7. Mercury⊕: An evidential reasoning image classifier

    NASA Astrophysics Data System (ADS)

    Peddle, Derek R.

    1995-12-01

    MERCURY⊕ is a multisource evidential reasoning classification software system based on the Dempster-Shafer theory of evidence. The design and implementation of this software package is described for improving the classification and analysis of multisource digital image data necessary for addressing advanced environmental and geoscience applications. In the remote-sensing context, the approach provides a more appropriate framework for classifying modern, multisource, and ancillary data sets which may contain a large number of disparate variables with different statistical properties, scales of measurement, and levels of error which cannot be handled using conventional Bayesian approaches. The software uses a nonparametric, supervised approach to classification, and provides a more objective and flexible interface to the evidential reasoning framework using a frequency-based method for computing support values from training data. The MERCURY⊕ software package has been implemented efficiently in the C programming language, with extensive use made of dynamic memory allocation procedures and compound linked list and hash-table data structures to optimize the storage and retrieval of evidence in a Knowledge Look-up Table. The software is complete with a full user interface and runs under the Unix, Ultrix, VAX/VMS, MS-DOS, and Apple Macintosh operating systems. An example of classifying alpine land cover and permafrost active layer depth in northern Canada is presented to illustrate the use and application of these ideas.
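
    As a toy illustration of the evidence-combination step that underlies such systems (not the MERCURY⊕ code, which is written in C), the sketch below applies Dempster's rule of combination to two invented mass functions over a small frame of discernment.

```python
# Toy illustration of Dempster's rule of combination; class names and
# mass values are invented.
from itertools import product

frame = frozenset({"forest", "water", "tundra"})

def combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass assigned to contradictory pairs
    # Renormalize by the non-conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Evidence from two hypothetical data sources (e.g. two image bands).
m_band1 = {frozenset({"forest"}): 0.6, frame: 0.4}
m_band2 = {frozenset({"forest", "water"}): 0.7, frame: 0.3}
print(combine(m_band1, m_band2))
```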

  8. Classifying multispectral data by neural networks

    NASA Technical Reports Server (NTRS)

    Telfer, Brian A.; Szu, Harold H.; Kiang, Richard K.

    1993-01-01

    Several energy functions for synthesizing neural networks are tested on 2-D synthetic data and on Landsat-4 Thematic Mapper data. These new energy functions, designed specifically for minimizing misclassification error, in some cases yield significant improvements in classification accuracy over the standard least mean squares energy function. In addition to operating on networks with one output unit per class, a new energy function is tested for binary encoded outputs, which result in smaller network sizes. The Thematic Mapper data (four bands were used) is classified on a single pixel basis, to provide a starting benchmark against which further improvements will be measured. Improvements are underway to make use of both subpixel and superpixel (i.e. contextual or neighborhood) information in the processing. For single pixel classification, the best neural network result is 78.7 percent, compared with 71.7 percent for a classical nearest neighbor classifier. The 78.7 percent result also improves on several earlier neural network results on this data.

  9. NOSS Altimeter Detailed Algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Mcmillan, J. D.

    1982-01-01

    The details of the algorithms and data sets required for satellite radar altimeter data processing are documented in a form suitable for (1) development of the benchmark software and (2) coding the operational software. The algorithms reported in detail are those established for altimeter processing. The algorithms which required some additional development before documenting for production were only scoped. The algorithms are divided into two levels of processing. The first level converts the data to engineering units and applies corrections for instrument variations. The second level provides geophysical measurements derived from altimeter parameters for oceanographic users.

  10. 77 FR 72199 - Technical Corrections; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-05

    ...) is correcting a final rule that was published in the Federal Register on July 6, 2012 (77 FR 39899), and effective on August 6, 2012. That final rule amended the NRC regulations to make technical... COMMISSION 10 CFR Part 171 RIN 3150-AJ16 Technical Corrections; Correction AGENCY: Nuclear...

  11. A convolutional neural network neutrino event classifier

    DOE PAGES

    Aurisano, A.; Radovic, A.; Rocco, D.; Himmel, A.; Messier, M. D.; Niner, E.; Pawloski, G.; Psihas, F.; Sousa, A.; Vahle, P.

    2016-09-01

    Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network), identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  12. A convolutional neural network neutrino event classifier

    NASA Astrophysics Data System (ADS)

    Aurisano, A.; Radovic, A.; Rocco, D.; Himmel, A.; Messier, M. D.; Niner, E.; Pawloski, G.; Psihas, F.; Sousa, A.; Vahle, P.

    2016-09-01

    Convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  13. Using naive Bayes classifier for classification of convective rainfall intensities based on spectral characteristics retrieved from SEVIRI

    NASA Astrophysics Data System (ADS)

    Hameg, Slimane; Lazri, Mourad; Ameur, Soltane

    2016-07-01

    This paper presents a new algorithm to classify convective clouds and determine their intensity, based on cloud physical properties retrieved from the Spinning Enhanced Visible and Infrared Imager (SEVIRI). The convective rainfall events at 15 min, 4 × 5 km spatial resolution from 2006 to 2012 are analysed over northern Algeria. The convective rain classification methodology makes use of the relationship between cloud spectral characteristics and cloud physical properties such as cloud water path (CWP), cloud phase (CP) and cloud top height (CTH). For this classification, a statistical method based on the naive Bayes classifier is applied. This is a simple probabilistic classifier based on applying Bayes' theorem with strong (naive) independence assumptions. For a 9-month period, the ability of SEVIRI to classify the rainfall intensity in convective clouds is evaluated using weather radar over northern Algeria. The results indicate an encouraging performance of the new algorithm for intensity differentiation of convective clouds using SEVIRI data.
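
    A minimal sketch of the naive Bayes step is shown below; the cloud-property features, class names, and values are invented placeholders rather than the SEVIRI retrievals used in the paper.

```python
# Toy sketch of the naive Bayes step: classify rain intensity from cloud
# properties (CWP, CP, CTH); feature values and classes are invented.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(5)
# Columns: cloud water path, cloud phase (coded), cloud top height.
X = rng.normal(size=(500, 3))
y = rng.choice(["no rain", "moderate", "convective"], size=500)

model = GaussianNB().fit(X, y)
print(model.classes_)
print(model.predict_proba([[1.2, 0.8, 2.0]]).round(3))
```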

  14. Classification accuracy of algorithms for blood chemistry data for three aquaculture-affected marine fish species.

    PubMed

    Coz-Rakovac, R; Topic Popovic, N; Smuc, T; Strunjak-Perovic, I; Jadan, M

    2009-11-01

    The objective of this study was the determination and discrimination of biochemical data among three aquaculture-affected marine fish species (sea bass, Dicentrarchus labrax; sea bream, Sparus aurata L.; and mullet, Mugil spp.) based on machine-learning methods. The approach relying on machine-learning methods gives more usable classification solutions and provides better insight into the collected data. So far, these new methods have been applied to the problem of discriminating blood chemistry data with respect to season and feed of a single species. This is the first time these classification algorithms have been used as a framework for rapid differentiation among three fish species. Among the machine-learning methods used, decision trees provided the clearest model, which correctly classified 210 samples (85.71%) and incorrectly classified 35 samples (14.29%), and clearly identified the three investigated species from their biochemical traits.

  15. A machine learned classifier for RR Lyrae in the VVV survey

    NASA Astrophysics Data System (ADS)

    Elorrieta, Felipe; Eyheramendy, Susana; Jordán, Andrés; Dékány, István; Catelan, Márcio; Angeloni, Rodolfo; Alonso-García, Javier; Contreras-Ramos, Rodrigo; Gran, Felipe; Hajdu, Gergely; Espinoza, Néstor; Saito, Roberto K.; Minniti, Dante

    2016-11-01

    Variable stars of RR Lyrae type are a prime tool with which to obtain distances to old stellar populations in the Milky Way. One of the main aims of the Vista Variables in the Via Lactea (VVV) near-infrared survey is to use them to map the structure of the Galactic Bulge. Owing to the large number of expected sources, this requires an automated mechanism for selecting RR Lyrae, and particularly those of the more easily recognized type ab (i.e., fundamental-mode pulsators), from the 10^6-10^7 variables expected in the VVV survey area. In this work we describe a supervised machine-learned classifier constructed for assigning a score to a Ks-band VVV light curve that indicates its likelihood of being an ab-type RR Lyrae. We describe the key steps in the construction of the classifier, which were the choice of features, training set, selection of aperture, and family of classifiers. We find that the AdaBoost family of classifiers consistently gives the best performance for our problem, and we obtain a classifier based on the AdaBoost algorithm that achieves a harmonic mean between false positives and false negatives of ≈7% for typical VVV light-curve sets. This performance is estimated using cross-validation and through comparison to two independent datasets that were classified by human experts.
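
    The sketch below shows the general pattern of scoring feature vectors with AdaBoost and estimating performance by cross-validation; the light-curve features, class balance, and score cut are invented and do not reproduce the VVV pipeline.

```python
# Sketch of AdaBoost scoring with cross-validated probability estimates;
# the features here are random stand-ins for light-curve statistics.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import f1_score

rng = np.random.default_rng(6)
X = rng.normal(size=(1000, 12))              # hypothetical light-curve features
y = (rng.random(1000) < 0.1).astype(int)     # ~10% RRab-like positives

clf = AdaBoostClassifier(n_estimators=200, random_state=0)
scores = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
print("F1 at a 0.5 score cut:", round(f1_score(y, scores > 0.5), 3))
```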

  16. Using Statistical Techniques and Web Search to Correct ESL Errors

    ERIC Educational Resources Information Center

    Gamon, Michael; Leacock, Claudia; Brockett, Chris; Dolan, William B.; Gao, Jianfeng; Belenko, Dmitriy; Klementiev, Alexandre

    2009-01-01

    In this paper we present a system for automatic correction of errors made by learners of English. The system has two novel aspects. First, machine-learned classifiers trained on large amounts of native data and a very large language model are combined to optimize the precision of suggested corrections. Second, the user can access real-life web…

  17. moco: Fast Motion Correction for Calcium Imaging.

    PubMed

    Dubbs, Alexander; Guevara, James; Yuste, Rafael

    2016-01-01

    Motion correction is the first step in a pipeline of algorithms to analyze calcium imaging videos and extract biologically relevant information, for example the network structure of the neurons therein. Fast motion correction is especially critical for closed-loop activity triggered stimulation experiments, where accurate detection and targeting of specific cells is necessary. We introduce a novel motion-correction algorithm which uses a Fourier-transform approach, and a combination of judicious downsampling and the accelerated computation of many L2 norms using dynamic programming and two-dimensional, fft-accelerated convolutions, to enhance its efficiency. Its accuracy is comparable to that of established community-used algorithms, and it is more stable to large translational motions. It is programmed in Java and is compatible with ImageJ. PMID:26909035
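
    As a rough illustration of the Fourier-transform idea (not the moco Java implementation), the sketch below estimates an integer translation between two frames from the argmax of an FFT-based cross-correlation; the frames are synthetic.

```python
# Minimal sketch of Fourier-based translation estimation, the core idea
# behind FFT-accelerated motion correction; frame contents are synthetic.
import numpy as np

def estimate_shift(ref, frame):
    # Cross-correlate via FFT; the argmax gives the shift mapping ref to frame.
    corr = np.fft.ifft2(np.fft.fft2(frame) * np.conj(np.fft.fft2(ref))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap circular indices into the signed range.
    dy = dy - ref.shape[0] if dy > ref.shape[0] // 2 else dy
    dx = dx - ref.shape[1] if dx > ref.shape[1] // 2 else dx
    return dy, dx

rng = np.random.default_rng(7)
ref = rng.random((64, 64))
frame = np.roll(ref, shift=(3, -5), axis=(0, 1))   # simulate a translated frame
print(estimate_shift(ref, frame))                  # expect (3, -5)
```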

  18. moco: Fast Motion Correction for Calcium Imaging

    PubMed Central

    Dubbs, Alexander; Guevara, James; Yuste, Rafael

    2016-01-01

    Motion correction is the first step in a pipeline of algorithms to analyze calcium imaging videos and extract biologically relevant information, for example the network structure of the neurons therein. Fast motion correction is especially critical for closed-loop activity triggered stimulation experiments, where accurate detection and targeting of specific cells is necessary. We introduce a novel motion-correction algorithm which uses a Fourier-transform approach, and a combination of judicious downsampling and the accelerated computation of many L2 norms using dynamic programming and two-dimensional, fft-accelerated convolutions, to enhance its efficiency. Its accuracy is comparable to that of established community-used algorithms, and it is more stable to large translational motions. It is programmed in Java and is compatible with ImageJ. PMID:26909035

  19. moco: Fast Motion Correction for Calcium Imaging.

    PubMed

    Dubbs, Alexander; Guevara, James; Yuste, Rafael

    2016-01-01

    Motion correction is the first step in a pipeline of algorithms to analyze calcium imaging videos and extract biologically relevant information, for example the network structure of the neurons therein. Fast motion correction is especially critical for closed-loop activity triggered stimulation experiments, where accurate detection and targeting of specific cells is necessary. We introduce a novel motion-correction algorithm which uses a Fourier-transform approach, and a combination of judicious downsampling and the accelerated computation of many L2 norms using dynamic programming and two-dimensional, fft-accelerated convolutions, to enhance its efficiency. Its accuracy is comparable to that of established community-used algorithms, and it is more stable to large translational motions. It is programmed in Java and is compatible with ImageJ.

  20. New algorithm for nonlinear vector-based upconversion with center weighted medians

    NASA Astrophysics Data System (ADS)

    Blume, Holger

    1997-07-01

    One important task in the field of digital video signal processing is the conversion of one standard into another with different field and scan rates. Therefore a new vector-based nonlinear upconversion algorithm has been developed that applies nonlinear center weighted median (CWM) filters. Assuming a two channel model of the human visual system with different spatio-temporal characteristics, there are contrary demands for the CWM filters. One can meet these demands by a vertical band separation and an application of so-called temporally and spatially dominated CWMs. By this means, interpolation errors of the separated channels can be compensated by an adequate splitting of the spectrum. Therefore a very robust, vector-error-tolerant upconversion method can be achieved, which significantly improves the interpolation quality. By an appropriate choice of the CWM filter root structures, main picture elements are interpolated correctly even if faulty vector fields occur. To demonstrate the correctness of the deduced interpolation scheme, picture content is classified. These classes are distinguished by correct or incorrect vector assignment and correlated or noncorrelated picture content. The mode of operation of the new algorithm is portrayed for each class. Whereas the mode of operation for correlated picture content can be shown by object models, this is shown for noncorrelated picture content by the probability distribution function of the applied CWM filters. The new algorithm has been verified by objective evaluation methods (peak signal-to-noise ratio and mean square error measurements) and by a comprehensive subjective test series.
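
    A minimal 1-D sketch of a center weighted median (CWM) filter is given below to make the CWM operation concrete; the window length, center weight, and signal are arbitrary illustration values, and the paper's filters operate on 2-D vector data.

```python
# Small sketch of a 1-D center weighted median filter: the center sample is
# counted `weight` times before the median is taken.
import numpy as np

def cwm_filter(signal, window=5, weight=3):
    half = window // 2
    padded = np.pad(signal, half, mode="edge")
    out = np.empty_like(signal, dtype=float)
    for i in range(len(signal)):
        neigh = padded[i:i + window].tolist()
        # Duplicate the center sample (weight - 1) extra times.
        neigh += [padded[i + half]] * (weight - 1)
        out[i] = np.median(neigh)
    return out

x = np.array([0, 0, 0, 10, 0, 0, 1, 1, 1, 1], dtype=float)
print(cwm_filter(x))
```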

  1. Evaluation and analysis of Seasat-A Scanning multichannel Microwave radiometer (SMMR) Antenna Pattern Correction (APC) algorithm. Sub-task 2: T sub B measured vs. T sub B calculated comparison results

    NASA Technical Reports Server (NTRS)

    Kitzis, J. L.; Kitzis, S. N.

    1979-01-01

    Interim Antenna Pattern Correction (APC) brightness temperature measurements for all ten SMMR channels are compared with calculated values generated from surface truth data. Plots and associated statistics are presented for the available points of coincidence between SMMR and surface truth measurements acquired for the Gulf of Alaska SEASAT Experiment. The most important conclusions of the study deal with the apparent existence of different instrument biases for each SMMR channel, and their variation across the scan.

  2. Validation of an Algorithm to Estimate Gestational Age in Electronic Health Plan Databases

    PubMed Central

    Li, Qian; Andrade, Susan E.; Cooper, William O.; Davis, Robert L.; Dublin, Sascha; Hammad, Tarek A.; Pawloski, Pamala A.; Pinheiro, Simone P.; Raebel, Marsha A.; Scott, Pamela E.; Smith, David H.; Dashevsky, Inna; Haffenreffer, Katie; Johnson, Karin E.; Toh, Sengwee

    2013-01-01

    Purpose To validate an algorithm that uses delivery date and diagnosis codes to define gestational age at birth in electronic health plan databases. Methods Using data from 225,384 live born deliveries among women aged 15–45 years in 2001–2007 within 8 of the 11 health plans participating in the Medication Exposure in Pregnancy Risk Evaluation Program, we compared 1) the algorithm-derived gestational age versus the “gold-standard” gestational age obtained from the infant birth certificate files; and 2) the prenatal exposure status of two antidepressants (fluoxetine and sertraline) and two antibiotics (amoxicillin and azithromycin) as determined by the algorithm-derived versus the gold-standard gestational age. Results The mean algorithm-derived gestational age at birth was lower than the mean obtained from the birth certificate files among singleton deliveries (267.9 versus 273.5 days) but not among multiple-gestation deliveries (253.9 versus 252.6 days). The algorithm-derived prenatal exposure to the antidepressants had a sensitivity and a positive predictive value (PPV) of ≥95%, and a specificity and a negative predictive value (NPV) of almost 100%. Sensitivity and PPV were both ≥90%, and specificity and NPV were both >99% for the antibiotics. Conclusions A gestational age algorithm based upon electronic health plan data correctly classified medication exposure status in most live born deliveries, but misclassification may be higher for drugs typically used for short durations. PMID:23335117

  3. A cognitive approach to classifying perceived behaviors

    NASA Astrophysics Data System (ADS)

    Benjamin, Dale Paul; Lyons, Damian

    2010-04-01

    This paper describes our work on integrating distributed, concurrent control in a cognitive architecture, and using it to classify perceived behaviors. We are implementing the Robot Schemas (RS) language in Soar. RS is a CSP-type programming language for robotics that controls a hierarchy of concurrently executing schemas. The behavior of every RS schema is defined using port automata. This provides precision to the semantics and also a constructive means of reasoning about the behavior and meaning of schemas. Our implementation uses Soar operators to build, instantiate and connect port automata as needed. Our approach is to use comprehension through generation (similar to NLSoar) to search for ways to construct port automata that model perceived behaviors. The generality of RS permits us to model dynamic, concurrent behaviors. A virtual world (Ogre) is used to test the accuracy of these automata. Soar's chunking mechanism is used to generalize and save these automata. In this way, the robot learns to recognize new behaviors.

  4. Classifying antiarrhythmic actions: by facts or speculation.

    PubMed

    Vaughan Williams, E M

    1992-11-01

    Classification of antiarrhythmic actions is reviewed in the context of the results of the Cardiac Arrhythmia Suppression Trials, CAST 1 and 2. Six criticisms of the classification recently published (The Sicilian Gambit) are discussed in detail. The alternative classification, when stripped of speculative elements, is shown to be similar to the original classification. Claims that the classification failed to predict the efficacy of antiarrhythmic drugs for the selection of appropriate therapy have been tested by an example. The antiarrhythmic actions of cibenzoline were classified in 1980. A detailed review of confirmatory experiments and clinical trials during the past decade shows that predictions made at the time agree with subsequent results. Classification of the effects drugs actually have on functioning cardiac tissues provides a rational basis for finding the preferred treatment for a particular arrhythmia in accordance with the diagnosis.

  5. Classifying prion and prion-like phenomena.

    PubMed

    Harbi, Djamel; Harrison, Paul M

    2014-01-01

    The universe of prion and prion-like phenomena has expanded significantly in the past several years. Here, we overview the challenges in classifying this data informatically, given that terms such as "prion-like", "prion-related" or "prion-forming" do not have a stable meaning in the scientific literature. We examine the spectrum of proteins that have been described in the literature as forming prions, and discuss how "prion" can have a range of meaning, with a strict definition being for demonstration of infection with in vitro-derived recombinant prions. We suggest that although prion/prion-like phenomena can largely be apportioned into a small number of broad groups dependent on the type of transmissibility evidence for them, as new phenomena are discovered in the coming years, a detailed ontological approach might be necessary that allows for subtle definition of different "flavors" of prion / prion-like phenomena.

  6. Corrective Jaw Surgery

    MedlinePlus

    ... Cleft Lip/Palate and Craniofacial Surgery: A cleft lip may require one or more ... find out more. Corrective Jaw Surgery: Orthognathic surgery is performed to correct the misalignment ...

  7. Classifying supernovae using only galaxy data

    SciTech Connect

    Foley, Ryan J.; Mandel, Kaisey

    2013-12-01

    We present a new method for probabilistically classifying supernovae (SNe) without using SN spectral or photometric data. Unlike all previous studies to classify SNe without spectra, this technique does not use any SN photometry. Instead, the method relies on host-galaxy data. We build upon the well-known correlations between SN classes and host-galaxy properties, specifically that core-collapse SNe rarely occur in red, luminous, or early-type galaxies. Using the nearly spectroscopically complete Lick Observatory Supernova Search sample of SNe, we determine SN fractions as a function of host-galaxy properties. Using these data as inputs, we construct a Bayesian method for determining the probability that an SN is of a particular class. This method improves a common classification figure of merit by a factor of >2, comparable to the best light-curve classification techniques. Of the galaxy properties examined, morphology provides the most discriminating information. We further validate this method using SN samples from the Sloan Digital Sky Survey and the Palomar Transient Factory. We demonstrate that this method has wide-ranging applications, including separating different subclasses of SNe and determining the probability that an SN is of a particular class before photometry or even spectra can be obtained. Since this method uses completely independent data from light-curve techniques, there is potential to further improve the overall purity and completeness of SN samples and to test systematic biases of the light-curve techniques. Further enhancements to the host-galaxy method, including additional host-galaxy properties, combination with light-curve methods, and hybrid methods, should further improve the quality of SN samples from past, current, and future transient surveys.
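
    A toy version of the Bayesian host-galaxy step is sketched below; the prior and the class-conditional rates are invented numbers, not the fractions measured from the Lick Observatory Supernova Search sample.

```python
# Toy version of the host-galaxy Bayes step: combine a prior over SN classes
# with class-conditional rates for a host property; all numbers are invented.
import numpy as np

classes = ["Ia", "core-collapse"]
prior = np.array([0.5, 0.5])

# Hypothetical P(host is red & early-type | SN class), reflecting the fact
# that core-collapse SNe rarely occur in red, early-type galaxies.
likelihood_red_early = np.array([0.45, 0.05])

posterior = prior * likelihood_red_early
posterior /= posterior.sum()
for c, p in zip(classes, posterior):
    print(f"P({c} | red early-type host) = {p:.2f}")
```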

  8. Evaluation of classifiers for processing Hyperion (EO-1) data of tropical vegetation

    NASA Astrophysics Data System (ADS)

    Vyas, Dhaval; Krishnayya, N. S. R.; Manjunath, K. R.; Ray, S. S.; Panigrahy, Sushma

    2011-04-01

    There is an urgent necessity to monitor changes in the natural surface features of the earth. Compared to broadband multispectral data, hyperspectral data provide a better option with high spectral resolution. Classification of vegetation with the use of hyperspectral remote sensing generates a classical problem of high-dimensional inputs. Complexity is compounded as we move from airborne to spaceborne hyperspectral technology. It is unclear how different classification algorithms will perform on a complex scene of tropical forests collected by a spaceborne hyperspectral sensor. The present study was carried out to evaluate the performance of three different classifiers (Artificial Neural Network, Spectral Angle Mapper, Support Vector Machine) over highly diverse tropical forest vegetation utilizing hyperspectral (EO-1) data. Appropriate band selection was done by Stepwise Discriminant Analysis (SDA), which identified the 22 best bands for discriminating the eight identified tropical vegetation classes. The maximum number of selected bands came from the SWIR region. The ANN classifier gave the highest overall accuracy (OAA) of 81% using the 22 bands selected by SDA. The image classified with the help of SVM showed an OAA of 71%, whereas SAM showed the lowest OAA of 66%. All three classifiers were also tested to check their efficiency in classifying spectra coming from the 165 processed bands; here SVM showed the highest OAA of 80%. Classified subset images coming from ANN (from 22 bands) and SVM (from 165 bands) are quite similar in showing the distribution of the eight vegetation classes, and both appear close to the actual distribution of vegetation seen in the study area. The OAA levels obtained in this study by the ANN and SVM classifiers indicate the suitability of these classifiers for tropical vegetation discrimination.

  9. Design of radial basis function neural network classifier realized with the aid of data preprocessing techniques: design and analysis

    NASA Astrophysics Data System (ADS)

    Oh, Sung-Kwun; Kim, Wook-Dong; Pedrycz, Witold

    2016-05-01

    In this paper, we introduce a new architecture of optimized Radial Basis Function neural network classifier developed with the aid of fuzzy clustering and data preprocessing techniques and discuss its comprehensive design methodology. In the preprocessing part, the Linear Discriminant Analysis (LDA) or Principal Component Analysis (PCA) algorithm forms a front end of the network. The transformed data produced here are used as the inputs of the network. In the premise part, the Fuzzy C-Means (FCM) algorithm determines the receptive field associated with the condition part of the rules. The connection weights of the classifier are of functional nature and come as polynomial functions forming the consequent part. The Particle Swarm Optimization algorithm optimizes a number of essential parameters needed to improve the accuracy of the classifier. Those optimized parameters include the type of data preprocessing, the dimensionality of the feature vectors produced by the LDA (or PCA), the number of clusters (rules), the fuzzification coefficient used in the FCM algorithm and the orders of the polynomials of networks. The performance of the proposed classifier is reported for several benchmarking data-sets and is compared with the performance of other classifiers reported in the previous studies.

  10. Application of fusion algorithms for computer-aided detection and classification of bottom mines to shallow water test data from the battle space preparation autonomous underwater vehicle (BPAUV)

    NASA Astrophysics Data System (ADS)

    Ciany, Charles M.; Zurawski, William; Dobeck, Gerald J.

    2003-09-01

    Over the past several years, Raytheon Company has adapted its Computer-Aided Detection/Computer-Aided Classification (CAD/CAC) algorithm to process side-scan sonar imagery taken in both the Very Shallow Water (VSW) and Shallow Water (SW) operating environments. This paper describes the further adaptation of this CAD/CAC algorithm to process SW side-scan image data taken by the Battle Space Preparation Autonomous Underwater Vehicle (BPAUV), a vehicle made by Bluefin Robotics. The tuning of the CAD/CAC algorithm for the vehicle's sonar is described, the resulting classifier performance is presented, and the fusion of the classifier outputs with those of three other CAD/CAC processors is evaluated. The fusion algorithm accepts the classification confidence levels and associated contact locations from the four different CAD/CAC algorithms, clusters the contacts based on the distance between their locations, and then declares a valid target when a clustered contact passes a prescribed fusion criterion. Four different fusion criteria are evaluated: the first based on thresholding the sum of the confidence factors for the clustered contacts, the second and third based on simple and constrained binary combinations of the multiple CAD/CAC processor outputs, and the fourth based on the Fisher Discriminant. The resulting performance of the four fusion algorithms is compared, and the overall performance benefit of a significant reduction of false alarms at high correct classification probabilities is quantified. The optimal Fisher fusion algorithm yields a 90% probability of correct classification at a false alarm probability of 0.0062 false alarms per image per side, a 34:1 reduction in false alarms relative to the best performing single CAD/CAC algorithm.
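
    The simplest fusion rule described above (thresholding the summed confidence of spatially clustered contacts) can be sketched as follows; the contact coordinates, confidences, association distance, and threshold are invented.

```python
# Illustrative sketch of confidence-sum fusion: cluster contacts from several
# CAD/CAC processors by location and declare a target when the summed
# confidence of a cluster exceeds a threshold. All values are invented.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# (x, y, confidence) contacts pooled from several classifiers.
contacts = np.array([
    [10.0, 12.0, 0.7],
    [10.5, 11.8, 0.6],   # same object seen by a second processor
    [40.0, 5.0, 0.3],
])

# Single-linkage clustering with a 2 m association distance.
clusters = fcluster(linkage(contacts[:, :2], method="single"),
                    t=2.0, criterion="distance")

for c in np.unique(clusters):
    total = contacts[clusters == c, 2].sum()
    print(f"cluster {c}: summed confidence {total:.1f} ->",
          "target" if total > 1.0 else "reject")
```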

  11. Beam-hardening correction by a surface fitting and phase classification by a least square support vector machine approach for tomography images of geological samples

    NASA Astrophysics Data System (ADS)

    Khan, F.; Enzmann, F.; Kersten, M.

    2015-12-01

    In X-ray computed microtomography (μXCT), image processing is the most important operation prior to image analysis. Such processing mainly involves artefact reduction and image segmentation. We propose a new two-stage post-reconstruction procedure for an image of a geological rock core obtained by polychromatic cone-beam μXCT technology. In the first stage, the beam-hardening (BH) is removed by applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. The final BH-corrected image is extracted from the residual data, or the difference between the surface elevation values and the original grey-scale values. For the second stage, we propose using a least square support vector machine (a non-linear classifier algorithm) to segment the BH-corrected data as a pixel-based multi-classification task. A combination of the two approaches was used to classify a complex multi-mineral rock sample. The Matlab code for this approach is provided in the Appendix. A minor drawback is that the proposed segmentation algorithm may become computationally demanding in the case of a high-dimensional training data set.
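
    A minimal sketch of the first stage, fitting a quadratic surface to a slice by least squares and subtracting it, is given below on a synthetic image; it is not the authors' Matlab code.

```python
# Sketch of the first stage: fit z = f(x, y) as a quadratic surface by least
# squares and subtract it to flatten the beam-hardening trend. Synthetic slice.
import numpy as np

rng = np.random.default_rng(8)
n = 64
y, x = np.mgrid[0:n, 0:n].astype(float)
slice_ = 100 - 0.01 * ((x - n/2)**2 + (y - n/2)**2) + rng.normal(0, 1, (n, n))

# Design matrix for z = a + bx + cy + dx^2 + exy + fy^2.
A = np.column_stack([np.ones(n*n), x.ravel(), y.ravel(),
                     x.ravel()**2, (x*y).ravel(), y.ravel()**2])
coeffs, *_ = np.linalg.lstsq(A, slice_.ravel(), rcond=None)
surface = (A @ coeffs).reshape(n, n)

corrected = slice_ - surface          # residual image with the BH trend removed
print("residual mean:", round(float(corrected.mean()), 3))
```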

  12. Techniques for classifying acoustic resonant spectra

    SciTech Connect

    Roberts, R.S.; Lewis, P.S.; Chen, J.T.; Vela, O.A.

    1995-12-31

    A second-generation nondestructive evaluation (NDE) system that discriminates between different types of chemical munitions is under development. The NDE system extracts features from the acoustic spectra of known munitions, builds templates from these features, and performs classification by comparing features extracted from an unknown munition to a template library. Improvements over first-generation feature extraction, template construction, and classification algorithms are reported. Results are presented on the performance of the system using a large data set collected from surrogate-filled munitions.

  13. TVFMCATS. Time Variant Floating Mean Counting Algorithm

    SciTech Connect

    Huffman, R.K.

    1999-05-01

    This software was written to test a time variant floating mean counting algorithm. The algorithm was developed by Westinghouse Savannah River Company and a provisional patent has been filed on the algorithm. The test software was developed to work with the Val Tech model IVB prototype version II count rate meter hardware. The test software was used to verify that the algorithm developed by WSRC could be correctly implemented with the vendor's hardware.

  14. Time Variant Floating Mean Counting Algorithm

    SciTech Connect

    Huffman, Russell Kevin

    1999-06-03

    This software was written to test a time variant floating mean counting algorithm. The algorithm was developed by Westinghouse Savannah River Company and a provisional patent has been filed on the algorithm. The test software was developed to work with the Val Tech model IVB prototype version II count rate meter hardware. The test software was used to verify that the algorithm developed by WSRC could be correctly implemented with the vendor's hardware.

  15. Building Ultra-Low False Alarm Rate Support Vector Classifier Ensembles Using Random Subspaces

    SciTech Connect

    Chen, B Y; Lemmond, T D; Hanley, W G

    2008-10-06

    This paper presents the Cost-Sensitive Random Subspace Support Vector Classifier (CS-RS-SVC), a new learning algorithm that combines random subspace sampling and bagging with Cost-Sensitive Support Vector Classifiers to more effectively address detection applications burdened by unequal misclassification requirements. When compared to its conventional, non-cost-sensitive counterpart on a two-class signal detection application, random subspace sampling is shown to very effectively leverage the additional flexibility offered by the Cost-Sensitive Support Vector Classifier, yielding a more than four-fold increase in the detection rate at a false alarm rate (FAR) of zero. Moreover, the CS-RS-SVC is shown to be fairly robust to constraints on the feature subspace dimensionality, enabling reductions in computation time of up to 82% with minimal performance degradation.
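
    A hedged sketch of a random-subspace ensemble of cost-sensitive SVMs is shown below, built with scikit-learn's bagging wrapper rather than the authors' CS-RS-SVC code; the cost asymmetry is expressed through class weights, and the data are synthetic.

```python
# Sketch of a random-subspace ensemble of cost-sensitive SVMs; class weights
# stand in for the misclassification costs, and all data are synthetic.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(9)
X = rng.normal(size=(600, 30))
y = (rng.random(600) < 0.1).astype(int)      # rare "signal" class

# Penalize mistakes on the background class more heavily to push toward
# a low false alarm rate.
base = SVC(kernel="rbf", class_weight={0: 10.0, 1: 1.0}, probability=True)
ens = BaggingClassifier(base, n_estimators=25,
                        max_features=0.3,      # random feature subspaces
                        bootstrap=False,       # subspace sampling only
                        random_state=0).fit(X, y)
print(ens.predict_proba(X[:3])[:, 1])
```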

  16. National estuarine inventory: Classified shellfish growing waters by estuary. Final report

    SciTech Connect

    Broutman, M.A.; Leonard, D.L.

    1986-12-01

    The report is the first in a series of reports that compile information on classified shellfish waters as an indicator of coliform bacteria pollution in the Nation's estuaries. Data for the report have been derived from the 1985 National Shellfish Register. Although the Register has provided consistent data on acreage of classified shellfish waters by state, use of it as a national water-quality indicator has been hindered because of the influence of factors other than water quality on classification. The report improves the 1985 Register data by: (1) reorganizing data into 92 estuaries on the East, West, and Gulf coasts that comprise the National Estuarine Inventory, and (2) correcting data for areas that were classified for reasons other than water quality.

  17. Research on classified real-time flood forecasting framework based on K-means cluster and rough set.

    PubMed

    Xu, Wei; Peng, Yong

    2015-01-01

    This research presents a new classified real-time flood forecasting framework. In this framework, historical floods are classified by a K-means cluster according to the spatial and temporal distribution of precipitation, the time variance of precipitation intensity and other hydrological factors. Based on the classified results, a rough set is used to extract the identification rules for real-time flood forecasting. Then, the parameters of different categories within the conceptual hydrological model are calibrated using a genetic algorithm. In real-time forecasting, the corresponding category of parameters is selected for flood forecasting according to the obtained flood information. This research tests the new classified framework on Guanyinge Reservoir and compares the framework with the traditional flood forecasting method. It finds that the performance of the new classified framework is significantly better in terms of accuracy. Furthermore, the framework can be considered in a catchment with fewer historical floods. PMID:26442493
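
    The clustering step can be sketched as follows; the flood descriptors and the number of categories are invented, and the per-category parameter calibration by a genetic algorithm is only indicated by a comment.

```python
# Minimal sketch of the first step: cluster historical flood events by
# precipitation descriptors with K-means so that each category can later
# carry its own calibrated model parameters. Feature values are invented.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(10)
# Rows: historical floods; columns: e.g. total rainfall, peak intensity,
# storm duration, spatial concentration index.
floods = rng.normal(size=(60, 4))

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(floods)
print("category sizes:", np.bincount(km.labels_))
# A genetic algorithm would then calibrate one parameter set per category
# (not shown here).

# At forecast time, the incoming event picks the parameter set of its category.
new_event = rng.normal(size=(1, 4))
print("use parameter set:", km.predict(new_event)[0])
```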

  18. Research on classified real-time flood forecasting framework based on K-means cluster and rough set.

    PubMed

    Xu, Wei; Peng, Yong

    2015-01-01

    This research presents a new classified real-time flood forecasting framework. In this framework, historical floods are classified by a K-means cluster according to the spatial and temporal distribution of precipitation, the time variance of precipitation intensity and other hydrological factors. Based on the classified results, a rough set is used to extract the identification rules for real-time flood forecasting. Then, the parameters of different categories within the conceptual hydrological model are calibrated using a genetic algorithm. In real-time forecasting, the corresponding category of parameters is selected for flood forecasting according to the obtained flood information. This research tests the new classified framework on Guanyinge Reservoir and compares the framework with the traditional flood forecasting method. It finds that the performance of the new classified framework is significantly better in terms of accuracy. Furthermore, the framework can be considered in a catchment with fewer historical floods.

  19. A decision support system using combined-classifier for high-speed data stream in smart grid

    NASA Astrophysics Data System (ADS)

    Yang, Hang; Li, Peng; He, Zhian; Guo, Xiaobin; Fong, Simon; Chen, Huajun

    2016-11-01

    Large volumes of high-speed streaming data are generated continuously by big power grids. In order to detect and avoid power grid failures, decision support systems (DSSs) are commonly adopted in power grid enterprises. Among all the decision-making algorithms, the incremental decision tree is the most widely used. In this paper, we propose a combined classifier that is a composite of a cache-based classifier (CBC) and a main tree classifier (MTC). We integrate this classifier into a stream processing engine on top of the DSS such that high-speed streaming data can be transformed into operational intelligence efficiently. Experimental results show that our proposed classifier returns more accurate answers than other existing ones.
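
    A very loose sketch of the cache-then-main-tree dispatch idea is given below; the cache key, the tree model, and the data are invented for illustration and do not correspond to the authors' CBC/MTC design.

```python
# Loose sketch of a combined classifier: answer from a small cache of recently
# seen cases when possible, otherwise fall back to a main decision tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(11)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

main_tree = DecisionTreeClassifier(max_depth=5).fit(X, y)
cache = {}                                   # quantized feature vector -> label

def classify(x):
    key = tuple(np.round(x, 1))              # coarse key for near-exact repeats
    if key in cache:
        return cache[key]                    # fast path: cache-based classifier
    label = int(main_tree.predict([x])[0])   # slow path: main tree classifier
    cache[key] = label
    return label

print(classify(X[0]), classify(X[0]))        # second call is served from cache
```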

  20. Comparison of two classifier training methodologies for underwater mine detection/classification

    NASA Astrophysics Data System (ADS)

    Bello, Martin G.

    2001-10-01

    We describe here the current form of Alphatech's image processing and neural network based algorithms for detection and classification of mines in side-scan sonar imagery, and results obtained from their application to three distinct databases. In particular, we contrast results obtained using our currently employed 'baseline' multilayer perceptron classifier training approach with those obtained using a state-of-the-art commercial neural network package, NeuralSIM, developed by Neuralware, Inc.