Science.gov

Sample records for algorithm correctly classified

  1. Self-correcting 100-font classifier

    NASA Astrophysics Data System (ADS)

    Baird, Henry S.; Nagy, George

    1994-03-01

    We have developed a practical scheme to take advantage of local typeface homogeneity to improve the accuracy of a character classifier. Given a polyfont classifier which is capable of recognizing any of 100 typefaces moderately well, our method allows it to specialize itself automatically to the single -- but otherwise unknown -- typeface it is reading. Essentially, the classifier retrains itself after examining some of the images, guided at first by the preset classification boundaries of the given classifier, and later by the behavior of the retrained classifier. Experimental trials on 6.4 M pseudo-randomly distorted images show that the method improves on 95 of the 100 typefaces. It reduces the error rate by a factor of 2.5, averaged over 100 typefaces, when applied to an alphabet of 80 ASCII characters printed at ten point and digitized at 300 pixels/inch. This self-correcting method complements, and does not hinder, other methods for improving OCR accuracy, such as linguistic contextual analysis.
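
    The core self-correction loop amounts to a generic self-training procedure. The sketch below is an illustrative reading of that idea, not the authors' OCR system; the classifier choice, the confidence threshold, and the feature matrix X_page are assumptions standing in for character-image features.

```python
# Minimal self-training sketch (assumed stand-in, not the authors' OCR system).
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_correct(polyfont_clf, X_page, confidence=0.9, rounds=3):
    """Retrain a copy of the classifier on its own confident labels for one page."""
    specialized = polyfont_clf
    for _ in range(rounds):
        proba = specialized.predict_proba(X_page)
        keep = proba.max(axis=1) >= confidence          # trust only confident characters
        labels = specialized.classes_[proba.argmax(axis=1)]
        if keep.sum() < 2 or np.unique(labels[keep]).size < 2:
            break                                       # not enough confident, varied samples
        specialized = LogisticRegression(max_iter=1000).fit(X_page[keep], labels[keep])
    return specialized
```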

  2. Temperature Corrected Bootstrap Algorithm

    NASA Technical Reports Server (NTRS)

    Comiso, Joey C.; Zwally, H. Jay

    1997-01-01

    A temperature-corrected Bootstrap algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using the emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived with the current Bootstrap algorithm but using brightness temperatures from the 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice temperatures, which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in areas where the ice temperature is expected to vary considerably, such as near the continental areas of the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
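
    The arithmetic behind the correction can be sketched with illustrative numbers. Everything below (emissivities, ice concentration, brightness temperatures) is assumed for demonstration; only the mixing and conversion steps follow the procedure described above.

```python
# Schematic arithmetic only; all numeric values are assumed for illustration.
e_ice_6, e_water_6 = 0.92, 0.60   # assumed 6 GHz emissivities of ice and open water
C_initial = 0.80                  # ice concentration from the current Bootstrap algorithm
T_B6, T_B18, T_B37 = 230.0, 225.0, 220.0   # illustrative brightness temperatures (K)

e_eff = C_initial * e_ice_6 + (1 - C_initial) * e_water_6   # mixing formulation
T_surface = T_B6 / e_eff          # effective surface (ice) temperature
e_18 = T_B18 / T_surface          # temperature-corrected 18 GHz emissivity
e_37 = T_B37 / T_surface          # temperature-corrected 37 GHz emissivity
print(f"T_surface = {T_surface:.1f} K, e_18 = {e_18:.3f}, e_37 = {e_37:.3f}")
```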

  3. Learning algorithms for stack filter classifiers

    SciTech Connect

    Porter, Reid B; Hush, Don; Zimmer, Beate G

    2009-01-01

    Stack Filters define a large class of increasing filters that are widely used in image and signal processing. The motivations for using an increasing filter instead of an unconstrained filter have been described as: (1) fast and efficient implementation, (2) the relationship to mathematical morphology, and (3) more precise estimation with finite sample data. This last motivation is related to methods developed in machine learning, and the relationship was explored in an earlier paper. In this paper we investigate this relationship by applying Stack Filters directly to classification problems. This provides a new perspective on how monotonicity constraints can help control estimation and approximation errors, and also suggests several new learning algorithms for Boolean function classifiers when they are applied to real-valued inputs.

  4. Error minimizing algorithms for nearest neighbor classifiers

    SciTech Connect

    Porter, Reid B; Hush, Don; Zimmer, G. Beate

    2011-01-03

    Stack Filters define a large class of discrete nonlinear filters first introduced in image and signal processing for noise removal. In recent years we have suggested their application to classification problems, and investigated their relationship to other types of discrete classifiers such as Decision Trees. In this paper we focus on a continuous domain version of Stack Filter Classifiers which we call Ordered Hypothesis Machines (OHM), and investigate their relationship to Nearest Neighbor classifiers. We show that OHM classifiers provide a novel framework in which to train Nearest Neighbor type classifiers by minimizing empirical-error-based loss functions. We use the framework to investigate a new cost-sensitive loss function that allows us to train a Nearest Neighbor type classifier for low false alarm rate applications. We report results on both synthetic data and real-world image data.

  5. Service Discovery Framework Supported by EM Algorithm and Bayesian Classifier

    NASA Astrophysics Data System (ADS)

    Peng, Yanbin

    Service-oriented computing has become a mainstream research field, and machine learning is a promising AI technology that can enhance the performance of traditional algorithms. Aiming to solve the service discovery problem, this paper incorporates a Bayesian classifier into a web service discovery framework to improve service querying speed. In this framework, the services in the service library become the training set of the Bayesian classifier, and a service query becomes a testing sample. The service matchmaking process can then be executed within the related service class, which contains fewer services and thus saves time. Because the classes of the services in the training set are unknown, the EM algorithm is used to estimate the prior probabilities and likelihood functions. Experimental results show that the method supported by the EM algorithm and Bayesian classifier outperforms other methods in time complexity.
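
    As a rough sketch of the idea, the unlabeled services can be grouped with an EM-fitted mixture model (standing in for the EM-estimated Bayesian classifier), and a query is then matched only against services in its predicted class. The feature vectors and component count below are placeholders.

```python
# Rough sketch: EM-fitted mixture model as a stand-in for the EM-estimated
# Bayesian classifier; matchmaking is restricted to the query's predicted class.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
service_vectors = rng.normal(size=(500, 16))    # hypothetical service feature vectors
gm = GaussianMixture(n_components=5, random_state=0).fit(service_vectors)  # EM step
labels = gm.predict(service_vectors)            # class of each service in the library

query = rng.normal(size=16)                     # hypothetical service query
cluster = gm.predict(query.reshape(1, -1))[0]
candidates = np.flatnonzero(labels == cluster)  # only these services are matched in detail
print(f"matching against {candidates.size} of {len(service_vectors)} services")
```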

  6. Improve online boosting algorithm from self-learning cascade classifier

    NASA Astrophysics Data System (ADS)

    Luo, Dapeng; Sang, Nong; Huang, Rui; Tong, Xiaojun

    2010-04-01

    Online boosting algorithms have been used in many vision-related applications, such as object detection. However, obtaining good detection results requires combining a large number of weak classifiers into a strong classifier, and those weak classifiers must be updated and improved online, so training and detection speed are inevitably reduced. This paper proposes a novel online boosting based learning method, called the self-learning cascade classifier, in which a cascade decision strategy is integrated with the online boosting procedure. The resulting system contains a sufficient number of weak classifiers while keeping the computation cost low. The cascade structure is learned and updated online, and its complexity can be increased adaptively when the detection task is more difficult. Moreover, most new samples are labeled automatically by tracking, which greatly reduces the labeling effort. We present experimental results that demonstrate the efficiency and high detection rate of the method.

  7. Algorithm for classifying multiple targets using acoustic signatures

    NASA Astrophysics Data System (ADS)

    Damarla, Thyagaraju; Pham, Tien; Lake, Douglas

    2004-08-01

    In this paper we discuss an algorithm for classification and identification of multiple targets using acoustic signatures. We use a Multi-Variate Gaussian (MVG) classifier for classifying individual targets based on the relative amplitudes of the extracted harmonic set of frequencies. The classifier is trained on high signal-to-noise ratio data for individual targets. In order to classify and further identify each target in a multi-target environment (e.g., a convoy), we first perform bearing tracking and data association. Once the bearings of the targets present are established, we next beamform in the direction of each individual target to spatially isolate it from the other targets (or interferers). Then, we further process and extract a harmonic feature set from each beamformed output. Finally, we apply the MVG classifier on each harmonic feature set for vehicle classification and identification. We present classification/identification results for convoys of three to five ground vehicles.
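
    A minimal sketch of the MVG classification step is given below: one Gaussian is fitted per target class over harmonic-amplitude feature vectors, and a new vector is assigned to the class with the highest log-likelihood. Bearing tracking, beamforming, and harmonic extraction are out of scope; the feature arrays are hypothetical placeholders.

```python
# Minimal MVG classifier over harmonic-amplitude features (placeholder data).
import numpy as np
from scipy.stats import multivariate_normal

def fit_mvg(features_by_class):
    """Fit one Gaussian (mean, covariance) per target class."""
    models = {}
    for label, X in features_by_class.items():
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularized covariance
        models[label] = (X.mean(axis=0), cov)
    return models

def classify(models, x):
    """Assign the harmonic feature vector x to the most likely class."""
    scores = {label: multivariate_normal.logpdf(x, mean=m, cov=c)
              for label, (m, c) in models.items()}
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)                  # hypothetical 8-harmonic feature vectors
train = {"vehicle_A": rng.normal(0.0, 1.0, (50, 8)),
         "vehicle_B": rng.normal(0.5, 1.0, (50, 8))}
models = fit_mvg(train)
print(classify(models, rng.normal(0.5, 1.0, 8)))
```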

  8. Classifying scaled and rotated textures using a region-matched algorithm

    NASA Astrophysics Data System (ADS)

    Yao, Chih-Chia; Chen, Yu-Tin

    2012-07-01

    A novel method to correct texture variations resulting from scale magnification, narrowing caused by cropping into the original size, or spatial rotation is discussed. The variations usually occur in images captured by a camera using different focal lengths. A representative region-matched algorithm is developed to improve texture classification after magnification, narrowing, and spatial rotation. By using a minimum ellipse, the representative region-matched algorithm encloses a specific region extracted by the J-image segmentation algorithm. After translating the coordinates, the equation of an ellipse in the rotated texture can be formulated as that of an ellipse in the original texture. The rotation-invariant property of the ellipse provides an efficient means of identifying the rotated texture. Additionally, the scale-variant representative region can be classified by adopting scale-invariant parameters. Moreover, a hybrid texture filter is developed. In the hybrid texture filter, the scheme of texture feature extraction includes the Gabor wavelet and the representative region-matched algorithm. Support vector machines are introduced as the classifier. The proposed hybrid texture filter performs excellently in classifying both stochastic and structural textures. Furthermore, experimental results demonstrate that the proposed algorithm outperforms conventional design algorithms.

  9. Improved piecewise orthogonal signal correction algorithm.

    PubMed

    Feudale, Robert N; Tan, Huwei; Brown, Steven D

    2003-10-01

    Piecewise orthogonal signal correction (POSC), an algorithm that performs local orthogonal filtering, was recently developed to process spectral signals. POSC was shown to improve partial least-squares regression models over models built with conventional OSC. However, rank deficiencies within the POSC algorithm lead to artifacts in the filtered spectra when removing two or more POSC components. Thus, an updated OSC algorithm for use with the piecewise procedure is reported. It will be demonstrated how the mathematics of this updated OSC algorithm were derived from the previous version and why some OSC versions may not be as appropriate for use with the piecewise modeling procedure as the algorithm reported here. PMID:14639746

  10. Classifying Volcanic Activity Using an Empirical Decision Making Algorithm

    NASA Astrophysics Data System (ADS)

    Junek, W. N.; Jones, W. L.; Woods, M. T.

    2012-12-01

    Detection and classification of developing volcanic activity is vital to eruption forecasting. Timely information regarding an impending eruption would aid civil authorities in determining the proper response to a developing crisis. In this presentation, volcanic activity is characterized using an event tree classifier and a suite of empirical statistical models derived through logistic regression. Forecasts are reported in terms of the United States Geological Survey (USGS) volcano alert level system. The algorithm employs multidisciplinary data (e.g., seismic, GPS, InSAR) acquired by various volcano monitoring systems and source modeling information to forecast the likelihood that an eruption, with a volcanic explosivity index (VEI) > 1, will occur within a quantitatively constrained area. Logistic models are constructed from a sparse and geographically diverse dataset assembled from a collection of historic volcanic unrest episodes. Bootstrapping techniques are applied to the training data to allow for the estimation of robust logistic model coefficients. Cross validation produced a series of receiver operating characteristic (ROC) curves with areas ranging between 0.78 and 0.81, which indicates the algorithm has good predictive capabilities. The ROC curves also allowed for the determination of a false positive rate and optimum detection for each stage of the algorithm. Forecasts for historic volcanic unrest episodes in North America and Iceland were computed and are consistent with the actual outcome of the events.
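
    The statistical core of the approach, logistic models fit on bootstrap resamples and scored with cross-validated ROC curves, can be sketched as follows. The unrest dataset is not reproduced here, so random placeholder predictors and labels stand in for the multidisciplinary monitoring data.

```python
# Sketch of bootstrapped logistic models and cross-validated ROC scoring.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 4))     # placeholder seismic/GPS/InSAR-derived predictors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=120) > 0).astype(int)  # 1 = eruption (VEI > 1)

# Bootstrap the training data to obtain robust coefficient estimates
coefs = [LogisticRegression(max_iter=1000).fit(X[idx], y[idx]).coef_[0]
         for idx in (rng.integers(0, len(y), len(y)) for _ in range(200))]
print("bootstrap coefficient means:", np.mean(coefs, axis=0))

# Cross-validated ROC area (the study reports areas of roughly 0.78 to 0.81)
proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=5, method="predict_proba")[:, 1]
print("cross-validated AUC:", roc_auc_score(y, proba))
```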

  11. Sampling design for classifying contaminant level using annealing search algorithms

    NASA Astrophysics Data System (ADS)

    Christakos, George; Killam, Bart R.

    1993-12-01

    A stochastic method for sampling spatially distributed contaminant level is presented. The purpose of sampling is to partition the contaminated region into zones of high and low pollutant concentration levels. In particular, given an initial set of observations of a contaminant within a site, it is desired to find a set of additional sampling locations in a way that takes into consideration the spatial variability characteristics of the site and optimizes certain objective functions emerging from the physical, regulatory and monetary considerations of the specific site cleanup process. Since the interest is in classifying the domain into zones above and below a pollutant threshold level, a natural criterion is the cost of misclassification. The resulting objective function is the expected value of a spatial loss function associated with sampling. Stochastic expectation involves the joint probability distribution of the pollutant level and its estimate, where the latter is calculated by means of spatial estimation techniques. Actual computation requires the discretization of the contaminated domain. As a consequence, any reasonably sized problem results in combinatorics precluding an exhaustive search. The use of an annealing algorithm, although suboptimal, can find a good set of future sampling locations quickly and efficiently. In order to obtain insight about the parameters and the computational requirements of the method, an example is discussed in detail. The implementation of spatial sampling design in practice will provide the model inputs necessary for waste site remediation, groundwater management, and environmental decision making.

  12. Spectral areas and ratios classifier algorithm for pancreatic tissue classification using optical spectroscopy

    NASA Astrophysics Data System (ADS)

    Chandra, Malavika; Scheiman, James; Simeone, Diane; McKenna, Barbara; Purdy, Julianne; Mycek, Mary-Ann

    2010-01-01

    Pancreatic adenocarcinoma is one of the leading causes of cancer death, in part because of the inability of current diagnostic methods to reliably detect early-stage disease. We present the first assessment of the diagnostic accuracy of algorithms developed for pancreatic tissue classification using data from fiber optic probe-based bimodal optical spectroscopy, a real-time approach that would be compatible with minimally invasive diagnostic procedures for early cancer detection in the pancreas. A total of 96 fluorescence and 96 reflectance spectra are considered from 50 freshly excised tissue sites, including human pancreatic adenocarcinoma, chronic pancreatitis (inflammation), and normal tissues, from nine patients. Classification algorithms using linear discriminant analysis are developed to distinguish among tissues, and leave-one-out cross-validation is employed to assess the classifiers' performance. The spectral areas and ratios classifier (SpARC) algorithm employs a combination of reflectance and fluorescence data and has the best performance, with sensitivity, specificity, negative predictive value, and positive predictive value for correctly identifying adenocarcinoma being 85, 89, 92, and 80%, respectively.
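
    The evaluation loop described above (linear discriminant analysis with leave-one-out cross-validation, scored by sensitivity and specificity) can be sketched in a few lines. The spectral area and ratio features are not reproduced; the arrays below are placeholders.

```python
# Sketch of LDA with leave-one-out cross-validation on placeholder features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(2)
X = rng.normal(size=(96, 6))      # placeholder spectral areas and ratios per tissue site
y = rng.integers(0, 2, size=96)   # 1 = adenocarcinoma, 0 = other tissue (placeholder)

pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```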

  13. TIRS stray light correction: algorithms and performance

    NASA Astrophysics Data System (ADS)

    Gerace, Aaron; Montanaro, Matthew; Beckmann, Tim; Tyrrell, Kaitlin; Cozzo, Alexandra; Carney, Trevor; Ngan, Vicki

    2015-09-01

    The Thermal Infrared Sensor (TIRS) onboard Landsat 8 was tasked with continuing thermal band measurements of the Earth as part of the Landsat program. From first light in early 2013, there were obvious indications that stray light was contaminating the thermal image data collected from the instrument. Traditional calibration techniques did not perform adequately as non-uniform banding was evident in the corrected data and error in absolute estimates of temperature over trusted buoys sites varied seasonally and, in worst cases, exceeded 9 K error. The development of an operational technique to remove the effects of the stray light has become a high priority to enhance the utility of the TIRS data. This paper introduces the current algorithm being tested by Landsat's calibration and validation team to remove stray light from TIRS image data. The integration of the algorithm into the EROS test system is discussed with strategies for operationalizing the method emphasized. Techniques for assessing the methodologies used are presented and potential refinements to the algorithm are suggested. Initial results indicate that the proposed algorithm significantly removes stray light artifacts from the image data. Specifically, visual and quantitative evidence suggests that the algorithm practically eliminates banding in the image data. Additionally, the seasonal variation in absolute errors is flattened and, in the worst case, errors of over 9 K are reduced to within 2 K. Future work focuses on refining the algorithm based on these findings and applying traditional calibration techniques to enhance the final image product.

  14. An iterative subaperture position correction algorithm

    NASA Astrophysics Data System (ADS)

    Lo, Weng-Hou; Lin, Po-Chih; Chen, Yi-Chun

    2015-08-01

    Subaperture stitching interferometry is a technique suitable for testing high numerical-aperture optics, large-diameter spherical lenses and aspheric optics. In the stitching process, each subaperture has to be placed at its correct position in a global coordinate system, and the positioning precision affects the accuracy of the stitching result. However, mechanical limitations in the alignment process as well as vibrations during the measurement induce inevitable subaperture position uncertainties. In our previous study, a rotational scanning subaperture stitching interferometer was constructed. This paper provides an iterative algorithm to correct the subaperture positions without altering the interferometer configuration. Each subaperture is first placed at its geometric position, estimated according to the F-number of the reference lens, the measurement zenithal angle and the number of pixels along the width of the subaperture. By using the concept of differentiation, a shift compensator along the radial direction of the global coordinate system is added to the stitching algorithm. The algorithm includes two kinds of compensators: one for the geometric null, with four compensators of piston, two directional tilts and defocus, and the other for the position correction, with the shift compensator. These compensators are computed iteratively to minimize the phase differences in the overlapped regions of subapertures in a least-squares sense. The simulation results demonstrate that the proposed method achieves a position accuracy of 0.001 pixels for both the single-ring and multiple-ring configurations. Experimental verifications with single-ring and multiple-ring data also show the effectiveness of the algorithm.

  15. Atmospheric Correction Algorithm for Hyperspectral Imagery

    SciTech Connect

    R. J. Pollina

    1999-09-01

    In December 1997, the US Department of Energy (DOE) established a Center of Excellence (Hyperspectral-Multispectral Algorithm Research Center, HyMARC) for promoting the research and development of algorithms to exploit spectral imagery. This center is located at the DOE Remote Sensing Laboratory in Las Vegas, Nevada, and is operated for the DOE by Bechtel Nevada. This paper presents the results to date of a research project begun at the center during 1998 to investigate the correction of hyperspectral data for atmospheric aerosols. Results of a project conducted by the Rochester Institute of Technology to define, implement, and test procedures for absolute calibration and correction of hyperspectral data to absolute units of high spectral resolution imagery will be presented. Hybrid techniques for atmospheric correction using image or spectral scene data coupled through radiative propagation models will be specifically addressed. Results of this effort to analyze HYDICE sensor data will be included. Preliminary results based on studying the performance of standard routines, such as Atmospheric Pre-corrected Differential Absorption and Nonlinear Least Squares Spectral Fit, in retrieving reflectance spectra show overall reflectance retrieval errors of approximately one to two reflectance units in the 0.4- to 2.5-micron-wavelength region (outside of the absorption features). These results are based on HYDICE sensor data collected from the Southern Great Plains Atmospheric Radiation Measurement site during overflights conducted in July of 1997. Results of an upgrade made in the model-based atmospheric correction techniques, which take advantage of updates made to the moderate resolution atmospheric transmittance model (MODTRAN 4.0) software, will also be presented. Data will be shown to demonstrate how the reflectance retrieval in the shorter wavelengths of the blue-green region will be improved because of enhanced modeling of multiple scattering effects.

  16. An Evaluation of Information Criteria Use for Correct Cross-Classified Random Effects Model Selection

    ERIC Educational Resources Information Center

    Beretvas, S. Natasha; Murphy, Daniel L.

    2013-01-01

    The authors assessed correct model identification rates of Akaike's information criterion (AIC), corrected criterion (AICC), consistent AIC (CAIC), Hannan and Quinn's information criterion (HQIC), and Bayesian information criterion (BIC) for selecting among cross-classified random effects models. Performance of default values for the 5…

  17. Algorithmic Error Correction of Impedance Measuring Sensors

    PubMed Central

    Starostenko, Oleg; Alarcon-Aquino, Vicente; Hernandez, Wilmar; Sergiyenko, Oleg; Tyrsa, Vira

    2009-01-01

    This paper describes novel design concepts and some advanced techniques proposed for increasing the accuracy of low cost impedance measuring devices without reduction of operational speed. The proposed structural method for algorithmic error correction and iterating correction method provide linearization of transfer functions of the measuring sensor and signal conditioning converter, which contribute the principal additive and relative measurement errors. Some measuring systems have been implemented in order to estimate in practice the performance of the proposed methods. Particularly, a measuring system for analysis of C-V, G-V characteristics has been designed and constructed. It has been tested during technological process control of charge-coupled device CCD manufacturing. The obtained results are discussed in order to define a reasonable range of applied methods, their utility, and performance. PMID:22303177

  18. Quantum computations: algorithms and error correction

    NASA Astrophysics Data System (ADS)

    Kitaev, A. Yu

    1997-12-01

    Contents: §0. Introduction. §1. Abelian problem on the stabilizer. §2. Classical models of computations. 2.1. Boolean schemes and sequences of operations; 2.2. Reversible computations. §3. Quantum formalism. 3.1. Basic notions and notation; 3.2. Transformations of mixed states; 3.3. Accuracy. §4. Quantum models of computations. 4.1. Definitions and basic properties; 4.2. Construction of various operators from the elements of a basis; 4.3. Generalized quantum control and universal schemes. §5. Measurement operators. §6. Polynomial quantum algorithm for the stabilizer problem. §7. Computations with perturbations: the choice of a model. §8. Quantum codes (definitions and general properties). 8.1. Basic notions and ideas; 8.2. One-to-one codes; 8.3. Many-to-one codes. §9. Symplectic (additive) codes. 9.1. Algebraic preparation; 9.2. The basic construction; 9.3. Error correction procedure; 9.4. Torus codes. §10. Error correction in the computation process: general principles. 10.1. Definitions and results; 10.2. Proofs. §11. Error correction: concrete procedures. 11.1. The symplecto-classical case; 11.2. The case of a complete basis. Bibliography.

  19. Combining classifiers generated by multi-gene genetic programming for protein fold recognition using genetic algorithm.

    PubMed

    Bardsiri, Mahshid Khatibi; Eftekhari, Mahdi; Mousavi, Reza

    2015-01-01

    In this study the problem of protein fold recognition, which is a classification task, is solved via a hybrid of evolutionary algorithms, namely multi-gene Genetic Programming (GP) and Genetic Algorithm (GA). Our proposed method consists of two main stages and is performed on three datasets taken from the literature. Each dataset contains different feature groups and classes. In the first step, multi-gene GP is used for producing binary classifiers based on various feature groups for each class. Then, different classifiers obtained for each class are combined via weighted voting so that the weights are determined through GA. At the end of the first step, there is a separate binary classifier for each class. In the second stage, the obtained binary classifiers are combined via GA weighting in order to generate the overall classifier. The final obtained classifier is superior to the previous works found in the literature in terms of classification accuracy. PMID:25786796

  20. Comparison of Genetic Algorithm, Particle Swarm Optimization and Biogeography-based Optimization for Feature Selection to Classify Clusters of Microcalcifications

    NASA Astrophysics Data System (ADS)

    Khehra, Baljit Singh; Pharwaha, Amar Partap Singh

    2016-06-01

    Ductal carcinoma in situ (DCIS) is one type of breast cancer. Clusters of microcalcifications (MCCs) are symptoms of DCIS that are recognized by mammography. Selection of a robust feature vector is the process of selecting an optimal subset of features from a large number of available features in a given problem domain, after feature extraction and before any classification scheme. Feature selection reduces the feature space, which improves the performance of the classifier and decreases the computational burden imposed by using many features. Selecting an optimal subset of features from a large number of available features is a difficult search problem: for n features, the total number of possible subsets of features is 2^n, so the problem belongs to the category of NP-hard problems. In this paper, an attempt is made to find the optimal subset of MCC features from all possible subsets of features using a genetic algorithm (GA), particle swarm optimization (PSO) and biogeography-based optimization (BBO). For simulation, a total of 380 benign and malignant MCC samples have been selected from mammogram images of the DDSM database. A total of 50 features extracted from benign and malignant MCC samples are used in this study. In these algorithms, the fitness function is the correct classification rate of the classifier; a support vector machine is used as the classifier. From the experimental results, it is observed that the performance of the PSO-based and BBO-based algorithms in selecting an optimal subset of features for classifying MCCs as benign or malignant is better than that of the GA-based algorithm.
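
    An illustrative wrapper-selection sketch follows, not the authors' implementation: candidate solutions are binary masks over the 50 features, and the fitness of a mask is the cross-validated correct classification rate of an SVM restricted to those features. The data are random placeholders for the DDSM-derived samples.

```python
# Illustrative GA wrapper feature selection with an SVM fitness (placeholder data).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(380, 50))      # 380 MCC samples, 50 extracted features
y = rng.integers(0, 2, size=380)    # benign (0) vs malignant (1), placeholder labels

def fitness(mask):
    """Correct classification rate of an SVM restricted to the masked features."""
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, 50)).astype(bool)
for _ in range(10):                                    # a few GA generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]            # keep the 10 fittest masks
    pairs = parents[rng.integers(0, 10, size=(20, 2))]
    cuts = rng.integers(1, 50, size=20)                # single-point crossover
    children = np.array([np.concatenate([a[:c], b[c:]]) for (a, b), c in zip(pairs, cuts)])
    children ^= rng.random(children.shape) < 0.02      # bit-flip mutation
    pop = children

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```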

  1. Combining Bayesian classifiers and estimation of distribution algorithms for optimization in continuous domains

    NASA Astrophysics Data System (ADS)

    Miquelez, Teresa; Bengoetxea, Endika; Mendiburu, Alexander; Larranaga, Pedro

    2007-12-01

    This paper introduces an evolutionary computation method that applies Bayesian classifiers to optimization problems. This approach is based on Estimation of Distribution Algorithms (EDAs), in which Bayesian or Gaussian networks are applied to the evolution of a population of individuals (i.e. potential solutions to the optimization problem) in order to improve the quality of the individuals of the next generation. Our new approach, called Evolutionary Bayesian Classifier-based Optimization Algorithm (EBCOA), employs Bayesian classifiers instead of Bayesian or Gaussian networks in order to evolve individuals to a fitter population. In brief, EBCOAs are characterized by applying Bayesian classification techniques - usually applied to supervised classification problems - to optimization in continuous domains. We propose and review in this paper different Bayesian classifiers for implementing our EBCOA method, focusing particularly on EBCOAs applying naive Bayes, semi-naive Bayes, and tree-augmented naive Bayes classifiers. This work presents a deep study of the behavior of these algorithms on classical optimization problems in continuous domains. The different parameters used for tuning the performance of the algorithms are discussed, and a comprehensive overview of their influence is provided. We also present experimental results to compare this new method with other state-of-the-art approaches of the evolutionary computation field for continuous domains, such as Evolutionary Strategies (ES) and Estimation of Distribution Algorithms (EDAs).

  2. GACEM: Genetic Algorithm Based Classifier Ensemble in a Multi-sensor System

    PubMed Central

    Xu, Rongwu; He, Lin

    2008-01-01

    Multi-sensor systems (MSS) have been increasingly applied in pattern classification while searching for the optimal classification framework is still an open problem. The development of the classifier ensemble seems to provide a promising solution. The classifier ensemble is a learning paradigm where many classifiers are jointly used to solve a problem, which has been proven an effective method for enhancing the classification ability. In this paper, by introducing the concept of Meta-feature (MF) and Trans-function (TF) for describing the relationship between the nature and the measurement of the observed phenomenon, classification in a multi-sensor system can be unified in the classifier ensemble framework. Then an approach called Genetic Algorithm based Classifier Ensemble in Multi-sensor system (GACEM) is presented, where a genetic algorithm is utilized for optimization of both the selection of features subset and the decision combination simultaneously. GACEM trains a number of classifiers based on different combinations of feature vectors at first and then selects the classifiers whose weight is higher than the pre-set threshold to make up the ensemble. An empirical study shows that, compared with the conventional feature-level voting and decision-level voting, not only can GACEM achieve better and more robust performance, but also simplify the system markedly.

  3. Character recognition using min-max classifiers designed via an LMS algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Ping-Fai; Maragos, Petros

    1992-11-01

    In this paper we propose a Least Mean Square (LMS) algorithm for the practical training of the class of min-max classifiers. These are lattice-theoretic generalizations of Boolean functions and are also related to feed-forward neural networks and morphological signal operators. We applied the LMS algorithm to the problem of handwritten character recognition. The database consists of segmented and cleaned digits. Features that were extracted from the digits include Fourier descriptors and morphological shape-size histograms. Experimental results using the LMS algorithm for handwritten character recognition are promising. In our initial experimentation, we applied the min-max classifier to binary classification of '0' and '1' digits. By preprocessing the feature vectors, we were able to achieve an error rate of 1.75% for a training set of size 1200 (600 of each digit); and an error rate of 4.5% on a test set of size 400 (200 of each). These figures are comparable to those obtained by 2-layer neural nets trained using back propagation. The major advantage of min-max classifiers compared to neural networks is their simplicity and the faster convergence of their training algorithm.

  4. Stochastic Formal Correctness of Numerical Algorithms

    NASA Technical Reports Server (NTRS)

    Daumas, Marc; Lester, David; Martin-Dorel, Erik; Truffert, Annick

    2009-01-01

    We provide a framework to bound the probability that accumulated errors were never above a given threshold in numerical algorithms. Such algorithms are used, for example, in aircraft and nuclear power plants. This report contains simple formulas based on Levy's and Markov's inequalities, and it presents a formal theory of random variables with a special focus on producing concrete results. We selected four very common applications that fit in our framework and cover the common practices of systems that evolve for a long time. We compute the number of bits that remain continuously significant in the first two applications with a probability of failure around one out of a billion, where worst-case analysis considers that no significant bit remains. We use PVS, as such formal tools force explicit statement of all hypotheses and prevent incorrect uses of theorems.

  5. A microwave radiometer weather-correcting sea ice algorithm

    NASA Technical Reports Server (NTRS)

    Walters, J. M.; Ruf, C.; Swift, C. T.

    1987-01-01

    A new algorithm for estimating the proportions of the multiyear and first-year sea ice types under variable atmospheric and sea surface conditions is presented, which uses all six channels of the SMMR. The algorithm is specifically tuned to derive sea ice parameters while accepting error in the auxiliary parameters of surface temperature, ocean surface wind speed, atmospheric water vapor, and cloud liquid water content. Not only does the algorithm naturally correct for changes in these weather conditions, but it retrieves sea ice parameters to the extent that gross errors in atmospheric conditions propagate only small errors into the sea ice retrievals. A preliminary evaluation indicates that the weather-correcting algorithm provides a better data product than the 'UMass-AES' algorithm, whose quality has been cross checked with independent surface observations. The algorithm performs best when the sea ice concentration is less than 20 percent.

  6. Bias correction for selecting the minimal-error classifier from many machine learning models

    PubMed Central

    Ding, Ying; Tang, Shaowu; Liao, Serena G.; Jia, Jia; Oesterreich, Steffi; Lin, Yan; Tseng, George C.

    2014-01-01

    Motivation: Supervised machine learning is commonly applied in genomic research to construct a classifier from the training data that is generalizable to predict independent testing data. When test datasets are not available, cross-validation is commonly used to estimate the error rate. Many machine learning methods are available, and it is well known that no universally best method exists in general. It has been a common practice to apply many machine learning methods and report the method that produces the smallest cross-validation error rate. Theoretically, such a procedure produces a selection bias. Consequently, many clinical studies with moderate sample sizes (e.g. n = 30–60) risk reporting a falsely small cross-validation error rate that could not be validated later in independent cohorts. Results: In this article, we illustrated the probabilistic framework of the problem and explored the statistical and asymptotic properties. We proposed a new bias correction method based on learning curve fitting by inverse power law (IPL) and compared it with three existing methods: nested cross-validation, weighted mean correction and Tibshirani-Tibshirani procedure. All methods were compared in simulation datasets, five moderate size real datasets and two large breast cancer datasets. The result showed that IPL outperforms the other methods in bias correction with smaller variance, and it has an additional advantage to extrapolate error estimates for larger sample sizes, a practical feature to recommend whether more samples should be recruited to improve the classifier and accuracy. An R package ‘MLbias’ and all source files are publicly available. Availability and implementation: tsenglab.biostat.pitt.edu/software.htm. Contact: ctseng@pitt.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25086004
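
    The learning-curve idea behind the IPL correction can be sketched directly: cross-validation error rates measured at several training-set sizes are fitted to an inverse power law, err(n) = a * n^(-b) + c, which can then be extrapolated to larger n. The measured points below are hypothetical.

```python
# Sketch of inverse-power-law learning-curve fitting with hypothetical points.
import numpy as np
from scipy.optimize import curve_fit

def ipl(n, a, b, c):
    """Inverse power law: error(n) = a * n**(-b) + c."""
    return a * np.power(n, -b) + c

sizes = np.array([20.0, 30.0, 40.0, 50.0, 60.0])     # training-set sizes
errors = np.array([0.32, 0.27, 0.24, 0.22, 0.21])    # observed CV error rates (hypothetical)

params, _ = curve_fit(ipl, sizes, errors, p0=[1.0, 0.5, 0.1], maxfev=10000)
print("fitted (a, b, c):", params)
print("extrapolated error at n = 120:", ipl(120.0, *params))
```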

  7. Genetic algorithm for chromaticity correction in diffraction limited storage rings

    NASA Astrophysics Data System (ADS)

    Ehrlichman, M. P.

    2016-04-01

    A multiobjective genetic algorithm is developed for optimizing nonlinearities in diffraction limited storage rings. This algorithm determines sextupole and octupole strengths for chromaticity correction that deliver optimized dynamic aperture and beam lifetime. The algorithm makes use of dominance constraints to breed desirable properties into the early generations. The momentum aperture is optimized indirectly by constraining the chromatic tune footprint and optimizing the off-energy dynamic aperture. The result is an effective and computationally efficient technique for correcting chromaticity in a storage ring while maintaining optimal dynamic aperture and beam lifetime.

  8. An Efficient Fitness Function in Genetic Algorithm Classifier for Landuse Recognition on Satellite Images

    PubMed Central

    Yang, Yeh-Fen; Su, Tung-Ching; Huang, Kai-Siang

    2014-01-01

    A genetic algorithm (GA) is designed to search for the optimal solution by weeding out the worse gene strings based on a fitness function. GA has demonstrated effectiveness in solving the problems of unsupervised image classification, one of the optimization problems in a large domain. Many indices or hybrid algorithms have been built as the fitness function in a GA classifier to improve the classification accuracy. This paper proposes a new index, DBFCMI, by integrating two common indices, DBI and FCMI, in a GA classifier to improve the accuracy and robustness of classification. For the purpose of testing and verifying DBFCMI, well-known indices such as DBI, FCMI, and PASI are employed as well for comparison. A SPOT-5 satellite image in a partial watershed of Shihmen reservoir is adopted as the examined material for landuse classification. As a result, DBFCMI achieves higher overall accuracy and robustness than the other indices in unsupervised classification. PMID:24701151

  9. A novel algorithm for simplification of complex gene classifiers in cancer

    PubMed Central

    Wilson, Raphael A.; Teng, Ling; Bachmeyer, Karen M.; Bissonnette, Mei Lin Z.; Husain, Aliya N.; Parham, David M.; Triche, Timothy J.; Wing, Michele R.; Gastier-Foster, Julie M.; Barr, Frederic G.; Hawkins, Douglas S.; Anderson, James R.; Skapek, Stephen X.; Volchenboum, Samuel L.

    2013-01-01

    The clinical application of complex molecular classifiers as diagnostic or prognostic tools has been limited by the time and cost needed to apply them to patients. Using an existing fifty-gene expression signature known to separate two molecular subtypes of the pediatric cancer rhabdomyosarcoma, we show that an exhaustive iterative search algorithm can distill this complex classifier down to two or three features with equal discrimination. We validated the two-gene signatures using three separate and distinct data sets, including one that uses degraded RNA extracted from formalin-fixed, paraffin-embedded material. Finally, to demonstrate the generalizability of our algorithm, we applied it to a lung cancer data set to find minimal gene signatures that can distinguish survival. Our approach can easily be generalized and coupled to existing technical platforms to facilitate the discovery of simplified signatures that are ready for routine clinical use. PMID:23913937

  10. Algorithmic scatter correction in dual-energy digital mammography

    SciTech Connect

    Chen, Xi; Mou, Xuanqin; Nishikawa, Robert M.; Lau, Beverly A.; Chan, Suk-tak; Zhang, Lei

    2013-11-15

    Purpose: Small calcifications are often the earliest and the main indicator of breast cancer. Dual-energy digital mammography (DEDM) has been considered as a promising technique to improve the detectability of calcifications since it can be used to suppress the contrast between adipose and glandular tissues of the breast. X-ray scatter leads to erroneous calculations of the DEDM image. Although the pinhole-array interpolation method can estimate scattered radiations, it requires extra exposures to measure the scatter and apply the correction. The purpose of this work is to design an algorithmic method for scatter correction in DEDM without extra exposures.Methods: In this paper, a scatter correction method for DEDM was developed based on the knowledge that scattered radiation has small spatial variation and that the majority of pixels in a mammogram are noncalcification pixels. The scatter fraction was estimated in the DEDM calculation and the measured scatter fraction was used to remove scatter from the image. The scatter correction method was implemented on a commercial full-field digital mammography system with breast tissue equivalent phantom and calcification phantom. The authors also implemented the pinhole-array interpolation scatter correction method on the system. Phantom results for both methods are presented and discussed. The authors compared the background DE calcification signals and the contrast-to-noise ratio (CNR) of calcifications in the three DE calcification images: image without scatter correction, image with scatter correction using pinhole-array interpolation method, and image with scatter correction using the authors' algorithmic method.Results: The authors' results show that the resultant background DE calcification signal can be reduced. The root-mean-square of background DE calcification signal of 1962 μm with scatter-uncorrected data was reduced to 194 μm after scatter correction using the authors' algorithmic method. The range of

  11. Classifying spatially heterogeneous wetland communities using machine learning algorithms and spectral and textural features.

    PubMed

    Szantoi, Zoltan; Escobedo, Francisco J; Abd-Elrahman, Amr; Pearlstine, Leonard; Dewitt, Bon; Smith, Scot

    2015-05-01

    Mapping of wetlands (marsh vs. swamp vs. upland) is a common remote sensing application. Yet, discriminating between similar freshwater communities such as graminoid/sedge from remotely sensed imagery is more difficult. Most of this activity has been performed using medium to low resolution imagery. There are only a few studies using high spatial resolution imagery and machine learning image classification algorithms for mapping heterogeneous wetland plant communities. This study addresses this void by analyzing whether machine learning classifiers such as decision trees (DT) and artificial neural networks (ANN) can accurately classify graminoid/sedge communities using high resolution aerial imagery and image texture data in the Everglades National Park, Florida. In addition to spectral bands, the normalized difference vegetation index, and first- and second-order texture features derived from the near-infrared band were analyzed. Classifier accuracies were assessed using confusion tables and the calculated kappa coefficients of the resulting maps. The results indicated that an ANN (multilayer perceptron based on backpropagation) algorithm produced a statistically significantly higher accuracy (82.04%) than the DT (QUEST) algorithm (80.48%) or the maximum likelihood (80.56%) classifier (α < 0.05). Findings show that using multiple window sizes provided the best results. First-order texture features also provided computational advantages and results that were not significantly different from those using second-order texture features. PMID:25893753

  12. A burst-correcting algorithm for Reed Solomon codes

    NASA Technical Reports Server (NTRS)

    Chen, J.; Owsley, P.

    1990-01-01

    The Bose, Chaudhuri, and Hocquenghem (BCH) codes form a large class of powerful error-correcting cyclic codes. Among the non-binary BCH codes, the most important subclass is the Reed Solomon (RS) codes. Reed Solomon codes have the ability to correct random and burst errors. It is well known that an (n,k) RS code can correct up to (n-k)/2 random errors. When burst errors are involved, the error correcting ability of the RS code can be increased beyond (n-k)/2. It has previously been shown that RS codes can reliably correct burst errors of length greater than (n-k)/2. In this paper, a new decoding algorithm is given which can also correct a burst error of length greater than (n-k)/2.

  13. Algorithms for Image Analysis and Combination of Pattern Classifiers with Application to Medical Diagnosis

    NASA Astrophysics Data System (ADS)

    Georgiou, Harris

    2009-10-01

    Medical Informatics and the application of modern signal processing in the assistance of the diagnostic process in medical imaging is one of the more recent and active research areas today. This thesis addresses a variety of issues related to the general problem of medical image analysis, specifically in mammography, and presents a series of algorithms and design approaches for all the intermediate levels of a modern system for computer-aided diagnosis (CAD). The diagnostic problem is analyzed with a systematic approach, first defining the imaging characteristics and features that are relevant to probable pathology in mammograms. Next, these features are quantified and fused into new, integrated radiological systems that exhibit embedded digital signal processing, in order to improve the final result and minimize the radiological dose for the patient. In a higher level, special algorithms are designed for detecting and encoding these clinically interesting imaging features, in order to be used as input to advanced pattern classifiers and machine learning models. Finally, these approaches are extended in multi-classifier models under the scope of Game Theory and optimum collective decision, in order to produce efficient solutions for combining classifiers with minimum computational costs for advanced diagnostic systems. The material covered in this thesis is related to a total of 18 published papers, 6 in scientific journals and 12 in international conferences.

  14. Algorithm for Atmospheric Corrections of Aircraft and Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Fraser, Robert S.; Kaufman, Yoram J.; Ferrare, Richard A.; Mattoo, Shana

    1989-01-01

    A simple and fast atmospheric correction algorithm is described which is used to correct radiances of scattered sunlight measured by aircraft and/or satellite above a uniform surface. The atmospheric effect, the basic equations, a description of the computational procedure, and a sensitivity study are discussed. The program is designed to take the measured radiances, view and illumination directions, and the aerosol and gaseous absorption optical thickness to compute the radiance just above the surface, the irradiance on the surface, and surface reflectance. Alternatively, the program will compute the upward radiance at a specific altitude for a given surface reflectance, view and illumination directions, and aerosol and gaseous absorption optical thickness. The algorithm can be applied for any view and illumination directions and any wavelength in the range 0.48 micron to 2.2 micron. The relation between the measured radiance and surface reflectance, which is expressed as a function of atmospheric properties and measurement geometry, is computed using a radiative transfer routine. The results of the computations are presented in a table which forms the basis of the correction algorithm. The algorithm can be used for atmospheric corrections in the presence of a rural aerosol. The sensitivity of the derived surface reflectance to uncertainties in the model and input data is discussed.

  15. Multiangle Implementation of Atmospheric Correction (MAIAC): 2. Aerosol Algorithm

    NASA Technical Reports Server (NTRS)

    Lyapustin, A.; Wang, Y.; Laszlo, I.; Kahn, R.; Korkin, S.; Remer, L.; Levy, R.; Reid, J. S.

    2011-01-01

    An aerosol component of a new multiangle implementation of atmospheric correction (MAIAC) algorithm is presented. MAIAC is a generic algorithm developed for the Moderate Resolution Imaging Spectroradiometer (MODIS), which performs aerosol retrievals and atmospheric correction over both dark vegetated surfaces and bright deserts based on a time series analysis and image-based processing. The MAIAC look-up tables explicitly include surface bidirectional reflectance. The aerosol algorithm derives the spectral regression coefficient (SRC) relating surface bidirectional reflectance in the blue (0.47 micron) and shortwave infrared (2.1 micron) bands; this quantity is prescribed in the MODIS operational Dark Target algorithm based on a parameterized formula. The MAIAC aerosol products include aerosol optical thickness and a fine-mode fraction at resolution of 1 km. This high resolution, required in many applications such as air quality, brings new information about aerosol sources and, potentially, their strength. AERONET validation shows that the MAIAC and MOD04 algorithms have similar accuracy over dark and vegetated surfaces and that MAIAC generally improves accuracy over brighter surfaces due to the SRC retrieval and explicit bidirectional reflectance factor characterization, as demonstrated for several U.S. West Coast AERONET sites. Due to its generic nature and developed angular correction, MAIAC performs aerosol retrievals over bright deserts, as demonstrated for the Solar Village Aerosol Robotic Network (AERONET) site in Saudi Arabia.

  16. EC: an efficient error correction algorithm for short reads

    PubMed Central

    2015-01-01

    Background In highly parallel next-generation sequencing (NGS) techniques millions to billions of short reads are produced from a genomic sequence in a single run. Due to the limitation of the NGS technologies, there could be errors in the reads. The error rate of the reads can be reduced with trimming and by correcting the erroneous bases of the reads. It helps to achieve high quality data and the computational complexity of many biological applications will be greatly reduced if the reads are first corrected. We have developed a novel error correction algorithm called EC and compared it with four other state-of-the-art algorithms using both real and simulated sequencing reads. Results We have done extensive and rigorous experiments that reveal that EC is indeed an effective, scalable, and efficient error correction tool. Real reads that we have employed in our performance evaluation are Illumina-generated short reads of various lengths. Six experimental datasets we have utilized are taken from sequence and read archive (SRA) at NCBI. The simulated reads are obtained by picking substrings from random positions of reference genomes. To introduce errors, some of the bases of the simulated reads are changed to other bases with some probabilities. Conclusions Error correction is a vital problem in biology especially for NGS data. In this paper we present a novel algorithm, called Error Corrector (EC), for correcting substitution errors in biological sequencing reads. We plan to investigate the possibility of employing the techniques introduced in this research paper to handle insertion and deletion errors also. Software availability The implementation is freely available for non-commercial purposes. It can be downloaded from: http://engr.uconn.edu/~rajasek/EC.zip. PMID:26678663

  17. Aerosol Retrieval and Atmospheric Correction Algorithms for EPIC

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Lyapustin, A.; Marshak, A.; Korkin, S.; Herman, J. R.

    2011-12-01

    EPIC is a multi-spectral imager onboard planned Deep Space Climate ObserVatoRy (DSCOVR) designed for observations of the full illuminated disk of the Earth with high temporal and coarse spatial resolution (10 km) from Lagrangian L1 point. During the course of the day, EPIC will view the same Earth surface area in the full range of solar and view zenith angles at equator with fixed scattering angle near the backscattering direction. This talk will describe a new aerosol retrieval/atmospheric correction algorithm developed for EPIC and tested with EPIC Simulator data. This algorithm uses the time series approach and consists of two stages: the first stage is designed to periodically re-initialize the surface spectral bidirectional reflectance (BRF) on stable low AOD days. Such days can be selected based on the same measured reflectance between the morning and afternoon reciprocal view geometries of EPIC. On the second stage, the algorithm will monitor the diurnal cycle of aerosol optical depth and fine mode fraction based on the known spectral surface BRF. Testing of the developed algorithm with simulated EPIC data over continental USA showed a good accuracy of AOD retrievals (10-20%) except over very bright surfaces.

  18. Aerosol Retrieval and Atmospheric Correction Algorithms for EPIC

    NASA Technical Reports Server (NTRS)

    Wang, Yujie; Lyapustin, Alexei; Marshak, Alexander; Korkin, Sergey; Herman, Jay

    2011-01-01

    EPIC is a multi-spectral imager onboard planned Deep Space Climate ObserVatoRy (DSCOVR) designed for observations of the full illuminated disk of the Earth with high temporal and coarse spatial resolution (10 km) from Lagrangian L1 point. During the course of the day, EPIC will view the same Earth surface area in the full range of solar and view zenith angles at equator with fixed scattering angle near the backscattering direction. This talk will describe a new aerosol retrieval/atmospheric correction algorithm developed for EPIC and tested with EPIC Simulator data. This algorithm uses the time series approach and consists of two stages: the first stage is designed to periodically re-initialize the surface spectral bidirectional reflectance (BRF) on stable low AOD days. Such days can be selected based on the same measured reflectance between the morning and afternoon reciprocal view geometries of EPIC. On the second stage, the algorithm will monitor the diurnal cycle of aerosol optical depth and fine mode fraction based on the known spectral surface BRF. Testing of the developed algorithm with simulated EPIC data over continental USA showed a good accuracy of AOD retrievals (10-20%) except over very bright surfaces.

  19. Classifying Response Correctness across Different Task Sets: A Machine Learning Approach

    PubMed Central

    Wascher, Edmund; Falkenstein, Michael

    2016-01-01

    Erroneous behavior usually elicits a distinct pattern in neural waveforms. In particular, inspection of the concurrent recorded electroencephalograms (EEG) typically reveals a negative potential at fronto-central electrodes shortly following a response error (Ne or ERN) as well as an error-awareness-related positivity (Pe). Seemingly, the brain signal contains information about the occurrence of an error. Assuming a general error evaluation system, the question arises whether this information can be utilized in order to classify behavioral performance within or even across different cognitive tasks. In the present study, a machine learning approach was employed to investigate the outlined issue. Ne as well as Pe were extracted from the single-trial EEG signals of participants conducting a flanker and a mental rotation task and subjected to a machine learning classification scheme (via a support vector machine, SVM). Overall, individual performance in the flanker task was classified more accurately, with accuracy rates of above 85%. Most importantly, it was even feasible to classify responses across both tasks. In particular, an SVM trained on the flanker task could identify erroneous behavior with almost 70% accuracy in the EEG data recorded during the rotation task, and vice versa. Summed up, we replicate that the response-related EEG signal can be used to identify erroneous behavior within a particular task. Going beyond this, it was possible to classify response types across functionally different tasks. Therefore, the outlined methodological approach appears promising with respect to future applications. PMID:27032108
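
    The cross-task transfer test can be sketched as follows: an SVM trained on single-trial Ne/Pe features from one task is scored on the other task. The EEG feature arrays below are hypothetical stand-ins, so the printed accuracy is near chance; with real data this corresponds to the roughly 70% cross-task accuracy reported above.

```python
# Sketch of the cross-task transfer test with placeholder Ne/Pe features.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
X_flanker = rng.normal(size=(400, 2))      # single-trial Ne and Pe amplitudes (placeholder)
y_flanker = rng.integers(0, 2, size=400)   # 1 = error trial, 0 = correct trial
X_rotation = rng.normal(size=(400, 2))
y_rotation = rng.integers(0, 2, size=400)

clf = make_pipeline(StandardScaler(), SVC()).fit(X_flanker, y_flanker)
print("flanker -> rotation accuracy:", clf.score(X_rotation, y_rotation))
```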

  20. Classifying Response Correctness across Different Task Sets: A Machine Learning Approach.

    PubMed

    Plewan, Thorsten; Wascher, Edmund; Falkenstein, Michael; Hoffmann, Sven

    2016-01-01

    Erroneous behavior usually elicits a distinct pattern in neural waveforms. In particular, inspection of the concurrent recorded electroencephalograms (EEG) typically reveals a negative potential at fronto-central electrodes shortly following a response error (Ne or ERN) as well as an error-awareness-related positivity (Pe). Seemingly, the brain signal contains information about the occurrence of an error. Assuming a general error evaluation system, the question arises whether this information can be utilized in order to classify behavioral performance within or even across different cognitive tasks. In the present study, a machine learning approach was employed to investigate the outlined issue. Ne as well as Pe were extracted from the single-trial EEG signals of participants conducting a flanker and a mental rotation task and subjected to a machine learning classification scheme (via a support vector machine, SVM). Overall, individual performance in the flanker task was classified more accurately, with accuracy rates of above 85%. Most importantly, it was even feasible to classify responses across both tasks. In particular, an SVM trained on the flanker task could identify erroneous behavior with almost 70% accuracy in the EEG data recorded during the rotation task, and vice versa. Summed up, we replicate that the response-related EEG signal can be used to identify erroneous behavior within a particular task. Going beyond this, it was possible to classify response types across functionally different tasks. Therefore, the outlined methodological approach appears promising with respect to future applications. PMID:27032108

  1. The construction of support vector machine classifier using the firefly algorithm.

    PubMed

    Chao, Chih-Feng; Horng, Ming-Huwi

    2015-01-01

    The setting of parameters in support vector machines (SVMs) is very important with regard to their accuracy and efficiency. In this paper, we employ the firefly algorithm to train all parameters of the SVM simultaneously, including the penalty parameter, smoothness parameter, and Lagrangian multiplier. The proposed method is called the firefly-based SVM (firefly-SVM). This tool does not consider feature selection, because the SVM combined with feature selection is not well suited to multiclass classification, especially for the one-against-all multiclass SVM. In experiments, binary and multiclass classifications are explored. In the experiments on binary classification, ten of the benchmark data sets of the University of California, Irvine (UCI), machine learning repository are used; additionally, the firefly-SVM is applied to the multiclass diagnosis of ultrasonic supraspinatus images. The classification performance of firefly-SVM is also compared to the original LIBSVM method associated with the grid search method and the particle swarm optimization based SVM (PSO-SVM). The experimental results advocate the use of firefly-SVM for pattern classification with maximum accuracy. PMID:25802511

  2. The Construction of Support Vector Machine Classifier Using the Firefly Algorithm

    PubMed Central

    Chao, Chih-Feng; Horng, Ming-Huwi

    2015-01-01

    The setting of parameters in support vector machines (SVMs) is very important with regard to their accuracy and efficiency. In this paper, we employ the firefly algorithm to train all parameters of the SVM simultaneously, including the penalty parameter, smoothness parameter, and Lagrangian multiplier. The proposed method is called the firefly-based SVM (firefly-SVM). This tool does not consider feature selection, because the SVM combined with feature selection is not well suited to multiclass classification, especially for the one-against-all multiclass SVM. In experiments, binary and multiclass classifications are explored. In the experiments on binary classification, ten of the benchmark data sets of the University of California, Irvine (UCI), machine learning repository are used; additionally, the firefly-SVM is applied to the multiclass diagnosis of ultrasonic supraspinatus images. The classification performance of firefly-SVM is also compared to the original LIBSVM method associated with the grid search method and the particle swarm optimization based SVM (PSO-SVM). The experimental results advocate the use of firefly-SVM for pattern classification with maximum accuracy. PMID:25802511
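
    As an illustration of the general idea (not the paper's firefly-SVM, which also trains the Lagrangian multipliers), the sketch below uses a firefly-style swarm to tune the SVM penalty parameter C and RBF width gamma by cross-validation; the dataset, search ranges and firefly constants are illustrative assumptions, and scikit-learn is assumed to be available.

```python
# Hedged sketch: firefly-style search over SVM hyperparameters (C, gamma).
# Candidate settings move toward "brighter" (higher cross-validation
# accuracy) candidates; constants and dataset are placeholders.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)          # placeholder dataset

def brightness(pos):
    """CV accuracy of an RBF SVM with log10(C), log10(gamma) given by pos."""
    C, gamma = 10.0 ** pos
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()

n_fireflies, n_iter = 8, 10
beta0, absorption, alpha = 1.0, 1.0, 0.2   # firefly-algorithm constants
pos = rng.uniform(-3, 3, size=(n_fireflies, 2))
light = np.array([brightness(p) for p in pos])

for _ in range(n_iter):
    for i in range(n_fireflies):
        for j in range(n_fireflies):
            if light[j] > light[i]:        # move firefly i toward brighter j
                r2 = np.sum((pos[i] - pos[j]) ** 2)
                beta = beta0 * np.exp(-absorption * r2)
                pos[i] += beta * (pos[j] - pos[i]) + alpha * rng.normal(size=2)
                pos[i] = np.clip(pos[i], -3, 3)
                light[i] = brightness(pos[i])

best = pos[np.argmax(light)]
print("best log10(C), log10(gamma):", best, "CV accuracy:", light.max())
```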

  3. The Effect of Age Correction on Multivariate Classification in Alzheimer's Disease, with a Focus on the Characteristics of Incorrectly and Correctly Classified Subjects.

    PubMed

    Falahati, Farshad; Ferreira, Daniel; Soininen, Hilkka; Mecocci, Patrizia; Vellas, Bruno; Tsolaki, Magda; Kłoszewska, Iwona; Lovestone, Simon; Eriksdotter, Maria; Wahlund, Lars-Olof; Simmons, Andrew; Westman, Eric

    2016-03-01

    The similarity of atrophy patterns in Alzheimer's disease (AD) and in normal aging suggests age as a confounding factor in multivariate models that use structural magnetic resonance imaging (MRI) data. The aim was to study the effect of different age correction approaches on AD diagnosis and prediction of mild cognitive impairment (MCI) progression, to compare these approaches, and to investigate the characteristics of correctly and incorrectly classified subjects. Data from two multi-center cohorts were included in the study [AD = 297, MCI = 445, controls (CTL) = 340]. 34 cortical thickness and 21 subcortical volumetric measures were extracted from MRI. The age correction approaches involved: using age as a covariate to MRI-derived measures and linear detrending of age-related changes based on CTL measures. Orthogonal projections to latent structures was used to discriminate between AD and CTL subjects, and to predict MCI progression to AD, up to 36-months follow-up. Both age correction approaches improved models' quality in terms of goodness of fit and goodness of prediction, as well as classification and prediction accuracies. The observed age associations in classification and prediction results were effectively eliminated after age correction. A detailed analysis of correctly and incorrectly classified subjects highlighted age associations in other factors: ApoE genotype, global cognitive impairment and gender. The two methods for age correction gave similar results and show that age can partially mask the influence of other aspects such as cognitive impairment, ApoE-e4 genotype and gender. Age-related brain atrophy may have a more important association with these factors than previously believed. PMID:26440606

  4. Coastal Zone Color Scanner atmospheric correction algorithm: multiple scattering effects.

    PubMed

    Gordon, H R; Castaño, D J

    1987-06-01

    An analysis of the errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm is presented in detail. This was prompted by the observations of others that significant errors would be encountered if the present algorithm were applied to a hypothetical instrument possessing higher radiometric sensitivity than the present CZCS. This study provides CZCS users sufficient information with which to judge the efficacy of the current algorithm with the current sensor and enables them to estimate the impact of the algorithm-induced errors on their applications in a variety of situations. The greatest source of error is the assumption that the molecular and aerosol contributions to the total radiance observed at the sensor can be computed separately. This leads to the requirement that a value epsilon'(lambda,lambda(0)) for the atmospheric correction parameter, which bears little resemblance to its theoretically meaningful counterpart, must usually be employed in the algorithm to obtain an accurate atmospheric correction. The behavior of epsilon'(lambda,lambda(0)) with the aerosol optical thickness and aerosol phase function is thoroughly investigated through realistic modeling of radiative transfer in a stratified atmosphere over a Fresnel reflecting ocean. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates allowing elucidation of the errors along typical CZCS scan lines; this is important since, in the normal application of the algorithm, it is assumed that the same value of epsilon' can be used for an entire CZCS scene or at least for a reasonably large subscene. Two types of variation of epsilon' are found in models for which it would be constant in the single scattering approximation: (1) variation with scan angle in scenes in which a relatively large portion of the aerosol scattering phase function would be examined
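
    For orientation, the atmospheric correction parameter discussed above is, in the single-scattering approximation, commonly written as the ratio of single-scattered aerosol reflectances at the correction band and the reference band. The expression below is a standard textbook form quoted only for context (omega_a is the aerosol single-scattering albedo, tau_a the aerosol optical thickness, p_a the aerosol scattering phase function); it is not taken verbatim from this paper.

```latex
\varepsilon'(\lambda,\lambda_0) \;\approx\; \frac{\rho_{as}(\lambda)}{\rho_{as}(\lambda_0)}
  \;=\; \frac{\omega_a(\lambda)\,\tau_a(\lambda)\,p_a(\theta,\lambda)}
             {\omega_a(\lambda_0)\,\tau_a(\lambda_0)\,p_a(\theta,\lambda_0)}
```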

  5. Efficient single image non-uniformity correction algorithm

    NASA Astrophysics Data System (ADS)

    Tendero, Y.; Gilles, J.; Landeau, S.; Morel, J. M.

    2010-10-01

    This paper introduces a new way to correct the non-uniformity (NU) in uncooled infrared-type images. The main defect of these uncooled images is the lack of a column (resp. line) time-dependent cross-calibration, resulting in a strong column (resp. line) and time dependent noise. This problem can be considered as a 1D flicker of the columns inside each frame. Thus, classic movie deflickering algorithms can be adapted to equalize the columns (resp. the lines). The proposed method therefore applies a movie deflickering algorithm to the series formed by the columns of an infrared image. The obtained single image method works on static images, and therefore requires no registration, no camera motion compensation, and no closed aperture sensor equalization. Thus, the method has only one camera dependent parameter, and is landscape independent. This simple method will be compared to a state of the art total variation single image correction on raw real and simulated images. The method is real time, requiring only two operations per pixel. It involves no test-pattern calibration and produces no "ghost artifacts".
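
    A minimal sketch of the underlying idea, assuming numpy and scipy are available: each column is treated as a "frame" whose mean and standard deviation are pulled toward a smoothed reference. The actual method equalizes full column histograms with a midway-equalization deflickering algorithm, which this simplified affine remap does not reproduce.

```python
# Hedged sketch: single-image column "deflickering" for non-uniformity.
# Each column's mean/std is pulled toward a smoothed reference, a crude
# stand-in for the midway histogram equalization used in the paper.
import numpy as np
from scipy.ndimage import uniform_filter1d

def deflicker_columns(img, win=31):
    img = img.astype(float)
    col_mean = img.mean(axis=0)
    col_std = img.std(axis=0) + 1e-6
    # Smooth the column statistics along the column index to get a reference.
    ref_mean = uniform_filter1d(col_mean, size=win, mode="nearest")
    ref_std = uniform_filter1d(col_std, size=win, mode="nearest")
    # Affinely remap every column onto the reference statistics.
    return (img - col_mean) / col_std * ref_std + ref_mean

noisy = np.random.rand(128, 256) + np.random.rand(256)  # column-wise offset noise
clean = deflicker_columns(noisy)
```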

  6. Development of Topological Correction Algorithms for ADCP Multibeam Bathymetry Measurements

    NASA Astrophysics Data System (ADS)

    Yang, Sung-Kee; Kim, Dong-Su; Kim, Soo-Jeong; Jung, Woo-Yul

    2013-04-01

    Acoustic Doppler Current Profilers (ADCPs) are increasingly popular in the river research and management communities, being primarily used for estimation of stream flows. ADCP capabilities, however, include additional features that are not fully explored, such as morphologic representation of a river or reservoir bed based upon multi-beam depth measurements. In addition to flow velocity, ADCP measurements include river bathymetry information through the depth measurements acquired in individual 4 or 5 beams with a given oblique angle. Such sounding capability indicates that multi-beam ADCPs can be utilized as efficient depth sounders that are more capable than conventional single-beam echo sounders. The paper introduces the post-processing algorithms required to deal with raw ADCP bathymetry measurements, including the following aspects: a) correcting the individual beam depths for tilt (pitch and roll); b) filtering outliers using SMART filters; c) transforming the corrected depths into geographical coordinates by UTM conversion; d) tagging the beam detection locations with the concurrent GPS information; and e) spatial representation in a GIS package. The developed algorithms are applied to the ADCP bathymetric dataset acquired from Han-Cheon on Jeju Island to validate their applicability.
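
    A minimal sketch of step a) above, assuming numpy; the beam geometry (a four-beam Janus configuration with a 20-degree beam angle) and the rotation conventions are illustrative assumptions rather than the authors' exact formulation.

```python
# Hedged sketch: correcting ADCP beam depths for pitch and roll.
# Beam geometry, angle conventions and the 20-degree Janus angle are
# illustrative; real instruments document their own conventions.
import numpy as np

def beam_depths_tilt_corrected(slant_ranges, pitch_deg, roll_deg, beam_angle_deg=20.0):
    """Return vertical depths below the transducer for 4 Janus beams."""
    b = np.radians(beam_angle_deg)
    # Unit vectors of the four beams in instrument coordinates (x fwd, y port, z down).
    beams = np.array([[ np.sin(b), 0.0, np.cos(b)],   # beam 1 (forward)
                      [-np.sin(b), 0.0, np.cos(b)],   # beam 2 (aft)
                      [0.0,  np.sin(b), np.cos(b)],   # beam 3 (port)
                      [0.0, -np.sin(b), np.cos(b)]])  # beam 4 (starboard)
    p, r = np.radians(pitch_deg), np.radians(roll_deg)
    # Rotation from instrument to earth coordinates (pitch about y, roll about x).
    Rp = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    Rr = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
    beams_earth = beams @ (Rp @ Rr).T
    # Vertical depth = slant range times the vertical component of the beam direction.
    return np.asarray(slant_ranges) * beams_earth[:, 2]

print(beam_depths_tilt_corrected([10.2, 10.4, 9.9, 10.1], pitch_deg=3.0, roll_deg=-2.0))
```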

  7. The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data.

    PubMed

    Regier, Michael D; Moodie, Erica E M

    2016-05-01

    We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience. PMID:27227718

  8. Algorithms for muscle oxygenation monitoring corrected for adipose tissue thickness

    NASA Astrophysics Data System (ADS)

    Geraskin, Dmitri; Platen, Petra; Franke, Julia; Kohl-Bareis, Matthias

    2007-07-01

    The measurement of skeletal muscle oxygenation by NIRS methods is obstructed by the subcutaneous adipose tissue, which may vary from less than 1 mm to more than 12 mm in thickness. A new algorithm is developed to minimize the large scattering effect of this lipid layer on the calculation of muscle haemoglobin / myoglobin concentrations. First, we demonstrate by comparison with ultrasound imaging that the optical lipid signal peaking at 930 nm is a good predictor of the adipose tissue thickness (ATT). Second, the algorithm is based on measurements of the wavelength dependence of the slope ΔA/Δρ of attenuation A with respect to source detector distance ρ and Monte Carlo simulations which estimate the muscle absorption coefficient based on this slope and the additional information of the ATT. Third, we illustrate the influence of the wavelength dependent transport scattering coefficient of the new algorithm by using the solution of the diffusion equation for a two-layered turbid medium. This method is tested on experimental data measured on the vastus lateralis muscle of volunteers during an incremental cycling exercise under normal and hypoxic conditions (corresponding to 0, 2000 and 4000 m altitude). The experimental setup uses broad band detection between 700 and 1000 nm at six source-detector distances. We demonstrate that the description of the experimental data as judged by the residual spectrum is significantly improved and the calculated changes in oxygen saturation are markedly different when the ATT correction is included.

  9. The Algorithm Theoretical Basis Document for Tidal Corrections

    NASA Technical Reports Server (NTRS)

    Fricker, Helen A.; Ridgway, Jeff R.; Minster, Jean-Bernard; Yi, Donghui; Bentley, Charles R.

    2012-01-01

    This Algorithm Theoretical Basis Document deals with the tidal corrections that need to be applied to range measurements made by the Geoscience Laser Altimeter System (GLAS). These corrections result from the action of ocean tides and Earth tides which lead to deviations from an equilibrium surface. Since the effect of tides is dependent on the time of measurement, it is necessary to remove the instantaneous tide components when processing altimeter data, so that all measurements are made to the equilibrium surface. The three main tide components to consider are the ocean tide, the solid-earth tide and the ocean loading tide. There are also long period ocean tides and the pole tide. The approximate magnitudes of these components are illustrated in Table 1, together with estimates of their uncertainties (i.e. the residual error after correction). All of these components are important for GLAS measurements over the ice sheets since centimeter-level accuracy for surface elevation change detection is required. The effect of each tidal component is to be removed by approximating their magnitude using tidal prediction models. Conversely, assimilation of GLAS measurements into tidal models will help to improve them, especially at high latitudes.
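
    A minimal sketch of the bookkeeping implied above, removing the instantaneous tide components from a single elevation measurement; the component names follow the abstract, while the numerical values are placeholders that would in practice come from tidal prediction models, not GLAS data.

```python
# Hedged sketch: removing instantaneous tidal components from an altimetric
# surface elevation. All numbers below are placeholders; real values come
# from ocean tide, solid-earth tide, ocean loading and pole tide models.
def detide_elevation(raw_elevation_m, tide_components_m):
    return raw_elevation_m - sum(tide_components_m.values())

tides = {
    "ocean_tide": 0.42,        # from an ocean tide model (placeholder)
    "solid_earth_tide": 0.12,  # from a solid-earth tide model (placeholder)
    "ocean_loading": -0.02,    # ocean loading tide (placeholder)
    "pole_tide": 0.005,        # pole tide (placeholder)
}
print(detide_elevation(105.237, tides))
```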

  10. Thickness-dependent scatter correction algorithm for digital mammography

    NASA Astrophysics Data System (ADS)

    Gonzalez Trotter, Dinko E.; Tkaczyk, J. Eric; Kaufhold, John; Claus, Bernhard E. H.; Eberhard, Jeffrey W.

    2002-05-01

    We have implemented a scatter-correction algorithm (SCA) for digital mammography based on an iterative restoration filter. The scatter contribution to the image is modeled by an additive component that is proportional to the filtered unattenuated x-ray photon signal and dependent on the characteristics of the imaged object. The SCA's result is closer to the scatter-free signal than when a scatter grid is used. Presently, the SCA shows improved contrast-to-noise performance relative to the scatter grid for a breast thickness up to 3.6 cm, with potential for better performance up to 6 cm. We investigated the efficacy of our scatter-correction method on a series of x-ray images of anthropomorphic breast phantoms with maximum thicknesses ranging from 3.0 cm to 6.0 cm. A comparison of the scatter-corrected images with the scatter-free signal acquired using a slit collimator shows average deviations of 3 percent or less, even in the edge region of the phantoms. These results indicate that the SCA is superior to a scatter grid for 2D quantitative mammography applications, and may enable 3D quantitative applications in X-ray tomosynthesis.
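
    A minimal sketch of one common way to realize such an iterative restoration filter, assuming the scatter can be modeled as a fraction k of a low-pass-filtered primary image; the kernel width and scatter fraction are illustrative, not the thickness-dependent values developed in the paper.

```python
# Hedged sketch of an iterative scatter estimate of the form
#   measured = primary + k * lowpass(primary),
# solved for the primary (scatter-free) image by fixed-point iteration.
import numpy as np
from scipy.ndimage import gaussian_filter

def remove_scatter(measured, k=0.4, sigma_px=50, n_iter=10):
    primary = measured.copy()
    for _ in range(n_iter):
        scatter = k * gaussian_filter(primary, sigma_px)  # broad scatter kernel
        primary = measured - scatter                      # refine the primary estimate
    return primary

img = np.random.rand(256, 256)
corrected = remove_scatter(img)
```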

  11. Validation of aerosol estimation in atmospheric correction algorithm ATCOR

    NASA Astrophysics Data System (ADS)

    Pflug, B.; Main-Knorn, M.; Makarau, A.; Richter, R.

    2015-04-01

    Atmospheric correction of satellite images is necessary for many applications of remote sensing, e.g., computation of vegetation indices and biomass estimation. The first step in atmospheric correction is estimation of the actual aerosol properties. Due to the spatial and temporal variability of aerosol amount and type, this step becomes crucial for an accurate correction of satellite data. Consequently, the validation of aerosol estimation contributes to the validation of atmospheric correction algorithms. In this study we present the validation of aerosol estimation using our own sun photometer measurements in Central Europe and measurements of AERONET stations at different locations in the world. Our ground-based sun photometer measurements of vertical column aerosol optical thickness (AOT) spectra are performed synchronously to overpasses of the satellites RapidEye, Landsat 5, Landsat 7 and Landsat 8. Selected AERONET data are collocated to Landsat 8 overflights. The validation of the aerosol retrieval is conducted by a direct comparison of ground-measured AOT with satellite derived AOT using the ATCOR tool for the selected satellite images. The mean uncertainty found in our experiments is AOT550nm ~ 0.03±0.02 for cloudless conditions with cloud+haze fraction below 1%. This AOT uncertainty approximately corresponds to an uncertainty in surface albedo of ρ ~ 0.003. Inclusion of cloudy and hazy satellite images into the analysis results in mean AOT550nm ~ 0.04±0.03 for both RapidEye and Landsat imagery. About 1/3 of the samples perform with an AOT uncertainty better than 0.02 and about 2/3 perform with an AOT uncertainty better than 0.05.

  12. Experimental testing of four correction algorithms for the forward scattering spectrometer probe

    NASA Technical Reports Server (NTRS)

    Hovenac, Edward A.; Oldenburg, John R.; Lock, James A.

    1992-01-01

    Three number density correction algorithms and one size distribution correction algorithm for the Forward Scattering Spectrometer Probe (FSSP) were compared with data taken by the Phase Doppler Particle Analyzer (PDPA) and an optical number density measuring instrument (NDMI). Of the three number density correction algorithms, the one that compared best to the PDPA and NDMI data was the algorithm developed by Baumgardner, Strapp, and Dye (1985). The algorithm that corrects sizing errors in the FSSP that was developed by Lock and Hovenac (1989) was shown to be within 25 percent of the Phase Doppler measurements at number densities as high as 3000/cc.

  13. Structure and weights optimisation of a modified Elman network emotion classifier using hybrid computational intelligence algorithms: a comparative study

    NASA Astrophysics Data System (ADS)

    Sheikhan, Mansour; Abbasnezhad Arabi, Mahdi; Gharavian, Davood

    2015-10-01

    Artificial neural networks are efficient models in pattern recognition applications, but their performance is dependent on employing suitable structure and connection weights. This study used a hybrid method for obtaining the optimal weight set and architecture of a recurrent neural emotion classifier based on gravitational search algorithm (GSA) and its binary version (BGSA), respectively. By considering the features of speech signal that were related to prosody, voice quality, and spectrum, a rich feature set was constructed. To select more efficient features, a fast feature selection method was employed. The performance of the proposed hybrid GSA-BGSA method was compared with similar hybrid methods based on particle swarm optimisation (PSO) algorithm and its binary version, PSO and discrete firefly algorithm, and hybrid of error back-propagation and genetic algorithm that were used for optimisation. Experimental tests on Berlin emotional database demonstrated the superior performance of the proposed method using a lighter network structure.

  14. Ant Colony Optimization Algorithm for Interpretable Bayesian Classifiers Combination: Application to Medical Predictions

    PubMed Central

    Bouktif, Salah; Hanna, Eileen Marie; Zaki, Nazar; Khousa, Eman Abu

    2014-01-01

    Prediction and classification techniques have been well studied by machine learning researchers and developed for several real-world problems. However, the level of acceptance and success of prediction models are still below expectation due to some difficulties such as the low performance of prediction models when they are applied in different environments. Such a problem has been addressed by many researchers, mainly from the machine learning community. A second problem, principally raised by model users in different communities, such as managers, economists, engineers, biologists, and medical practitioners, is the prediction models' interpretability. The latter is the ability of a model to explain its predictions and exhibit the causality relationships between the inputs and the outputs. In the case of classification, a successful way to alleviate the low performance is to use ensemble classifiers. It is an intuitive strategy to activate collaboration between different classifiers towards a better performance than an individual classifier. Unfortunately, the ensemble classifier method does not take into account the interpretability of the final classification outcome. It even worsens the original interpretability of the individual classifiers. In this paper we propose a novel implementation of the classifier combination approach that does not only promote the overall performance but also preserves the interpretability of the resulting model. We propose a solution based on Ant Colony Optimization and tailored for the case of Bayesian classifiers. We validate our proposed solution with case studies from the medical domain, namely heart disease and cardiotocography-based predictions, problems where interpretability is critical to make appropriate clinical decisions. Availability: The datasets, prediction models and software tool together with supplementary materials are available at http://faculty.uaeu.ac.ae/salahb/ACO4BC.htm. PMID:24498276

  15. Classifying performance impairment in response to sleep loss using pattern recognition algorithms on single session testing

    PubMed Central

    St. Hilaire, Melissa A.; Sullivan, Jason P.; Anderson, Clare; Cohen, Daniel A.; Barger, Laura K.; Lockley, Steven W.; Klerman, Elizabeth B.

    2012-01-01

    There is currently no “gold standard” marker of cognitive performance impairment resulting from sleep loss. We utilized pattern recognition algorithms to determine which features of data collected under controlled laboratory conditions could most reliably identify cognitive performance impairment in response to sleep loss using data from only one testing session, such as would occur in the “real world” or field conditions. A training set for testing the pattern recognition algorithms was developed using objective Psychomotor Vigilance Task (PVT) and subjective Karolinska Sleepiness Scale (KSS) data collected from laboratory studies during which subjects were sleep deprived for 26 – 52 hours. The algorithm was then tested in data from both laboratory and field experiments. The pattern recognition algorithm was able to identify performance impairment with a single testing session in individuals studied under laboratory conditions using PVT, KSS, length of time awake and time of day information with sensitivity and specificity as high as 82%. When this algorithm was tested on data collected under real-world conditions from individuals whose data were not in the training set, accuracy of predictions for individuals categorized with low performance impairment were as high as 98%. Predictions for medium and severe performance impairment were less accurate. We conclude that pattern recognition algorithms may be a promising method for identifying performance impairment in individuals using only current information about the individual’s behavior. Single testing features (e.g., number of PVT lapses) with high correlation with performance impairment in the laboratory setting may not be the best indicators of performance impairment under real-world conditions. Pattern recognition algorithms should be further tested for their ability to be used in conjunction with other assessments of sleepiness in real-world conditions to quantify performance impairment in

  16. An algorithm for classifying tumors based on genomic aberrations and selecting representative tumor models

    PubMed Central

    2010-01-01

    Background Cancer is a heterogeneous disease caused by genomic aberrations and characterized by significant variability in clinical outcomes and response to therapies. Several subtypes of common cancers have been identified based on alterations of individual cancer genes, such as HER2, EGFR, and others. However, cancer is a complex disease driven by the interaction of multiple genes, so the copy number status of individual genes is not sufficient to define cancer subtypes and predict responses to treatments. A classification based on genome-wide copy number patterns would be better suited for this purpose. Method To develop a more comprehensive cancer taxonomy based on genome-wide patterns of copy number abnormalities, we designed an unsupervised classification algorithm that identifies genomic subgroups of tumors. This algorithm is based on a modified genomic Non-negative Matrix Factorization (gNMF) algorithm and includes several additional components, namely a pilot hierarchical clustering procedure to determine the number of clusters, a multiple random initiation scheme, a new stop criterion for the core gNMF, as well as a 10-fold cross-validation stability test for quality assessment. Result We applied our algorithm to identify genomic subgroups of three major cancer types: non-small cell lung carcinoma (NSCLC), colorectal cancer (CRC), and malignant melanoma. High-density SNP array datasets for patient tumors and established cell lines were used to define genomic subclasses of the diseases and identify cell lines representative of each genomic subtype. The algorithm was compared with several traditional clustering methods and showed improved performance. To validate our genomic taxonomy of NSCLC, we correlated the genomic classification with disease outcomes. Overall survival time and time to recurrence were shown to differ significantly between the genomic subtypes. Conclusions We developed an algorithm for cancer classification based on genome-wide patterns

  17. Atmospheric Correction Prototype Algorithm for High Spatial Resolution Multispectral Earth Observing Imaging Systems

    NASA Technical Reports Server (NTRS)

    Pagnutti, Mary

    2006-01-01

    This viewgraph presentation reviews the creation of a prototype algorithm for atmospheric correction using high spatial resolution earth observing imaging systems. The objective of the work was to evaluate accuracy of a prototype algorithm that uses satellite-derived atmospheric products to generate scene reflectance maps for high spatial resolution (HSR) systems. This presentation focused on preliminary results of only the satellite-based atmospheric correction algorithm.

  18. A fuzzy hill-climbing algorithm for the development of a compact associative classifier

    NASA Astrophysics Data System (ADS)

    Mitra, Soumyaroop; Lam, Sarah S.

    2012-02-01

    Classification, a data mining technique, has widespread applications including medical diagnosis, targeted marketing, and others. Knowledge discovery from databases in the form of association rules is one of the important data mining tasks. An integrated approach, classification based on association rules, has drawn the attention of the data mining community over the last decade. While attention has been mainly focused on increasing classifier accuracies, not much effort has been devoted to building interpretable and less complex models. This paper discusses the development of a compact associative classification model using a hill-climbing approach and fuzzy sets. The proposed methodology builds the rule-base by selecting rules which contribute towards increasing training accuracy, thus balancing classification accuracy with the number of classification association rules. The results indicated that the proposed associative classification model can achieve competitive accuracies on benchmark datasets with continuous attributes and lend better interpretability, when compared with other rule-based systems.

  19. Temporal high-pass non-uniformity correction algorithm based on grayscale mapping and hardware implementation

    NASA Astrophysics Data System (ADS)

    Jin, Minglei; Jin, Weiqi; Li, Yiyang; Li, Shuo

    2015-08-01

    In this paper, we propose a novel scene-based non-uniformity correction algorithm for infrared image processing: a temporal high-pass non-uniformity correction algorithm based on grayscale mapping (THP and GM). The main sources of non-uniformity are: (1) detector fabrication inaccuracies; (2) non-linearity and variations in the read-out electronics; and (3) optical path effects. The non-uniformity is reduced by non-uniformity correction (NUC) algorithms, which are often divided into calibration-based non-uniformity correction (CBNUC) algorithms and scene-based non-uniformity correction (SBNUC) algorithms. As the non-uniformity drifts temporally, CBNUC algorithms must be repeated by inserting a uniform radiation source into the view, which SBNUC algorithms do not need, so SBNUC algorithms become an essential part of infrared imaging systems. The poor robustness of SBNUC algorithms often leads to two defects, artifacts and over-correction; meanwhile, due to the complicated calculation process and large storage consumption, hardware implementation of SBNUC algorithms is difficult, especially on a Field Programmable Gate Array (FPGA) platform. The THP and GM algorithm proposed in this paper can eliminate the non-uniformity without causing these defects. The hardware implementation of the algorithm based only on an FPGA has two advantages: (1) low resource consumption and (2) small hardware delay (less than 20 lines). It can be transplanted to a variety of infrared detectors equipped with an FPGA image processing module, and it can reduce both stripe non-uniformity and ripple non-uniformity.
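
    For context, a minimal sketch of the classic temporal high-pass NUC core on which such algorithms build: a recursive per-pixel low-pass estimate of the fixed pattern is subtracted from each incoming frame. The grayscale-mapping part that distinguishes THP and GM is not reproduced here, and the time constant is an illustrative assumption.

```python
# Hedged sketch of the classic temporal high-pass (THP) NUC core.
# Each pixel's slowly varying component (fixed-pattern offset) is tracked
# by a recursive low-pass filter and subtracted from the incoming frame.
import numpy as np

class TemporalHighPassNUC:
    def __init__(self, time_constant_frames=100):
        self.M = float(time_constant_frames)
        self.lowpass = None  # running estimate of the fixed pattern

    def correct(self, frame):
        frame = frame.astype(float)
        if self.lowpass is None:
            self.lowpass = frame.copy()
        # Recursive low-pass: f_n = (1 - 1/M) * f_{n-1} + (1/M) * x_n
        self.lowpass += (frame - self.lowpass) / self.M
        # High-pass output: scene content minus the slowly varying pattern.
        return frame - self.lowpass

nuc = TemporalHighPassNUC(time_constant_frames=100)
for _ in range(5):
    corrected = nuc.correct(np.random.rand(64, 64))
```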

  20. An Illumination Correction Algorithm on Landsat-TM Data

    NASA Technical Reports Server (NTRS)

    Tan, Bin; Wolfe, Robert; Masek, Jeffrey; Gao, Feng; Vermote, Eric F.

    2010-01-01

    In this paper, a new illumination correction model, the rotation model, is introduced. The model is based on the empirical correlation between reflectance and the illumination condition (IL). The model eliminates the dependency of reflectance on IL by rotating the data in IL-reflectance space. This model is compared with the widely used cosine model and C model over a sample forest region. We found that the newly developed rotation model consistently performs best on both atmospherically uncorrected and corrected Landsat images. Index Terms: Landsat, illumination correction, change detection, LEDAPS
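
    A minimal sketch in the spirit of such empirical IL corrections (the exact rotation in IL-reflectance space used by the paper may differ): the linear trend of reflectance against IL = cos(solar incidence angle) is fitted over sample pixels of one land-cover class and removed, so corrected reflectance no longer depends on IL. All data below are synthetic.

```python
# Hedged sketch: empirical illumination-condition (IL) correction.
import numpy as np

def illumination_condition(slope, aspect, sun_zenith, sun_azimuth):
    """IL = cos(local solar incidence angle); all angles in radians."""
    return (np.cos(slope) * np.cos(sun_zenith)
            + np.sin(slope) * np.sin(sun_zenith) * np.cos(sun_azimuth - aspect))

def correct_reflectance(refl, il, il_ref=None):
    a, b = np.polyfit(il, refl, 1)           # fit refl ≈ a * IL + b
    if il_ref is None:
        il_ref = il.mean()                   # normalize to the mean IL
    return refl - a * (il - il_ref)          # remove the IL-dependent trend

il = np.random.uniform(0.3, 1.0, 1000)
refl = 0.2 + 0.1 * il + np.random.normal(0, 0.01, 1000)     # synthetic forest pixels
print(np.corrcoef(correct_reflectance(refl, il), il)[0, 1])  # near zero after correction
```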

  1. Algorithms for Relative Radiometric Correction in Earth Observing Systems Resource-P and Canopus-V

    NASA Astrophysics Data System (ADS)

    Zenin, V. A.; Eremeev, V. V.; Kuznetcov, A. E.

    2016-06-01

    The present paper considers two algorithms for the relative radiometric correction of imagery obtained from the multi-matrix imaging instrument of the spacecraft "Resource-P" and the frame imaging systems of the spacecraft "Canopus-V". The first algorithm is intended for elimination of vertical stripes on the image that are caused by differences in the transfer characteristics of the CCD matrices and CCD detectors. Correction coefficients are determined on the basis of analysis of images that are homogeneous in brightness. The second algorithm ensures the acquisition of microframes that are homogeneous in brightness, from which seamless images of the Earth's surface are synthesized. Examples of practical usage of the developed algorithms are presented.

  2. Nonuniformity correction algorithm based on Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Mou, Xin-gang; Zhang, Gui-lin; Hu, Ruo-lan; Zhou, Xiao

    2011-08-01

    As an important tool to acquire information about a target scene, the infrared detector is widely used in the imaging guidance field. Because of the limits of material and technique, the performance of infrared imaging systems is known to be strongly affected by the spatial non-uniformity in the photoresponse of the detectors in the array. The temporal high-pass filter (THPF) is a popular adaptive NUC algorithm because of its simplicity and effectiveness. However, there still exists the problem of ghosting artifacts in such algorithms, caused by blind updates of the parameters, and the performance is noticeably degraded when the methods are applied to scenes with a lack of motion. To tackle this problem, a novel adaptive NUC algorithm based on a Gaussian mixture model (GMM) is put forward, building on the traditional THPF. The drift of the detectors is assumed to obey a single Gaussian distribution, and the update of the parameters is selectively performed based on the scene. The GMM is applied in the new algorithm for background modeling, in which the background is updated selectively so as to avoid the influence of the foreground target on the update of the background, thus eliminating the ghosting artifact. The performance of the proposed algorithm is evaluated with infrared image sequences with simulated and real fixed-pattern noise. The results show more reliable fixed-pattern noise reduction, tracking of the parameter drift, and good adaptability to scene changes.

  3. Correction and simulation of the intensity compensation algorithm used in curvature wavefront sensing

    NASA Astrophysics Data System (ADS)

    Wu, Zhi-Xu; Bai, Hua; Cui, Xiang-Qun

    2015-05-01

    The wavefront measuring range and recovery precision of a curvature sensor can be improved by an intensity compensation algorithm. However, in a focal system with a fast f-number, especially a telescope with a large field of view, the accuracy of this algorithm cannot meet the requirements. A theoretical analysis of the corrected intensity compensation algorithm in a focal system with a fast f-number is first introduced and afterwards the mathematical equations used in this algorithm are expressed. The corrected result is then verified through simulation. The method used by such a simulation can be described as follows. First, the curvature signal from a focal system with a fast f-number is simulated by Monte Carlo ray tracing; then the wavefront result is calculated by the inner loop of the FFT wavefront recovery algorithm and the outer loop of the intensity compensation algorithm. Upon comparing the intensity compensation algorithm of an ideal system with the corrected intensity compensation algorithm, we reveal that the recovered precision of the curvature sensor can be greatly improved by the corrected intensity compensation algorithm. Supported by the National Natural Science Foundation of China.

  4. Phase correction algorithms for a snapshot hyperspectral imaging system

    NASA Astrophysics Data System (ADS)

    Chan, Victoria C.; Kudenov, Michael; Dereniak, Eustace

    2015-09-01

    We present image processing algorithms that improve spatial and spectral resolution on the Snapshot Hyperspectral Imaging Fourier Transform (SHIFT) spectrometer. Final measurements are stored in the form of three-dimensional datacubes containing the scene's spatial and spectral information. We discuss calibration procedures, review post-processing methods, and present preliminary results from proof-of-concept experiments.

  5. Performance of an advanced lump correction algorithm for gamma-ray assays of plutonium

    SciTech Connect

    Prettyman, T.H.; Sprinkle, J.K. Jr.; Sheppard, G.A.

    1994-08-01

    The results of an experimental study to evaluate the performance of an advanced lump correction algorithm for gamma-ray assays of plutonium are presented. The algorithm is applied to correct segmented gamma scanner (SGS) and tomographic gamma scanner (TGS) assays of plutonium samples in 55-gal. drums containing heterogeneous matrices. The relative ability of the SGS and TGS to separate matrix and lump effects is examined, and a technique to detect gross heterogeneity in SGS assays is presented.

  6. Weighted SVD algorithm for close-orbit correction and 10 Hz feedback in RHIC

    SciTech Connect

    Liu C.; Hulsart, R.; Marusic, A.; Michnoff, R.; Minty, M.; Ptitsyn, V.

    2012-05-20

    Measurements of the beam position along an accelerator are typically treated equally using standard SVD-based orbit correction algorithms so distributing the residual errors, modulo the local beta function, equally at the measurement locations. However, sometimes a more stable orbit at select locations is desirable. In this paper, we introduce an algorithm for weighting the beam position measurements to achieve a more stable local orbit. The results of its application to close-orbit correction and 10 Hz orbit feedback are presented.
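
    A minimal numpy sketch of the weighting idea described above: BPM readings are scaled by per-location weights before the SVD pseudo-inverse, so residuals are suppressed preferentially at the highly weighted monitors. The response matrix, orbit and weights below are synthetic placeholders, not RHIC data.

```python
# Hedged sketch: weighted SVD orbit correction (weighted least squares).
import numpy as np

def corrector_strengths(response, orbit, weights, n_singular=None):
    """Solve min || W (response @ kicks + orbit) || for the corrector kicks."""
    W = np.diag(weights)
    A = W @ response
    b = W @ orbit
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    if n_singular is not None:                 # optional truncation for regularization
        s, U, Vt = s[:n_singular], U[:, :n_singular], Vt[:n_singular]
    return -Vt.T @ ((U.T @ b) / s)             # weighted pseudo-inverse solution

rng = np.random.default_rng(1)
R = rng.normal(size=(40, 12))                  # 40 BPMs, 12 correctors (synthetic)
orbit = rng.normal(size=40)
w = np.ones(40); w[10:15] = 10.0               # demand a quieter orbit at BPMs 10-14
kicks = corrector_strengths(R, orbit, w)
print(np.abs(R @ kicks + orbit)[10:15].max())  # residual at the weighted BPMs
```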

  7. Algorithm-supported visual error correction (AVEC) of heart rate measurements in dogs, Canis lupus familiaris.

    PubMed

    Schöberl, Iris; Kortekaas, Kim; Schöberl, Franz F; Kotrschal, Kurt

    2015-12-01

    Dog heart rate (HR) is characterized by a respiratory sinus arrhythmia, which makes an automatic algorithm for error correction of HR measurements hard to apply. Here, we present a new method of error correction for HR data collected with the Polar system, including (1) visual inspection of the data, (2) a standardized way to decide with the aid of an algorithm whether or not a value is an outlier (i.e., "error"), and (3) the subsequent removal of this error from the data set. We applied our new error correction method to the HR data of 24 dogs and compared the uncorrected and corrected data, as well as the algorithm-supported visual error correction (AVEC) with the Polar error correction. The results showed that fewer values were identified as errors after AVEC than after the Polar error correction (p < .001). After AVEC, the HR standard deviation and variability (HRV; i.e., RMSSD, pNN50, and SDNN) were significantly greater than after correction by the Polar tool (all p < .001). Furthermore, the HR data strings with deleted values seemed to be closer to the original data than were those with inserted means. We concluded that our method of error correction is more suitable for dog HR and HR variability than is the customized Polar error correction, especially because AVEC decreases the likelihood of Type I errors, preserves the natural variability in HR, and does not lead to a time shift in the data. PMID:25540125
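
    A minimal sketch of an algorithm-supported flagging step of this kind, assuming numpy: a beat-to-beat interval is flagged when it deviates from the local median by more than a relative threshold, and flagged values are deleted rather than replaced by means. The window size and threshold are illustrative assumptions, not the published AVEC criteria, and the visual confirmation step is not shown.

```python
# Hedged sketch: flagging candidate errors in beat-to-beat (RR) interval data.
import numpy as np

def flag_outliers(rr_ms, window=5, rel_threshold=0.3):
    rr = np.asarray(rr_ms, dtype=float)
    flags = np.zeros(rr.size, dtype=bool)
    for i in range(rr.size):
        lo, hi = max(0, i - window), min(rr.size, i + window + 1)
        local = np.delete(rr[lo:hi], i - lo)          # local context without the value itself
        med = np.median(local)
        flags[i] = abs(rr[i] - med) > rel_threshold * med
    return flags

rr = [520, 540, 980, 530, 515, 525, 260, 535]          # two artifacts: 980 and 260 ms
cleaned = np.asarray(rr)[~flag_outliers(rr)]           # delete, do not insert means
print(cleaned)
```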

  8. Self-Correcting HVAC Controls: Algorithms for Sensors and Dampers in Air-Handling Units

    SciTech Connect

    Fernandez, Nicholas; Brambley, Michael R.; Katipamula, Srinivas

    2009-12-31

    This report documents the self-correction algorithms developed in the Self-Correcting Heating, Ventilating and Air-Conditioning (HVAC) Controls project funded jointly by the Bonneville Power Administration and the Building Technologies Program of the U.S. Department of Energy. The algorithms address faults for temperature sensors, humidity sensors, and dampers in air-handling units and correction of persistent manual overrides of automated control systems. All faults considered create energy waste when left uncorrected as is frequently the case in actual systems.

  9. Algorithms research of airborne long linear multi-elements whisk broom remote sensing image geometric correction

    NASA Astrophysics Data System (ADS)

    Xu, Bin; Ma, Yan-hua; Li, Sheng-hong

    2015-10-01

    Multi-element scanning imaging is an imaging method conventionally used in space-borne spectrometers. By scanning multiple pixels at the same time, the exposure time can be increased and the picture quality enhanced. But when this imaging method is applied in airborne remote sensing imaging systems, a corresponding imaging model and correction algorithms must be built, because of the poor attitude stability of the airborne platform and its different characteristics and requirements. This paper builds a geometric correction model of an airborne long linear multi-element scanning imaging system by decomposing the imaging process, and also derives the related correction algorithms. The sampling moment of the linear CCD can be treated as push-broom imaging, and a single pixel imaging during the whole whisk-broom period can be treated as whisk-broom imaging. Based on this decomposition, a collinearity-equation correction algorithm and a new tangent correction algorithm are derived. As shown in the simulation experiment results, by combining position and attitude data collected by the position and attitude measurement system, these algorithms can map pixel positions from image coordinates to WGS84 coordinates with high precision. In addition, some error factors and the correction accuracy are roughly analyzed.

  10. The algorithm analysis on non-uniformity correction based on LMS adaptive filtering

    NASA Astrophysics Data System (ADS)

    Zhan, Dongjun; Wang, Qun; Wang, Chensheng; Chen, Huawang

    2010-11-01

    The traditional least mean square (LMS) algorithm adapts well to noise, but it has several disadvantages, such as a poor estimate of the desired value of the pixel being corrected and undetermined initial coefficients, which result in slow convergence and a long convergence period. To address these problems, the method for estimating the desired value of the pixel being corrected has been improved, and the correction gain and offset coefficients obtained by two-point temperature non-uniformity correction (NUC) are used as the initial coefficients, which improves the convergence speed. Simulations with real infrared images have shown that the new LMS algorithm achieves a better correction effect. Finally, the algorithm is implemented on a hardware structure of FPGA+DSP.
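
    For context, a minimal sketch of the classic LMS NUC scheme that such work improves on, assuming numpy/scipy: per-pixel gain and offset are updated so the corrected value tracks a desired value taken as a spatial low-pass of the corrected frame. Here the coefficients simply start at 1 and 0 rather than from the two-point calibration initialization proposed in the paper, and the step size and kernel are illustrative.

```python
# Hedged sketch: LMS-based scene-adaptive non-uniformity correction.
import numpy as np
from scipy.ndimage import uniform_filter

def lms_nuc(frames, mu=1e-3, kernel=5):
    g = np.ones_like(frames[0], dtype=float)        # per-pixel gain
    o = np.zeros_like(frames[0], dtype=float)       # per-pixel offset
    corrected_frames = []
    for x in frames:
        x = x.astype(float)
        y = g * x + o                               # corrected frame
        d = uniform_filter(y, size=kernel)          # desired value (spatial low-pass)
        e = y - d                                   # error driving the LMS update
        g -= mu * e * x                             # gradient step on gain
        o -= mu * e                                 # gradient step on offset
        corrected_frames.append(y)
    return corrected_frames, g, o

frames = [np.random.rand(64, 64) + np.linspace(0, 0.5, 64) for _ in range(50)]
corrected, gain, offset = lms_nuc(frames)
```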

  11. Improved near-infrared ocean reflectance correction algorithm for satellite ocean color data processing.

    PubMed

    Jiang, Lide; Wang, Menghua

    2014-09-01

    A new approach for the near-infrared (NIR) ocean reflectance correction in atmospheric correction for satellite ocean color data processing in coastal and inland waters is proposed, which combines the advantages of the three existing NIR ocean reflectance correction algorithms, i.e., Bailey et al. (2010) [Opt. Express 18, 7521 (2010)] and the algorithms described in Appl. Opt. 39, 897 (2000) and Opt. Express 20, 741 (2012), and is named BMW. The normalized water-leaving radiance spectra nLw(λ) obtained from this new NIR-based atmospheric correction approach are evaluated against those obtained from the shortwave infrared (SWIR)-based atmospheric correction algorithm, as well as those from some existing NIR atmospheric correction algorithms based on several case studies. The scenes selected for case studies are obtained from two different satellite ocean color sensors, i.e., the Moderate Resolution Imaging Spectroradiometer (MODIS) on the satellite Aqua and the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (SNPP), with an emphasis on several turbid water regions in the world. The new approach has been shown to produce nLw(λ) spectra most consistent with the SWIR results among all NIR algorithms. Furthermore, validations against the in situ measurements also show that in less turbid water regions the new approach produces reasonable and similar results comparable to the current operational algorithm. In addition, by combining the new NIR atmospheric correction with the SWIR-based approach, the new NIR-SWIR atmospheric correction can produce further improved ocean color products. The new NIR atmospheric correction can be implemented in a global operational satellite ocean color data processing system. PMID:25321543

  12. Comparison of atmospheric correction algorithms for the Coastal Zone Color Scanner

    NASA Technical Reports Server (NTRS)

    Tanis, F. J.; Jain, S. C.

    1984-01-01

    Before Nimbus-7 Coastal Zone Color Scanner (CZCS) data can be used to distinguish between coastal water types, methods must be developed for the removal of spatial variations in aerosol path radiance, which can dominate the radiance measurements made by the satellite. An assessment is presently made of the ability of four different algorithms to quantitatively remove haze effects; each was adapted for the extraction of the required scene-dependent parameters during an initial pass through the data set. The CZCS correction algorithms considered are (1) the Gordon (1981, 1983) algorithm; (2) the Smith and Wilson (1981) iterative algorithm; (3) the pseudo-optical depth method; and (4) the residual component algorithm.

  13. Approximate string matching algorithms for limited-vocabulary OCR output correction

    NASA Astrophysics Data System (ADS)

    Lasko, Thomas A.; Hauser, Susan E.

    2000-12-01

    Five methods for matching words mistranslated by optical character recognition to their most likely match in a reference dictionary were tested on data from the archives of the National Library of Medicine. The methods, including an adaptation of the cross correlation algorithm, the generic edit distance algorithm, the edit distance algorithm with a probabilistic substitution matrix, Bayesian analysis, and Bayesian analysis on an actively thinned reference dictionary were implemented and their accuracy rates compared. Of the five, the Bayesian algorithm produced the most correct matches (87%), and had the advantage of producing scores that have a useful and practical interpretation.
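
    A minimal sketch of the edit-distance-with-substitution-matrix idea mentioned above, choosing for an OCR token the closest word in a small dictionary; the substitution costs and dictionary are illustrative placeholders, and this is not the NLM implementation or its Bayesian variant.

```python
# Hedged sketch: edit distance with a substitution-cost matrix for matching
# an OCR token to its most likely dictionary word.
def weighted_edit_distance(a, b, sub_cost=None, indel_cost=1.0):
    sub_cost = sub_cost or {}
    n, m = len(a), len(b)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * indel_cost
    for j in range(1, m + 1):
        d[0][j] = j * indel_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                subst = 0.0
            else:
                # Cheap substitutions for characters OCR commonly confuses.
                subst = sub_cost.get((a[i - 1], b[j - 1]), 1.0)
            d[i][j] = min(d[i - 1][j] + indel_cost,       # deletion
                          d[i][j - 1] + indel_cost,       # insertion
                          d[i - 1][j - 1] + subst)        # substitution / match
    return d[n][m]

confusions = {("1", "l"): 0.2, ("0", "o"): 0.2, ("5", "s"): 0.3}  # illustrative costs
dictionary = ["clinical", "cervical", "chemical"]
token = "c1inical"
print(min(dictionary, key=lambda w: weighted_edit_distance(token, w, confusions)))
```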

  14. An Algorithm to Atmospherically Correct Visible and Thermal Airborne Imagery

    NASA Technical Reports Server (NTRS)

    Rickman, Doug L.; Luvall, Jeffrey C.; Schiller, Stephen; Arnold, James E. (Technical Monitor)

    2000-01-01

    The program Watts implements a system of physically based models developed by the authors, described elsewhere, for the removal of atmospheric effects in multispectral imagery. The band range we treat covers the visible, near IR and the thermal IR. Input to the program begins with atmospheric models specifying transmittance and path radiance. The system also requires the sensor's spectral response curves and knowledge of the scanner's geometric definition. Radiometric characterization of the sensor during data acquisition is also necessary. While the authors contend that active calibration is critical for serious analytical efforts, we recognize that most remote sensing systems, either airborne or space borne, do not as yet attain that minimal level of sophistication. Therefore, Watts will also use semi-active calibration where necessary and available. All of the input is then reduced to common terms, in terms of the physical units. From this it is then practical to convert raw sensor readings into geophysically meaningful units. There are a large number of intricate details necessary to bring an algorithm of this type to fruition and to even use the program. Further, at this stage of development the authors are uncertain as to the optimal presentation or minimal analytical techniques which users of this type of software must have. Therefore, Watts permits users to break out and analyze the input in various ways. Implemented in REXX under OS/2, the program is designed with attention to the probability that it will be ported to other systems and other languages. Further, as it is in REXX, it is relatively simple for anyone who is literate in any computer language to open the code and modify it to meet their needs. The authors have employed Watts in their research addressing precision agriculture and the urban heat island.

  15. Currently Realizable Quantum Error Detection/Correction Algorithms for Superconducting Qubits

    NASA Astrophysics Data System (ADS)

    Keane, Kyle; Korotkov, Alexander N.

    2011-03-01

    We investigate the efficiency of simple quantum error correction/detection codes for zero-temperature energy relaxation. We show that standard repetitive codes are not effective for error correction of energy relaxation, but can be efficiently used for quantum error detection. Moreover, only two qubits are necessary for this purpose, in contrast to the minimum of three qubits needed for conventional error correction. We propose and analyze specific two-qubit algorithms for superconducting phase qubits, which are currently realizable and can demonstrate quantum error detection; each algorithm can also be used for quantum error correction of a specific known error. In particular, we analyze needed requirements on experimental parameters and calculate the expected fidelities for these experimental protocols. This work was supported by NSA and IARPA under ARO grant No. W911NF-10-1-0334.

  16. Reed-Solomon's algorithm and software for correcting errors in a text

    NASA Astrophysics Data System (ADS)

    Volivach, Oksana; Beletsky, Anatoly

    2011-10-01

    The purpose of this article is to describe the features, principles and process of encoding and decoding Reed-Solomon codes. The number and type of errors that can be corrected depend on the characteristics of the Reed-Solomon code. The paper illustrates and describes the ability of the working program to correct multiple errors in a text file. This program was developed in the C++ programming language to test the applicability of the developed algorithms; thus, the accuracy of the programmed algorithms has been tested. The article goes on to discuss the importance of this program and the need for digital encoding applications.

  17. Applications of a generalized pressure correction algorithm for flows in complicated geometries

    NASA Astrophysics Data System (ADS)

    Shyy, W.; Braaten, M. E.

    An overview is given of recent progress in developing a unified numerical algorithm capable of solving flow over a wide range of Mach and Reynolds numbers in complex geometries. The algorithm is based on the pressure correction method, combined treatment of the Cartesian and contravariant velocity components on arbitrary coordinates, and second-order accurate discretization. A number of two- and three-dimensional flow problems including the effects of electric currents, turbulence, combustion, multiple phases, and compressibility are presented to demonstrate the capability of the present algorithm. Some related technical issues, such as the skewness of the grid distribution and the promise of parallel computation, are also addressed.

  18. Assessment, Validation, and Refinement of the Atmospheric Correction Algorithm for the Ocean Color Sensors. Chapter 19

    NASA Technical Reports Server (NTRS)

    Wang, Menghua

    2003-01-01

    The primary focus of this proposed research is the evaluation and development of the atmospheric correction algorithm and satellite sensor calibration and characterization. It is well known that the atmospheric correction, which removes more than 90% of the sensor-measured signal contributed by the atmosphere in the visible, is the key procedure in ocean color remote sensing (Gordon and Wang, 1994). The accuracy and effectiveness of the atmospheric correction directly affect the remotely retrieved ocean bio-optical products. On the other hand, for ocean color remote sensing, in order to obtain the required accuracy in the derived water-leaving signals from satellite measurements, an on-orbit vicarious calibration of the whole system, i.e., sensor and algorithms, is necessary. In addition, it is important to address issues of (i) cross-calibration of two or more sensors and (ii) in-orbit vicarious calibration of the sensor-atmosphere system. The goal of these researches is to develop methods for meaningful comparison and possible merging of data products from multiple ocean color missions. In the past year, much effort has been devoted to (a) understanding and correcting the artifacts that appeared in the SeaWiFS-derived ocean and atmospheric products; (b) developing an efficient method for generating the SeaWiFS aerosol lookup tables; (c) evaluating the effects of calibration error in the near-infrared (NIR) band on the atmospheric correction of ocean color remote sensors; (d) comparing the aerosol correction algorithm using the single-scattering epsilon (the current SeaWiFS algorithm) vs. the multiple-scattering epsilon method; and (e) continuing activities for the International Ocean-Colour Coordinating Group (IOCCG) atmospheric correction working group. In this report, I will briefly present and discuss these and some other research activities.

  19. Multiprocessing and Correction Algorithm of 3D-models for Additive Manufacturing

    NASA Astrophysics Data System (ADS)

    Anamova, R. R.; Zelenov, S. V.; Kuprikov, M. U.; Ripetskiy, A. V.

    2016-07-01

    This article addresses matters related to additive manufacturing preparation. A layer-by-layer model presentation was developed on the basis of a routing method. Methods for correction of errors in the layer-by-layer model presentation were developed. A multiprocessing algorithm for forming an additive manufacturing batch file was realized.

  20. [Validation and analysis of water column correction algorithm at Sanya Bay].

    PubMed

    Yang, Chao-yu; Yang, Ding-tian; Ye, Hai-bin; Cao, Wen-xi

    2011-07-01

    Water column correction has been a substantial challenge for remote sensing. In order to improve the accuracy of coastal ocean monitoring where optical properties are complex, the optical properties of shallow water at Sanya Bay and suitable water column correction algorithms were studied in the present paper. The authors extracted the bottom reflectance, free of water column effects, by using a water column correction algorithm based on the simulation of the underwater light field in idealized water, and compared the results calculated by this model and by Christian's model. Based on a detailed analysis, we concluded that, because the optical properties of Sanya Bay are complex and vary greatly with location, Christian's model loses its advantage in this area. Conversely, the bottom reflectance calculated by the algorithm based on the simulation of the underwater light field in idealized water agreed well with the in situ measured bottom reflectance, although the calculated reflectance was lower than the in situ measured value between 400 and 500 nm. So, it is reasonable to extract bottom information by using this water column correction algorithm in local bay areas where optical properties are complex. PMID:21942050

  1. Verification of the ASTER/TIR atmospheric correction algorithm based on water surface emissivity retrieved

    NASA Astrophysics Data System (ADS)

    Tonooka, Hideyuki; Palluconi, Frank D.

    2002-02-01

    The standard atmospheric correction algorithm for the five thermal infrared (TIR) bands of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is currently based on radiative transfer computations with global assimilation data on a pixel-by-pixel basis. In the present paper, we verify this algorithm using 100 ASTER scenes globally acquired during the early mission period. In this verification, the max-min difference (MMD) of the water surface emissivity retrieved from each scene is used as an atmospheric correction error index, since the water surface emissivity is well known; if the retrieved MMD is large, the atmospheric correction error will also likely be large. As a result, the error in the MMD retrieved by the standard atmospheric correction algorithm and a typical temperature/emissivity separation algorithm is shown to be strongly related to precipitable water vapor, latitude, elevation, and surface temperature. It is also noted that the expected error in the retrieved MMD is 0.05 for a precipitable water vapor of 3 cm.

  2. Station Correction Uncertainty in Multiple Event Location Algorithms and the Effect on Error Ellipses

    SciTech Connect

    Erickson, Jason P.; Carlson, Deborah K.; Ortiz, Anne; Hutchenson, Kevin; Oweisny, Linda; Kraft, Gordon; Anderson, Dale N.; Tinker, Mark

    2003-10-30

Accurate location of seismic events is crucial for nuclear explosion monitoring. There are several sources of error in seismic location that must be taken into account to obtain high confidence results. Most location techniques account for uncertainties in the phase arrival times (measurement error) and the bias of the velocity model (model error), but they do not account for the uncertainty of the velocity model bias. By determining and incorporating this uncertainty in the location algorithm we seek to improve the accuracy of the calculated locations and uncertainty ellipses. In order to correct for deficiencies in the velocity model, it is necessary to apply station specific corrections to the predicted arrival times. Both master event and multiple event location techniques assume that the station corrections are known perfectly, when in reality there is an uncertainty associated with these corrections. For multiple event location algorithms that calculate station corrections as part of the inversion, it is possible to determine the variance of the corrections. The variance can then be used to weight the arrivals associated with each station, thereby giving more influence to stations with consistent corrections. We have modified an existing multiple event location program (based on PMEL, Pavlis and Booker, 1983). We are exploring weighting arrivals with the inverse of the station correction standard deviation as well as using the conditional probability of the calculated station corrections. This is in addition to the weighting already given to the measurement and modeling error terms. We re-locate a group of mining explosions that occurred at Black Thunder, Wyoming, and compare the results to those generated without accounting for station correction uncertainty.
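
    The inverse-variance weighting idea described above can be illustrated with a few lines of code; the sketch below combines an assumed measurement uncertainty with each station correction's standard deviation to weight travel-time residuals, and is not the modified PMEL implementation.

```python
# Inverse-variance weighting of travel-time residuals (illustrative values only).
import numpy as np

residuals  = np.array([0.35, -0.10, 0.22, 0.05])   # residuals at four stations (s)
corr_sigma = np.array([0.05, 0.30, 0.10, 0.15])    # std. dev. of each station correction (s)
meas_sigma = 0.08                                   # assumed pick (measurement) uncertainty (s)

# combine measurement and correction uncertainty, then weight each arrival
weights = 1.0 / (meas_sigma**2 + corr_sigma**2)
weighted_mean = np.sum(weights * residuals) / np.sum(weights)
print(f"weighted mean residual: {weighted_mean:.3f} s")
```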

  3. Distortion correction algorithm for UAV remote sensing image based on CUDA

    NASA Astrophysics Data System (ADS)

    Wenhao, Zhang; Yingcheng, Li; Delong, Li; Changsheng, Teng; Jin, Liu

    2014-03-01

In China, natural disasters are characterized by wide distribution, severe destruction and high impact range, and they cause significant property damage and casualties every year. Following a disaster, timely and accurate acquisition of geospatial information can provide an important basis for disaster assessment, emergency relief, and reconstruction. In recent years, Unmanned Aerial Vehicle (UAV) remote sensing systems have played an important role in major natural disasters, with UAVs becoming an important technique for obtaining disaster information. UAVs are equipped with non-metric digital cameras whose lens distortion causes significant geometric deformation in the acquired images and affects the accuracy of subsequent processing. The slow speed of the traditional CPU-based distortion correction algorithm cannot meet the requirements of disaster emergencies. Therefore, we propose a Compute Unified Device Architecture (CUDA)-based image distortion correction algorithm for UAV remote sensing, which takes advantage of the powerful parallel processing capability of the GPU, greatly improving the efficiency of distortion correction. Our experiments show that, excluding image loading and saving times, the proposed algorithm achieves a maximum speedup of 58 times over the traditional CPU algorithm. Data processing time can thus be reduced by one to two hours, considerably improving disaster emergency response capability.
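
    The sketch below shows, in NumPy rather than CUDA, the kind of per-pixel mapping such a distortion correction evaluates (here a Brown-type radial model with nearest-neighbor resampling); the coefficients and image are illustrative only, and the GPU parallelization of the paper is not reproduced.

```python
# Per-pixel radial distortion correction (Brown model with k1, k2), vectorized
# with NumPy. A CUDA kernel would evaluate the same mapping once per pixel.
import numpy as np

def undistort(img, k1, k2, cx, cy, f):
    h, w = img.shape
    v, u = np.mgrid[0:h, 0:w].astype(np.float64)
    x, y = (u - cx) / f, (v - cy) / f                 # normalized coordinates
    r2 = x**2 + y**2
    scale = 1.0 + k1 * r2 + k2 * r2**2                # radial distortion factor
    src_u = np.clip(x * scale * f + cx, 0, w - 1)     # where each output pixel
    src_v = np.clip(y * scale * f + cy, 0, h - 1)     # samples the input image
    return img[src_v.round().astype(int), src_u.round().astype(int)]

img = (np.random.rand(480, 640) * 255).astype(np.uint8)
corrected = undistort(img, k1=-0.12, k2=0.02, cx=320, cy=240, f=600)
```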

  4. FORTRAN algorithm for correcting normal resistivity logs for borehole diameter and mud resistivity

    SciTech Connect

    Scott, J H

    1983-01-01

    The FORTRAN algorithm described was developed for applying corrections to normal resistivity logs of any electrode spacing for the effects of drilling mud of known resistivity in boreholes of variable diameter. The corrections are based on Schlumberger departure curves that are applicable to normal logs made with a standard Schlumberger electric logging probe with an electrode diameter of 8.5 cm (3.35 in). The FORTRAN algorithm has been generalized to accommodate logs made with other probes with different electrode diameters. Two simplifying assumptions used by Schlumberger in developing the departure curves also apply to the algorithm: (1) bed thickness is assumed to be infinite (at least 10 times larger than the electrode spacing), and (2) invasion of drilling mud into the formation is assumed to be negligible.

  5. A FORTRAN algorithm for correcting normal resistivity logs for borehole diameter and mud resistivity

    USGS Publications Warehouse

    Scott, James Henry

    1978-01-01

    The FORTRAN algorithm described in this report was developed for applying corrections to normal resistivity logs of any electrode spacing for the effects of drilling mud of known resistivity in boreholes of variable diameter. The corrections are based on Schlumberger departure curves that are applicable to normal logs made with a standard Schlumberger electric logging probe with an electrode diameter of 8.5 cm (3.35 in). The FORTRAN algorithm has been generalized to accommodate logs made with other probes with different electrode diameters. Two simplifying assumptions used by Schlumberger in developing the departure curves also apply to the algorithm: (1) bed thickness is assumed to be infinite (at least 10 times larger than the electrode spacing), and (2) invasion of drilling mud into the formation is assumed to be negligible. * The use of a trade name does not necessarily constitute endorsement by the U.S. Geological Survey.

  6. A scene-based nonuniformity correction algorithm based on fuzzy logic

    NASA Astrophysics Data System (ADS)

    Huang, Jun; Ma, Yong; Fan, Fan; Mei, Xiaoguang; Liu, Zhe

    2015-08-01

Scene-based nonuniformity correction algorithms based on the LMS adaptive filter are quite efficient at reducing the fixed pattern noise in infrared images, and are valued for their low computation and storage requirements. Unfortunately, ghosting artifacts can easily be introduced in edge areas when the inter-frame motion slows. In this paper, a gated scene-based nonuniformity correction algorithm is proposed. A novel low-pass filter based on fuzzy logic is proposed to estimate the true scene radiation as the desired signal in the LMS adaptive filter. The fuzzy logic can also evaluate the probability that a pixel and its neighbors belong to edge areas, so that the update of the correction parameters for pixels in edge areas can be gated. The experimental results show that our method is reliable and that ghosting artifacts are reduced.
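
    A schematic version of the LMS update with gating is sketched below; the fuzzy-logic estimator of the paper is replaced by a simple box filter and a hard gradient threshold, so this is only an illustration of the algorithm's structure.

```python
# Schematic gated LMS nonuniformity correction step: per-pixel gain/offset are
# updated toward a low-pass "true scene" estimate, with updates frozen on edges.
import numpy as np
from scipy.ndimage import uniform_filter

def lms_nuc_step(frame, gain, offset, mu=1e-6, edge_thresh=8.0):
    corrected = gain * frame + offset
    desired = uniform_filter(corrected, size=5)           # crude true-scene estimate
    error = desired - corrected
    gy, gx = np.gradient(corrected)
    gate = np.hypot(gx, gy) < edge_thresh                 # gate: skip updates near edges
    gain   += 2.0 * mu * error * frame * gate
    offset += 2.0 * mu * error * gate
    return corrected, gain, offset

frames = np.random.rand(10, 128, 128) * 1000              # synthetic image sequence
gain, offset = np.ones((128, 128)), np.zeros((128, 128))
for f in frames:
    out, gain, offset = lms_nuc_step(f, gain, offset)
```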

  7. A correction-based dose calculation algorithm for kilovoltage x rays

    SciTech Connect

    Ding, George X.; Pawlowski, Jason M.; Coffey, Charles W.

    2008-12-15

Frequent and repeated imaging procedures such as those performed in image-guided radiotherapy (IGRT) programs may add significant dose to radiosensitive organs of radiotherapy patients. It has been shown that kV-CBCT results in doses to bone that are up to a factor of 3-4 higher than those in surrounding soft tissue. Imaging guidance procedures are necessary due to their potential benefits, but the additional incremental dose per treatment fraction may exceed an individual organ tolerance. Hence it is important to manage and account for this additional dose from imaging for radiotherapy patients. Currently available model-based dose calculation methods in radiation treatment planning (RTP) systems are not suitable for low-energy x rays, and new and fast calculation algorithms are needed for an RTP system for kilovoltage dose computations. This study presents a new dose calculation algorithm, referred to as the medium-dependent-correction (MDC) algorithm, for accurate patient dose calculation resulting from kilovoltage x rays. The accuracy of the new algorithm is validated against Monte Carlo calculations. The new algorithm overcomes the deficiency of existing density correction based algorithms in dose calculations for inhomogeneous media, especially for CT-based human volumetric images used in radiotherapy treatment planning.

  8. Evaluation of two Vaisala RS92 radiosonde solar radiative dry bias correction algorithms

    DOE PAGESBeta

    Dzambo, Andrew M.; Turner, David D.; Mlawer, Eli J.

    2016-04-12

Solar heating of the relative humidity (RH) probe on Vaisala RS92 radiosondes results in a large dry bias in the upper troposphere. Two different algorithms (Miloshevich et al., 2009, MILO hereafter; and Wang et al., 2013, WANG hereafter) have been designed to account for this solar radiative dry bias (SRDB). These corrections are markedly different with MILO adding up to 40 % more moisture to the original radiosonde profile than WANG; however, the impact of the two algorithms varies with height. The accuracy of these two algorithms is evaluated using three different approaches: a comparison of precipitable water vapor (PWV), downwelling radiative closure with a surface-based microwave radiometer at a high-altitude site (5.3 km m.s.l.), and upwelling radiative closure with the space-based Atmospheric Infrared Sounder (AIRS). The PWV computed from the uncorrected and corrected RH data is compared against PWV retrieved from ground-based microwave radiometers at tropical, midlatitude, and arctic sites. Although MILO generally adds more moisture to the original radiosonde profile in the upper troposphere compared to WANG, both corrections yield similar changes to the PWV, and the corrected data agree well with the ground-based retrievals. The two closure activities – done for clear-sky scenes – use the radiative transfer models MonoRTM and LBLRTM to compute radiance from the radiosonde profiles to compare against spectral observations. Both WANG- and MILO-corrected RHs are statistically better than original RH in all cases except for the driest 30 % of cases in the downwelling experiment, where both algorithms add too much water vapor to the original profile. In the upwelling experiment, the RH correction applied by the WANG vs. MILO algorithm is statistically different above 10 km for the driest 30 % of cases and above 8 km for the moistest 30 % of cases, suggesting that the MILO correction performs better than the WANG in clear-sky scenes

  10. Evaluation of two Vaisala RS92 radiosonde solar radiative dry bias correction algorithms

    NASA Astrophysics Data System (ADS)

    Dzambo, A. M.; Turner, D. D.; Mlawer, E. J.

    2015-10-01

    Solar heating of the relative humidity (RH) probe on Vaisala RS92 radiosondes results in a large dry bias in the upper troposphere. Two different algorithms (Miloshevich et al., 2009, MILO hereafter; and Wang et al., 2013, WANG hereafter) have been designed to account for this solar radiative dry bias (SRDB). These corrections are markedly different with MILO adding up to 40 % more moisture to the original radiosonde profile than WANG; however, the impact of the two algorithms varies with height. The accuracy of these two algorithms is evaluated using three different approaches: a comparison of precipitable water vapor (PWV), downwelling radiative closure with a surface-based microwave radiometer at a high-altitude site (5.3 km MSL), and upwelling radiative closure with the space-based Atmospheric Infrared Sounder (AIRS). The PWV computed from the uncorrected and corrected RH data is compared against PWV retrieved from ground-based microwave radiometers at tropical, mid-latitude, and arctic sites. Although MILO generally adds more moisture to the original radiosonde profile in the upper troposphere compared to WANG, both corrections yield similar changes to the PWV, and the corrected data agree well with the ground-based retrievals. The two closure activities - done for clear-sky scenes - use the radiative transfer models MonoRTM and LBLRTM to compute radiance from the radiosonde profiles to compare against spectral observations. Both WANG- and MILO-corrected RH are statistically better than original RH in all cases except for the driest 30 % of cases in the downwelling experiment, where both algorithms add too much water vapor to the original profile. In the upwelling experiment, the RH correction applied by the WANG vs. MILO algorithm is statistically different above 10 km for the driest 30 % of cases and above 8 km for the moistest 30 % of cases, suggesting that the MILO correction performs better than the WANG in clear-sky scenes. The cause of this statistical

  11. Evaluation of two Vaisala RS92 radiosonde solar radiative dry bias correction algorithms

    NASA Astrophysics Data System (ADS)

    Dzambo, Andrew M.; Turner, David D.; Mlawer, Eli J.

    2016-04-01

    Solar heating of the relative humidity (RH) probe on Vaisala RS92 radiosondes results in a large dry bias in the upper troposphere. Two different algorithms (Miloshevich et al., 2009, MILO hereafter; and Wang et al., 2013, WANG hereafter) have been designed to account for this solar radiative dry bias (SRDB). These corrections are markedly different with MILO adding up to 40 % more moisture to the original radiosonde profile than WANG; however, the impact of the two algorithms varies with height. The accuracy of these two algorithms is evaluated using three different approaches: a comparison of precipitable water vapor (PWV), downwelling radiative closure with a surface-based microwave radiometer at a high-altitude site (5.3 km m.s.l.), and upwelling radiative closure with the space-based Atmospheric Infrared Sounder (AIRS). The PWV computed from the uncorrected and corrected RH data is compared against PWV retrieved from ground-based microwave radiometers at tropical, midlatitude, and arctic sites. Although MILO generally adds more moisture to the original radiosonde profile in the upper troposphere compared to WANG, both corrections yield similar changes to the PWV, and the corrected data agree well with the ground-based retrievals. The two closure activities - done for clear-sky scenes - use the radiative transfer models MonoRTM and LBLRTM to compute radiance from the radiosonde profiles to compare against spectral observations. Both WANG- and MILO-corrected RHs are statistically better than original RH in all cases except for the driest 30 % of cases in the downwelling experiment, where both algorithms add too much water vapor to the original profile. In the upwelling experiment, the RH correction applied by the WANG vs. MILO algorithm is statistically different above 10 km for the driest 30 % of cases and above 8 km for the moistest 30 % of cases, suggesting that the MILO correction performs better than the WANG in clear-sky scenes. The cause of this

  12. The Design of Flux-Corrected Transport (FCT) Algorithms For Structured Grids

    NASA Astrophysics Data System (ADS)

    Zalesak, Steven T.

    A given flux-corrected transport (FCT) algorithm consists of three components: 1) a high order algorithm to which it reduces in smooth parts of the flow; 2) a low order algorithm to which it reduces in parts of the flow devoid of smoothness; and 3) a flux limiter which calculates the weights assigned to the high and low order fluxes in various regions of the flow field. One way of optimizing an FCT algorithm is to optimize each of these three components individually. We present some of the ideas that have been developed over the past 30 years toward this end. These include the use of very high order spatial operators in the design of the high order fluxes, non-clipping flux limiters, the appropriate choice of constraint variables in the critical flux-limiting step, and the implementation of a "failsafe" flux-limiting strategy.

  13. The design of flux-corrected transport (FCT) algorithms on structured grids

    NASA Astrophysics Data System (ADS)

    Zalesak, Steven T.

    2005-12-01

    A given flux-corrected transport (FCT) algorithm consists of three components: (1) a high order algorithm to which it reduces in smooth parts of the flow field; (2) a low order algorithm to which it reduces in parts of the flow devoid of smoothness; and (3) a flux limiter which calculates the weights assigned to the high and low order algorithms, in flux form, in the various regions of the flow field. In this dissertation, we describe a set of design principles that significantly enhance the accuracy and robustness of FCT algorithms by enhancing the accuracy and robustness of each of the three components individually. These principles include the use of very high order spatial operators in the design of the high order fluxes, the use of non-clipping flux limiters, the appropriate choice of constraint variables in the critical flux-limiting step, and the implementation of a "failsafe" flux-limiting strategy. We show via standard test problems the kind of algorithm performance one can expect if these design principles are adhered to. We give examples of applications of these design principles in several areas of physics. Finally, we compare the performance of these enhanced algorithms with that of other recent front-capturing methods.
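
    A textbook-style, one-dimensional illustration of the three components (high order flux, low order flux, and a Zalesak-type limiter) is sketched below for constant-velocity advection; it is not the enhanced design discussed in the paper.

```python
# 1D constant-velocity advection with a flux-corrected transport step:
# upwind (low order) flux, Lax-Wendroff (high order) flux, Zalesak-type limiter.
import numpy as np

def fct_step(u, c, dx, dt):
    nu = c * dt / dx                                   # Courant number (c > 0)
    up1 = np.roll(u, -1)

    # face fluxes F[i] live at the interface between cell i and cell i+1
    f_low  = c * u                                     # first-order upwind
    f_high = c * (u + 0.5 * (1.0 - nu) * (up1 - u))    # Lax-Wendroff
    A = f_high - f_low                                 # antidiffusive flux

    # low-order (monotone) transported solution
    u_td = u - (dt / dx) * (f_low - np.roll(f_low, 1))

    # allowed extrema from u and u_td in each cell and its neighbours
    m_hi, m_lo = np.maximum(u, u_td), np.minimum(u, u_td)
    u_max = np.maximum(m_hi, np.maximum(np.roll(m_hi, 1), np.roll(m_hi, -1)))
    u_min = np.minimum(m_lo, np.minimum(np.roll(m_lo, 1), np.roll(m_lo, -1)))

    A_in  = np.maximum(np.roll(A, 1), 0.0) - np.minimum(A, 0.0)   # flux into cell i
    A_out = np.maximum(A, 0.0) - np.minimum(np.roll(A, 1), 0.0)   # flux out of cell i
    eps = 1e-30
    R_plus  = np.minimum(1.0, (u_max - u_td) * dx / dt / (A_in + eps))
    R_minus = np.minimum(1.0, (u_td - u_min) * dx / dt / (A_out + eps))

    # limiter at face i uses the receiving/donating cell for each flux sign
    C = np.where(A >= 0.0,
                 np.minimum(np.roll(R_plus, -1), R_minus),
                 np.minimum(R_plus, np.roll(R_minus, -1)))
    return u_td - (dt / dx) * (C * A - np.roll(C * A, 1))

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)          # square pulse, periodic domain
for _ in range(100):
    u = fct_step(u, c=1.0, dx=x[1] - x[0], dt=0.004)
```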

  14. Experimental Evaluation of a Deformable Registration Algorithm for Motion Correction in PET-CT Guided Biopsy

    PubMed Central

    Khare, Rahul; Sala, Guillaume; Kinahan, Paul; Esposito, Giuseppe; Banovac, Filip; Cleary, Kevin; Enquobahrie, Andinet

    2015-01-01

Positron emission tomography computed tomography (PET-CT) images are increasingly being used for guidance during percutaneous biopsy. However, due to the physics of image acquisition, PET-CT images are susceptible to artifacts caused by respiratory and cardiac motion, leading to inaccurate tumor localization, shape distortion, and attenuation correction errors. To address these problems, we present a method for motion correction that relies on respiratory gated CT images aligned using a deformable registration algorithm. In this work, we use two deformable registration algorithms and two optimization approaches for registering the CT images obtained over the respiratory cycle. The two algorithms are the BSpline and the symmetric forces Demons registration. In the first optimization approach, CT images at each time point are registered to a single reference time point. In the second approach, deformation maps are obtained to align each CT time point with its adjacent time point. These deformations are then composed to find the deformation with respect to a reference time point. We evaluate these two algorithms and optimization approaches using respiratory gated CT images obtained from 7 patients. Our results show that, overall, the BSpline registration algorithm with the reference optimization approach gives the best results. PMID:25717283

  15. A Comparative Dosimetric Analysis of the Effect of Heterogeneity Corrections Used in Three Treatment Planning Algorithms

    NASA Astrophysics Data System (ADS)

    Herrick, Andrea Celeste

Successful treatment in radiation oncology relies on the evaluation of a plan for each individual patient based on delivering the maximum dose to the tumor while sparing the surrounding normal tissue (organs at risk) in the patient. Organs at risk (OAR) typically considered include the heart, the spinal cord, healthy lung tissue, and any other organ in the vicinity of the target that is not affected by the disease being treated. Depending on the location of the tumor and its proximity to these OARs, several plans may be created and evaluated in order to assess which "solution" most closely meets all of the specified criteria. In order to successfully review a treatment plan and take the correct course of action, a physician needs to rely on the computer model (treatment planning algorithm) of the dose distribution, reconstructed from CT scan data, to proceed with the plan that best achieves all of the goals. There are many available treatment planning systems from which a radiation oncology center can choose. While the radiation interactions considered are identical among clinics, the way the chosen algorithm handles these interactions can vary immensely. The goal of this study was to provide a comparison between two commonly used treatment planning systems (Pinnacle and Eclipse) and their associated dose calculation algorithms. In order to do this, heterogeneity correction models were evaluated via test plans, and the effects of going from a heterogeneity-uncorrected patient representation to a heterogeneity-corrected representation were studied. The results of this study indicate that the actual dose delivered to the patient varies greatly between treatment planning algorithms in areas of low density tissue such as the lungs. Although treatment planning algorithms attempt to arrive at the same result with heterogeneity corrections, the results depend strongly on the algorithm used in the situations studied. While the Anisotropic Analytic Method

  16. How far can we push quantum variational algorithms without error correction?

    NASA Astrophysics Data System (ADS)

    Babbush, Ryan

    Recent work has shown that parameterized short quantum circuits can generate powerful variational ansatze for ground states of classically intractable fermionic models. This talk will present numerical and experimental evidence that quantum variational algorithms are also robust to certain errors which plague the gate model. As the number of qubits in superconducting devices keeps increasing, their dynamics are becoming prohibitively expensive to simulate classically. Accordingly, our observations should inspire hope that quantum computers could provide useful insight into important problems in the near future. This talk will conclude by discussing future research directions which could elucidate the viability of executing quantum variational algorithms on classically intractable problems without error correction.

  17. Empirical evaluation of bias field correction algorithms for computer-aided detection of prostate cancer on T2w MRI

    NASA Astrophysics Data System (ADS)

    Viswanath, Satish; Palumbo, Daniel; Chappelow, Jonathan; Patel, Pratik; Bloch, B. Nicholas; Rofsky, Neil; Lenkinski, Robert; Genega, Elizabeth; Madabhushi, Anant

    2011-03-01

In magnetic resonance imaging (MRI), intensity inhomogeneity refers to an acquisition artifact which introduces a non-linear variation in the signal intensities within the image. Intensity inhomogeneity is known to significantly affect computerized analysis of MRI data (such as automated segmentation or classification procedures), hence requiring the application of bias field correction (BFC) algorithms to account for this artifact. Quantitative evaluation of BFC schemes is typically performed using generalized intensity-based measures (percent coefficient of variation, %CV) or information-theoretic measures (entropy). While some investigators have previously empirically compared BFC schemes in the context of different domains (using changes in %CV and entropy to quantify improvements), no consensus has emerged as to the best BFC scheme for any given application. The motivation for this work is that the choice of a BFC scheme for a given application should be dictated by application-specific measures rather than ad hoc measures such as entropy and %CV. In this paper, we have attempted to address the problem of determining an optimal BFC algorithm in the context of a computer-aided diagnosis (CAD) scheme for prostate cancer (CaP) detection from T2-weighted (T2w) MRI. One goal of this work is to identify a BFC algorithm that will maximize the CaP classification accuracy (measured in terms of the area under the ROC curve, or AUC). A secondary aim of our work is to determine whether measures such as %CV and entropy are correlated with a classifier-based objective measure (AUC). Determining the presence or absence of these correlations is important to understand whether domain-independent BFC performance measures such as %CV and entropy should be used to identify the optimal BFC scheme for any given application. In order to answer these questions, we quantitatively compared 3 different popular BFC algorithms on a cohort of 10 clinical 3 Tesla prostate T2w MRI datasets
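
    For reference, the two domain-independent measures mentioned above can be computed as in the sketch below; these are generic definitions of %CV and intensity entropy over a region of interest, assumed here for illustration rather than taken from the paper.

```python
# Generic %CV and intensity-entropy measures over a masked region of interest.
import numpy as np

def percent_cv(image, mask):
    vals = image[mask]
    return 100.0 * vals.std() / vals.mean()

def intensity_entropy(image, mask, bins=256):
    hist, _ = np.histogram(image[mask], bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))           # Shannon entropy in bits

img  = np.random.gamma(shape=4.0, scale=20.0, size=(128, 128))   # synthetic image
mask = np.ones_like(img, dtype=bool)
print(percent_cv(img, mask), intensity_entropy(img, mask))
```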

  18. Correction factor for ablation algorithms used in corneal refractive surgery with gaussian-profile beams

    NASA Astrophysics Data System (ADS)

    Jimenez, Jose Ramón; González Anera, Rosario; Jiménez del Barco, Luis; Hita, Enrique; Pérez-Ocón, Francisco

    2005-01-01

We provide a correction factor to be added in ablation algorithms when a Gaussian beam is used in photorefractive laser surgery. This factor, which quantifies the effect of pulse overlapping, depends on beam radius and spot size. We also deduce the expected post-surgical corneal radius and asphericity when this factor is considered. Data from 141 eyes operated on with LASIK (laser in situ keratomileusis) using a Gaussian profile show that the discrepancy between experimental and expected corneal power is significantly lower when the correction factor is used. For an effective improvement of post-surgical visual quality, this factor should be applied in ablation algorithms that do not consider the effects of pulse overlapping with a Gaussian beam.

  19. Closed loop, DM diversity-based, wavefront correction algorithm for high contrast imaging systems.

    PubMed

    Give'on, Amir; Belikov, Ruslan; Shaklan, Stuart; Kasdin, Jeremy

    2007-09-17

High contrast imaging from space relies on coronagraphs to limit diffraction and a wavefront control system to compensate for imperfections in both the telescope optics and the coronagraph. The extreme contrast required (up to 10^-10 for terrestrial planets) puts severe requirements on the wavefront control system, as the achievable contrast is limited by the quality of the wavefront. This paper presents a general closed loop correction algorithm for high contrast imaging coronagraphs that minimizes the energy in a predefined region in the image where terrestrial planets could be found. The estimation part of the algorithm reconstructs the complex field in the image plane using phase diversity caused by the deformable mirror. This method has been shown to achieve faster and better correction than classical speckle nulling. PMID:19547602

  20. Performance evaluation of operational atmospheric correction algorithms over the East China Seas

    NASA Astrophysics Data System (ADS)

    He, Shuangyan; He, Mingxia; Fischer, Jürgen

    2016-04-01

To acquire high-quality operational data products for Chinese in-orbit and scheduled ocean color sensors, the performances of two operational atmospheric correction (AC) algorithms (ESA MEGS 7.4.1 and NASA SeaDAS 6.1) were evaluated over the East China Seas (ECS) using MERIS data. The spectral remote sensing reflectance Rrs(λ), aerosol optical thickness (AOT), and Ångström exponent (α) retrieved using the two algorithms were validated using in situ measurements obtained between May 2002 and October 2009. Match-ups of Rrs, AOT, and α between the in situ and MERIS data were obtained through strict exclusion criteria. Statistical analysis of Rrs(λ) showed a mean percentage difference (MPD) of 9%-13% in the 490-560 nm spectral range, and significant overestimation was observed at 413 nm (MPD>72%). The AOTs were overestimated (MPD>32%), and although the ESA algorithm outperformed the NASA algorithm in the blue-green bands, the situation was reversed in the red-near-infrared bands. The value of α was obviously underestimated by the ESA algorithm (MPD=41%) but not by the NASA algorithm (MPD=35%). To clarify why the NASA algorithm performed better in the retrieval of α, density scatter plots of α versus single scattering albedo (SSA) were prepared. These α-SSA density scatter plots showed that the applicability of the aerosol models used by the NASA algorithm over the ECS is better than that used by the ESA algorithm, although neither aerosol model is suitable for the ECS region. The results of this study provide a reference to both data users and data agencies regarding the use of operational data products and the investigation into the improvement of current AC schemes over the ECS.
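
    The mean percentage difference statistic used above can be computed as in the following sketch; the exact formula is not given in the abstract, so the common definition mean(|retrieved - in situ| / in situ) is assumed here, and the match-up values are invented.

```python
# Mean percentage difference (MPD) between retrieved and in situ values
# (assumed definition; illustrative numbers only).
import numpy as np

def mpd(retrieved, in_situ):
    return 100.0 * np.mean(np.abs(retrieved - in_situ) / in_situ)

rrs_insitu = np.array([0.0052, 0.0061, 0.0043, 0.0070])   # sr^-1, made up
rrs_meris  = np.array([0.0058, 0.0066, 0.0050, 0.0064])
print(f"MPD = {mpd(rrs_meris, rrs_insitu):.1f} %")
```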

  1. A simplified algorithm for correcting both errors and erasures of R-S codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1978-01-01

Using the finite field transform and continued fractions, a simplified algorithm for decoding Reed-Solomon (R-S) codes is developed to correct erasures caused by other codes as well as errors over the finite field GF(q^m), where q is a prime and m is an integer. Such an R-S decoder can be faster and simpler than a decoder that uses more conventional methods.

  2. Simplified algorithm for correcting both errors and erasures of Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.; Miller, R. L.

    1979-01-01

Using a finite-field transform, a simplified algorithm for decoding Reed-Solomon codes is developed to correct erasures as well as errors over the finite field GF(q^m), where q is a prime and m is an integer. If the finite-field transform is a fast transform, this decoder can be faster and simpler than a decoder that uses more conventional methods.
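
    As a toy illustration of erasure correction only (errors and the transform/continued-fraction machinery of the paper are not reproduced), the sketch below treats a Reed-Solomon code as an evaluation code over the prime field GF(929), i.e. m = 1, and recovers erased symbols by Lagrange interpolation through the surviving ones.

```python
# Erasure-only correction for an (n, k) Reed-Solomon evaluation code over GF(929).
q, n, k = 929, 10, 4                       # field size, code length, message length

def poly_eval(coeffs, x):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % q
    return acc

def encode(msg):
    return [poly_eval(msg, x) for x in range(1, n + 1)]

def decode_erasures(received):
    """received[i] is None where the symbol was erased (positions known)."""
    pts = [(x + 1, y) for x, y in enumerate(received) if y is not None][:k]
    def codeword_at(x0):                   # Lagrange interpolation at position x0
        total = 0
        for i, (xi, yi) in enumerate(pts):
            num, den = 1, 1
            for j, (xj, _) in enumerate(pts):
                if i != j:
                    num = num * (x0 - xj) % q
                    den = den * (xi - xj) % q
            total = (total + yi * num * pow(den, -1, q)) % q
        return total
    return [codeword_at(x) for x in range(1, n + 1)]

msg = [3, 14, 15, 92]
word = encode(msg)
word[2] = word[7] = None                    # two erasures (up to n - k are correctable)
assert decode_erasures(word) == encode(msg)
```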

  3. Evaluation and Analysis of Seasat a Scanning Multichannel Microwave Radiometer (SMMR) Antenna Pattern Correction (APC) Algorithm

    NASA Technical Reports Server (NTRS)

    Kitzis, S. N.; Kitzis, J. L.

    1979-01-01

    The accuracy of the SEASAT-A SMMR antenna pattern correction (APC) algorithm was assessed. Interim APC brightness temperature measurements for the SMMR 6.6 GHz channels are compared with surface truth derived sea surface temperatures. Plots and associated statistics are presented for SEASAT-A SMMR data acquired for the Gulf of Alaska experiment. The cross-track gradients observed in the 6.6 GHz brightness temperature data are discussed.

  4. [A quick atmospheric correction method for HJ-1 CCD with the deep blue algorithm].

    PubMed

    Wang, Zhong-Ting; Wang, Hong-Mei; Li, Qing; Zhao, Shao-Hua; Li, Shen-Shen; Chen, Liang-Fu

    2014-03-01

In the present paper, a new atmospheric correction method for the HJ-1 CCD camera was developed that can be used over vegetation, soil, and similar surfaces. The method retrieves aerosol optical depth (AOD) with the deep blue algorithm developed by Hsu et al., assisted by a MODerate-resolution Imaging Spectroradiometer (MODIS) surface reflectance database, applies a bidirectional reflectance distribution function (BRDF) correction with a kernel-driven model, and calculates the viewing geometry from auxiliary data. When the CCD data are processed to correct the atmospheric influence, the correction is completed quickly using a look-up table (LUT) and bilinear interpolation, with grid calculation of atmospheric parameters and matrix operations in the Interactive Data Language (IDL). An experiment over the North China Plain on July 3rd, 2012 shows that our method corrects the atmospheric influence well and quickly (one CCD image of 1 GB can be corrected in eight minutes), and that the corrected reflectance over vegetation and soil is close to the spectra of vegetation and soil. A comparison with the MODIS reflectance product shows that, owing to its higher resolution, the corrected HJ-1 reflectance image is finer than that of MODIS, and the correlation coefficient of the reflectance over typical surfaces is greater than 0.9. Error analysis shows that misidentification of the aerosol type leads to an absolute error of 0.05 in the near-infrared surface reflectance, larger than that in the visible bands, and that a 0.02 error in the reflectance database leads to an absolute error of 0.01 in the atmospherically corrected surface reflectance in the green and red bands. PMID:25208402
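
    The look-up-table step described above can be illustrated as in the sketch below, where atmospheric parameters pre-computed on a coarse angular grid are bilinearly interpolated to each pixel's geometry; the grid, its values, and the final subtraction step are simplifications assumed for illustration (a real LUT would carry more dimensions, such as AOD, wavelength, and azimuth).

```python
# Bilinear interpolation of a coarse atmospheric-parameter LUT to pixel geometry.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

sza_grid = np.array([0.0, 20.0, 40.0, 60.0])          # solar zenith angles (deg)
vza_grid = np.array([0.0, 15.0, 30.0, 45.0])          # view zenith angles (deg)
path_reflectance = np.array([[0.020, 0.022, 0.026, 0.031],
                             [0.022, 0.024, 0.028, 0.034],
                             [0.027, 0.029, 0.034, 0.041],
                             [0.035, 0.038, 0.044, 0.053]])   # invented LUT values

lut = RegularGridInterpolator((sza_grid, vza_grid), path_reflectance)

sza_px = np.array([23.5, 31.2, 47.8])                 # per-pixel geometry
vza_px = np.array([ 5.1, 22.4, 38.0])
rho_path = lut(np.column_stack([sza_px, vza_px]))     # interpolated atmospheric term
rho_toa  = np.array([0.085, 0.120, 0.140])            # observed TOA reflectance
rho_surf = rho_toa - rho_path                         # simplified correction step
print(rho_surf)
```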

  5. Intensity normalization and automatic gain control correction of airborne LiDAR data for classifying a rangeland ecosystem

    NASA Astrophysics Data System (ADS)

    Shrestha, R.; Glenn, N. F.; Spaete, L.; Mitchell, J.

    2011-12-01

Airborne LiDAR records not only elevation but also the intensity, or amplitude, of the returning light beam. LiDAR intensity information can be useful for many applications, including landcover classification. Intensity is directly associated with the reflectance of the target surface and can be influenced by factors such as flying altitude and sensor settings. LiDAR intensity data must therefore be calibrated before use, and this is especially important for multi-temporal studies where differing flight conditions can introduce additional variation. Some sensors, such as the Leica ALS50 Phase II, also record automatic gain control (AGC) values, which control the gain of the LiDAR signal and allow information to be captured from low-reflectance surfaces. We demonstrate a post-processing method for calibrating intensity using airborne LiDAR data collected over a sage-steppe ecosystem in southeastern Idaho, USA. Range normalization with respect to the sensor-to-object distance is performed using smoothed best estimate of trajectory information collected at one-second intervals. Optimal parameters for calibrating the AGC data are determined by collecting spectral reference data at the time of the overflights in test areas with homogeneous backscatter properties. The intensity calibration results are compared with vendor-corrected intensity data and used to perform landcover classification with the Random Forests method. We also test this intensity calibration approach using a separate multi-temporal LiDAR data set collected by the same sensor.
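
    The range normalization mentioned above is commonly implemented as a quadratic range correction, and the AGC effect is often approximated as a linear gain; the sketch below assumes both forms with invented coefficients, purely for illustration.

```python
# Range and AGC normalization of LiDAR intensity (assumed quadratic range model
# and linear AGC model; coefficients and values are illustrative only).
import numpy as np

def normalize_intensity(intensity, rng, agc, ref_range=1000.0, a=1.0, b=0.0):
    i_range = intensity * (rng / ref_range) ** 2        # remove 1/R^2 falloff
    return i_range / (a * agc + b)                      # undo receiver gain

intensity = np.array([112.0, 87.0, 140.0])              # raw return amplitudes
rng       = np.array([1180.0, 950.0, 1420.0])           # sensor-to-target range (m)
agc       = np.array([130.0, 142.0, 118.0])             # recorded AGC values
print(normalize_intensity(intensity, rng, agc, a=0.01, b=0.2))
```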

  6. Cardamine occulta, the correct species name for invasive Asian plants previously classified as C. flexuosa, and its occurrence in Europe

    PubMed Central

    Marhold, Karol; Šlenker, Marek; Kudoh, Hiroshi; Zozomová-Lihová, Judita

    2016-01-01

The nomenclature of Eastern Asian populations traditionally assigned to Cardamine flexuosa has remained unresolved since 2006, when they were found to be distinct from the European species Cardamine flexuosa. Apart from the informal designation "Asian Cardamine flexuosa", this taxon has also been reported under the names Cardamine flexuosa subsp. debilis or Cardamine hamiltonii. Here we determine its correct species name to be Cardamine occulta and present a nomenclatural survey of all relevant species names. A lectotype and epitype for Cardamine occulta and a neotype for the illegitimate name Cardamine debilis (replaced by Cardamine flexuosa subsp. debilis and Cardamine hamiltonii) are designated here. Cardamine occulta is a polyploid weed that most likely originated in Eastern Asia, but it has also been introduced to other continents, including Europe. Here, data are presented on the first records of this invasive species in European countries. The first known record for Europe was made in Spain in 1993, and since then its occurrence has been reported from a number of European countries and regions, growing in irrigated anthropogenic habitats such as paddy fields or flower beds, and exceptionally also in natural communities such as lake shores. PMID:27212882

  7. The Design of Flux-Corrected Transport (FCT) Algorithms for Structured Grids

    NASA Astrophysics Data System (ADS)

    Zalesak, Steven T.

    A given flux-corrected transport (FCT) algorithm consists of three components: (1) a high order algorithm to which it reduces in smooth parts of the flow; (2) a low order algorithm to which it reduces in parts of the flow devoid of smoothness; and (3) a flux limiter which calculates the weights assigned to the high and low order fluxes in various regions of the flow field. One way of optimizing an FCT algorithm is to optimize each of these three components individually. We present some of the ideas that have been developed over the past 30 years toward this end. These include the use of very high order spatial operators in the design of the high order fluxes, non-clipping flux limiters, the appropriate choice of constraint variables in the critical flux-limiting step, and the implementation of a "failsafe" flux-limiting strategy. This chapter confines itself to the design of FCT algorithms for structured grids, using a finite volume formalism, for this is the area with which the present author is most familiar. The reader will find excellent material on the design of FCT algorithms for unstructured grids, using both finite volume and finite element formalisms, in the chapters by Professors Löhner, Baum, Kuzmin, Turek, and Möller in the present volume.

  8. Reconstruction algorithm for polychromatic CT imaging: application to beam hardening correction

    NASA Technical Reports Server (NTRS)

    Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.

    2000-01-01

This paper presents a new reconstruction algorithm for both single- and dual-energy computed tomography (CT) imaging. By incorporating the polychromatic characteristics of the X-ray beam into the reconstruction process, the algorithm is capable of eliminating beam hardening artifacts. The single energy version of the algorithm assumes that each voxel in the scan field can be expressed as a mixture of two known substances, for example, a mixture of trabecular bone and marrow, or a mixture of fat and flesh. These assumptions are easily satisfied in a quantitative computed tomography (QCT) setting. We have compared our algorithm to three commonly used single-energy correction techniques. Experimental results show that our algorithm is much more robust and accurate. We have also shown that QCT measurements obtained using our algorithm are five times more accurate than those from current QCT systems (using calibration). The dual-energy mode does not require any prior knowledge of the object in the scan field, and can be used to estimate the attenuation coefficient function of unknown materials. We have tested the dual-energy setup to obtain an accurate estimate for the attenuation coefficient function of K2HPO4 solution.

  9. A Fast Overlapping Community Detection Algorithm with Self-Correcting Ability

    PubMed Central

    Lu, Nan

    2014-01-01

Owing to the shortcomings of existing modularity measures, this paper defines a weighted modularity based on density and cohesion as a new evaluation measure. Since the proportion of overlapping nodes in a network is very low, the number of repeat visits to nodes can be reduced by marking the vertices that have overlapping attributes. We propose three test conditions for overlapping nodes and present a fast overlapping community detection algorithm with self-correcting ability, which is decomposed into two processes. Under the control of the overlapping properties, the complexity of the algorithm tends to be approximately linear. We also give a new interpretation of the membership vector, and we improve the bridgeness function that evaluates the extent to which nodes overlap. Finally, we conduct experiments on three networks with well-known community structures, and the results verify the feasibility and effectiveness of our algorithm. PMID:24757434

  10. A baseline correction algorithm for Raman spectroscopy by adaptive knots B-spline

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Fan, Xian-guang; Xu, Ying-jie; Wang, Xiu-fen; He, Hao; Zuo, Yong

    2015-11-01

Raman spectroscopy is a powerful and non-invasive technique for molecular fingerprint detection which has been widely used in many areas, such as food safety, drug safety, and environmental testing. However, Raman signals can easily be corrupted by a fluorescent background, so in this paper we present a baseline correction algorithm to suppress it. In this algorithm, the background of the Raman signal is suppressed by fitting a curve, called a baseline, using a cyclic approximation method. Instead of traditional polynomial fitting, we use a B-spline as the fitting function because of its low order and smoothness, which effectively avoid under-fitting and over-fitting. In addition, we present an automatic adaptive knot generation method to replace traditional uniform knots. The algorithm achieves the desired performance for most Raman spectra with varying baselines without any user input or preprocessing step. In the simulation, three kinds of fluorescent background lines were introduced to test the effectiveness of the proposed method. Two real Raman spectra (parathion-methyl and colza oil) were also processed, and their baselines were corrected by the proposed method.
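
    In the same spirit as the method described above, the sketch below estimates a baseline by iteratively fitting a cubic spline and clamping points that rise above the fit; it uses uniform interior knots and therefore does not reproduce the paper's adaptive knot placement.

```python
# Iterative spline-based baseline estimation: fit, clamp points above the fit
# (Raman peaks), and refit until the baseline stabilizes.
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def spline_baseline(x, y, n_knots=8, n_iter=20):
    work = y.copy()
    knots = np.linspace(x[0], x[-1], n_knots + 2)[1:-1]     # interior knots only
    for _ in range(n_iter):
        spline = LSQUnivariateSpline(x, work, knots, k=3)
        fit = spline(x)
        work = np.minimum(work, fit)                         # suppress peaks
    return fit

x = np.linspace(400.0, 1800.0, 1200)                         # Raman shift (cm^-1)
baseline_true = 0.002 * (x - 400.0) + 50.0 * np.exp(-x / 900.0)
peaks = 8.0 * np.exp(-0.5 * ((x - 1000.0) / 6.0) ** 2)
y = baseline_true + peaks + np.random.normal(0.0, 0.05, x.size)
corrected = y - spline_baseline(x, y)
```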

  11. Correcting encoder interpolation error on the Green Bank Telescope using an iterative model based identification algorithm

    NASA Astrophysics Data System (ADS)

    Franke, Timothy; Weadon, Tim; Ford, John; Garcia-Sanz, Mario

    2015-10-01

    Various forms of measurement errors limit telescope tracking performance in practice. A new method for identifying the correcting coefficients for encoder interpolation error is developed. The algorithm corrects the encoder measurement by identifying a harmonic model of the system and using that model to compute the necessary correction parameters. The approach improves upon others by explicitly modeling the unknown dynamics of the structure and controller and by not requiring a separate system identification to be performed. Experience gained from pin-pointing the source of encoder error on the Green Bank Radio Telescope (GBT) is presented. Several tell-tale indicators of encoder error are discussed. Experimental data from the telescope, tested with two different encoders, are presented. Demonstration of the identification methodology on the GBT as well as details of its implementation are discussed. A root mean square tracking error reduction from 0.68 arc seconds to 0.21 arc sec was achieved by changing encoders and was further reduced to 0.10 arc sec with the calibration algorithm. In particular, the ubiquity of this error source is shown and how, by careful correction, it is possible to go beyond the advertised accuracy of an encoder.
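
    A minimal version of the harmonic modelling step can be written as a linear least-squares fit, as sketched below with synthetic data; the paper's iterative, model-based identification of the coupled telescope and controller dynamics is not reproduced.

```python
# Fit a harmonic model of encoder interpolation error by linear least squares,
# then subtract the predicted error from the measurement (synthetic data).
import numpy as np

def fit_harmonics(theta, error, n_harmonics=3):
    cols = [np.ones_like(theta)]
    for k in range(1, n_harmonics + 1):
        cols += [np.sin(k * theta), np.cos(k * theta)]
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, error, rcond=None)
    return coeffs, A @ coeffs                      # model coefficients, fitted error

theta = np.linspace(0.0, 2.0 * np.pi, 500)         # encoder angle over one period
true_error = 0.3 * np.sin(theta) + 0.1 * np.cos(2 * theta)       # arcsec, synthetic
measured = true_error + np.random.normal(0.0, 0.02, theta.size)
coeffs, predicted = fit_harmonics(theta, measured)
corrected = measured - predicted                   # apply the correction
```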

  12. Alignment algorithms and per-particle CTF correction for single particle cryo-electron tomography.

    PubMed

    Galaz-Montoya, Jesús G; Hecksel, Corey W; Baldwin, Philip R; Wang, Eryu; Weaver, Scott C; Schmid, Michael F; Ludtke, Steven J; Chiu, Wah

    2016-06-01

    Single particle cryo-electron tomography (cryoSPT) extracts features from cryo-electron tomograms, followed by 3D classification, alignment and averaging to generate improved 3D density maps of such features. Robust methods to correct for the contrast transfer function (CTF) of the electron microscope are necessary for cryoSPT to reach its resolution potential. Many factors can make CTF correction for cryoSPT challenging, such as lack of eucentricity of the specimen stage, inherent low dose per image, specimen charging, beam-induced specimen motions, and defocus gradients resulting both from specimen tilting and from unpredictable ice thickness variations. Current CTF correction methods for cryoET make at least one of the following assumptions: that the defocus at the center of the image is the same across the images of a tiltseries, that the particles all lie at the same Z-height in the embedding ice, and/or that the specimen, the cryo-electron microscopy (cryoEM) grid and/or the carbon support are flat. These experimental conditions are not always met. We have developed a CTF correction algorithm for cryoSPT without making any of the aforementioned assumptions. We also introduce speed and accuracy improvements and a higher degree of automation to the subtomogram averaging algorithms available in EMAN2. Using motion-corrected images of isolated virus particles as a benchmark specimen, recorded with a DE20 direct detection camera, we show that our CTF correction and subtomogram alignment routines can yield subtomogram averages close to 4/5 Nyquist frequency of the detector under our experimental conditions. PMID:27016284

  13. An Algorithm for Correcting CTE Loss in Spectrophotometry of Point Sources with the STIS CCD

    NASA Astrophysics Data System (ADS)

    Bohlin, Ralph; Goudfrooij, Paul

    2003-08-01

    The correction for the change in sensitivity with time for the STIS CCD modes is complicated by the gradual loss of charge transfer efficiency (CTE) of the CCD. The amount of this CTE loss depends on time in orbit, the location on the CCD chip with respect to the readout amplifier, the stellar signal strength, and the background level. Primary constraints on our correction algorithm are provided by measurements of the CTE loss rates for simulated spectra (tungsten lamp images taken through slits oriented along the dispersion axis) combined with estimates of CTE losses for actual stellar spectra in the first order CCD modes. The main complication is the quantification of the roll-off of the CTE losses for weak stellar signals on non-zero backgrounds. This roll-off term is determined by relatively short exposures of primary standard stars along with the G750L series of properly exposed AGK+81D266 monitoring data, where the observed changes in response over time are primarily CTE losses and not sensitivity degradations. After accounting for CTE losses and after an iterative determination of the optical system throughput losses, the CTE correction algorithm is verified by comparing G230L MAMA fluxes of faint standard stars with G430L fluxes in the overlap region around 3000Å. For spectra at the standard reference position at the CCD center, CTE losses as big as 20% are corrected to within 1% at high signal levels and with a precision of ~2% at ~100 electrons after application of the algorithm presented here.

  14. A multi-characteristic based algorithm for classifying vegetation in a plateau area: Qinghai Lake watershed, northwestern China

    NASA Astrophysics Data System (ADS)

    Ma, Weiwei; Gong, Cailan; Hu, Yong; Li, Long; Meng, Peng

    2015-10-01

Remote sensing technology has been broadly recognized for its convenience and efficiency in mapping vegetation, particularly in high-altitude and inaccessible areas where in situ observations are lacking. In this study, Landsat Thematic Mapper (TM) images and Chinese environmental mitigation satellite CCD sensor (HJ-1 CCD) images, both at 30 m spatial resolution, were employed for identifying and monitoring vegetation types in an area of western China, the Qinghai Lake Watershed (QHLW). A decision classification tree (DCT) algorithm using multiple characteristics, including seasonal TM/HJ-1 CCD time series data combined with a digital elevation model (DEM) dataset, and a supervised maximum likelihood classification (MLC) algorithm using a single-date TM image were applied to vegetation classification. The accuracy of the two algorithms was assessed using field observation data. Based on the vegetation classification maps produced, it was found that the DCT using multi-season data and geomorphologic parameters was superior to the MLC algorithm using a single-date image, improving the overall accuracy by 11.86% at the second class level and significantly reducing the "salt and pepper" noise. The DCT algorithm applied to TM/HJ-1 CCD time series data and geomorphologic parameters appears to be a valuable and reliable tool for monitoring vegetation at the first class level (5 vegetation classes) and the second class level (8 vegetation subclasses). The DCT algorithm using multiple characteristics may provide a theoretical basis and general approach to automatic extraction of vegetation types from remote sensing imagery over plateau areas.
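
    A rough analogue of a multi-characteristic classification, here a decision tree trained on synthetic stacked seasonal band values and terrain attributes with scikit-learn, is sketched below; the paper's DCT is rule-based rather than trained, so this illustrates only the data layout, not the authors' classifier.

```python
# Decision-tree classification of stacked multi-season indices plus DEM attributes
# (all features, labels, and thresholds below are synthetic, for illustration only).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 500
features = np.column_stack([
    rng.normal(0.35, 0.08, n),     # spring NDVI
    rng.normal(0.55, 0.10, n),     # summer NDVI
    rng.normal(3400, 250, n),      # elevation from DEM (m)
    rng.normal(12, 6, n),          # slope (deg)
])
labels = (features[:, 1] > 0.55).astype(int) + (features[:, 2] > 3500).astype(int)

clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(features, labels)
print(clf.score(features, labels))
```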

  15. Adaptation of a Hyperspectral Atmospheric Correction Algorithm for Multi-spectral Ocean Color Data in Coastal Waters. Chapter 3

    NASA Technical Reports Server (NTRS)

    Gao, Bo-Cai; Montes, Marcos J.; Davis, Curtiss O.

    2003-01-01

    This SIMBIOS contract supports several activities over its three-year time-span. These include certain computational aspects of atmospheric correction, including the modification of our hyperspectral atmospheric correction algorithm Tafkaa for various multi-spectral instruments, such as SeaWiFS, MODIS, and GLI. Additionally, since absorbing aerosols are becoming common in many coastal areas, we are making the model calculations to incorporate various absorbing aerosol models into tables used by our Tafkaa atmospheric correction algorithm. Finally, we have developed the algorithms to use MODIS data to characterize thin cirrus effects on aerosol retrieval.

  16. Strategies for optimizing the phase correction algorithms in Nuclear Magnetic Resonance spectroscopy

    PubMed Central

    2015-01-01

Nuclear Magnetic Resonance (NMR) spectroscopy is a popular medical diagnostic technique. NMR is also the favourite tool of chemists and biochemists for elucidating the molecular structure of small or large molecules, and it is widely used in materials science, food science, and other fields. In medical diagnosis it allows the metabolic composition of the analysed tissue to be determined, which may support the identification of tumour cells. The precession signal, which is a crucial part of the MR phenomenon, contains distortions that must be filtered out before signal analysis. One such distortion is phase error. Five popular algorithms, Automics, Shannon's entropy minimization, Ernst's method, Dispa, and eDispa, are presented and discussed. A novel adaptive tuning algorithm for the Automics method was developed, and numerically optimal solutions for the automatic tuning of the other four algorithms were proposed. To validate the performance of the proposed techniques, two experiments were performed; the first was done with in silico generated data. For all presented methods, the fine tuning strategies significantly increased the correction accuracy. The highest improvement was observed for the Automics algorithm, independently of noise level, with the relative phase error dropping on average from 10.25% to 2.40% for the low noise level and from 12.45% to 2.66% for the high noise level. The second validation experiment, done with phantom data, confirmed the in silico results. The obtained accuracy of the estimation of metabolite concentration was 99.5%. In conclusion, the proposed strategies for optimizing the phase correction algorithms significantly improve the accuracy of Nuclear Magnetic Resonance spectroscopy signal analysis. PMID:26329486
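
    As an illustration of entropy-based phasing of the kind named above, the sketch below applies zero- and first-order phase terms and minimizes a Shannon entropy objective (with a negative-signal penalty, a common addition assumed here) over a synthetic spectrum; it is not any of the five algorithms evaluated in the paper.

```python
# Zero- and first-order phase correction by minimizing the Shannon entropy of the
# derivative of the real spectrum, with a penalty discouraging negative lobes.
import numpy as np
from scipy.optimize import minimize

def phased(spectrum, phi0, phi1):
    ramp = np.arange(spectrum.size) / spectrum.size
    return spectrum * np.exp(1j * (phi0 + phi1 * ramp))

def entropy_objective(params, spectrum, gamma=1e-3):
    real = np.real(phased(spectrum, *params))
    h = np.abs(np.diff(real))
    p = h / (h.sum() + 1e-12)
    ent = -np.sum(p[p > 0] * np.log(p[p > 0]))               # Shannon entropy
    penalty = gamma * np.sum(np.minimum(real, 0.0) ** 2)     # negative-signal penalty
    return ent + penalty

# synthetic spectrum: two Lorentzian lines with a known phase distortion
f = np.linspace(-1.0, 1.0, 2048)
lines = 1.0 / (1.0 + ((f - 0.2) / 0.01) ** 2) + 0.5 / (1.0 + ((f + 0.4) / 0.02) ** 2)
spectrum = lines * np.exp(-1j * (0.7 + 1.9 * np.arange(f.size) / f.size))

res = minimize(entropy_objective, x0=[0.0, 0.0], args=(spectrum,), method="Nelder-Mead")
corrected = np.real(phased(spectrum, *res.x))
print("estimated phi0, phi1:", res.x)
```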

  17. Beam-centric algorithm for pretreatment patient position correction in external beam radiation therapy

    SciTech Connect

    Bose, Supratik; Shukla, Himanshu; Maltz, Jonathan

    2010-05-15

    Purpose: In current image guided pretreatment patient position adjustment methods, image registration is used to determine alignment parameters. Since most positioning hardware lacks the full six degrees of freedom (DOF), accuracy is compromised. The authors show that such compromises are often unnecessary when one models the planned treatment beams as part of the adjustment calculation process. The authors present a flexible algorithm for determining optimal realizable adjustments for both step-and-shoot and arc delivery methods. Methods: The beam shape model is based on the polygonal intersection of each beam segment with the plane in pretreatment image volume that passes through machine isocenter perpendicular to the central axis of the beam. Under a virtual six-DOF correction, ideal positions of these polygon vertices are computed. The proposed method determines the couch, gantry, and collimator adjustments that minimize the total mismatch of all vertices over all segments with respect to their ideal positions. Using this geometric error metric as a function of the number of available DOF, the user may select the most desirable correction regime. Results: For a simulated treatment plan consisting of three equally weighted coplanar fixed beams, the authors achieve a 7% residual geometric error (with respect to the ideal correction, considered 0% error) by applying gantry rotation as well as translation and isocentric rotation of the couch. For a clinical head-and-neck intensity modulated radiotherapy plan with seven beams and five segments per beam, the corresponding error is 6%. Correction involving only couch translation (typical clinical practice) leads to a much larger 18% mismatch. Clinically significant consequences of more accurate adjustment are apparent in the dose volume histograms of target and critical structures. Conclusions: The algorithm achieves improvements in delivery accuracy using standard delivery hardware without significantly increasing

  18. A robust background correction algorithm for forensic bloodstain imaging using mean-based contrast adjustment.

    PubMed

    Lee, Wee Chuen; Khoo, Bee Ee; Abdullah, Ahmad Fahmi Lim

    2016-05-01

The background correction algorithm (BCA) is useful for enhancing the visibility of images captured at crime scenes, especially those of untreated bloodstains. Successful implementation of the BCA requires all the images to have similar brightness, which often proves to be a problem when a camera's automatic exposure setting is used. This paper presents an improved background correction algorithm that applies mean-based contrast adjustment as a pre-correction step to adjust the mean brightness of the images to be similar before implementing the BCA. The proposed modification, namely mean-based adaptive BCA (mABCA), was tested on various image samples captured under different illuminations such as 385 nm, 415 nm and 458 nm. We also evaluated mABCA with two wavelengths (415 nm and 458 nm) and three wavelengths (415 nm, 380 nm and 458 nm) in enhancing untreated bloodstains on different surfaces. The proposed mABCA is found to be more robust in processing images captured at different brightness levels and thus overcomes the main issue faced by the original BCA. PMID:27162018
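
    A toy version of the mean-matching pre-correction step is sketched below; the ratio-based background correction and the synthetic images are assumptions made only to show where the mean adjustment fits.

```python
# Mean-brightness matching before a simple two-image background correction
# (synthetic images; the ratio form is an assumed stand-in for the BCA step).
import numpy as np

def match_means(images):
    target = np.mean([img.mean() for img in images])
    return [img * (target / img.mean()) for img in images]

rng = np.random.default_rng(1)
img_415 = rng.uniform(40, 200, (256, 256))      # image under 415 nm illumination
img_458 = rng.uniform(60, 230, (256, 256))      # image under 458 nm illumination

img_415, img_458 = match_means([img_415, img_458])
enhanced = img_415 / (img_458 + 1e-6)            # background-corrected contrast image
```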

  19. Pile-up correction by Genetic Algorithm and Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Kafaee, M.; Saramad, S.

    2009-08-01

Pile-up distortion is a common problem in high-counting-rate radiation spectroscopy in many fields, such as industrial, nuclear and medical applications. It is possible to reduce pulse pile-up using hardware-based pile-up rejection. However, this phenomenon may not be eliminated completely by this approach, and the spectrum distortion caused by pile-up rejection can increase as well. In addition, inaccurate correction or rejection of pile-up artifacts in applications such as energy dispersive X-ray (EDX) spectrometers can lead to loss of counts, poor quantitative results and even false element identification. Therefore, it is highly desirable to use software-based models to predict and correct any recognized pile-up signals in data acquisition systems. The present paper describes two new intelligent approaches for pile-up correction: the Genetic Algorithm (GA) and Artificial Neural Networks (ANNs). The validation and testing results of these new methods have been compared and show excellent agreement with data measured with a 60Co source and a NaI detector. The Monte Carlo simulation of these new intelligent algorithms also shows their advantages over hardware-based pulse pile-up rejection methods.

  20. An infrared image non-uniformity correction algorithm based on pixels' equivalent integral capacitance

    NASA Astrophysics Data System (ADS)

    Zhang, Shuanglei; Wang, Tao; Xu, Chun; Chen, Fansheng

    2015-04-01

    In an infrared focal plane array (IRFPA) imaging system, the non-uniformity (NU) of the IRFPA directly affects the quality of infrared images. In infrared weak small-target detection and tracking systems in particular, the spatial noise caused by detector non-uniformity is often more severe than the temporal noise of the imaging system. In order to correct the non-uniformity of the IRFPA detector effectively, we first analyze the main factors that cause detector non-uniformity during imaging. Second, based on the photoelectric conversion mechanism of the IRFPA detector and an analysis of the target energy accumulation and transfer process, we propose a method for calculating the pixels' integral capacitance. A preliminary correction of the original IR image is then made from the calculated results. Finally, we validate this non-uniformity correction algorithm by processing IR images collected from an actual IRFPA imaging system. Results show that the algorithm can effectively suppress the non-uniformity caused by differences in the pixels' capacitance.

  1. Topology correction of segmented medical images using a fast marching algorithm.

    PubMed

    Bazin, Pierre-Louis; Pham, Dzung L

    2007-11-01

    We present here a new method for correcting the topology of objects segmented from medical images. Whereas previous techniques alter a surface obtained from a binary segmentation of the object, our technique can be applied directly to the image intensities of a probabilistic or fuzzy segmentation, thereby propagating the topology for all isosurfaces of the object. From an analysis of topological changes and critical points in implicit surfaces, we derive a topology propagation algorithm that enforces any desired topology using a fast marching technique. The method has been applied successfully to the correction of the cortical gray matter/white matter interface in segmented brain images and is publicly released as a software plug-in for the MIPAV package. PMID:17942182

  2. Algorithm for Atmospheric and Glint Corrections of Satellite Measurements of Ocean Pigment

    NASA Technical Reports Server (NTRS)

    Fraser, Robert S.; Mattoo, Shana; Yeh, Eueng-Nan; McClain, C. R.

    1997-01-01

    An algorithm is developed to correct satellite measurements of ocean color for atmospheric and surface reflection effects. The algorithm depends on taking the difference between measured and tabulated radiances for deriving water-leaving radiances. The tabulated radiances are related to the measured radiance where the water-leaving radiance is negligible (670 nm). The tabulated radiances are calculated for rough surface reflection, polarization of the scattered light, and multiple scattering. The accuracy of the tables is discussed. The method is validated by simulating the effect of wind speeds different from that for which the lookup table is calculated, and aerosol models different from the maritime model for which the table is computed. The derived water-leaving radiances are accurate enough to compute the pigment concentration with an error of less than ±15% for wind speeds of 6 and 10 m/s and an urban atmosphere with an aerosol optical thickness of 0.20 at λ = 443 nm decreasing to 0.10 at λ = 670 nm. The pigment accuracy is lower for wind speeds less than 6 m/s and is about 30% for a model with aeolian dust. On the other hand, in a preliminary comparison with coastal zone color scanner (CZCS) measurements, this algorithm and the CZCS operational algorithm produced values of pigment concentration in one image that agreed closely.

  3. An improved atmospheric correction algorithm for applying MERIS data to very turbid inland waters

    NASA Astrophysics Data System (ADS)

    Jaelani, Lalu Muhamad; Matsushita, Bunkei; Yang, Wei; Fukushima, Takehiko

    2015-07-01

    Atmospheric correction (AC) is a necessary process when quantitatively monitoring water quality parameters from satellite data. However, it is still a major challenge to carry out AC for turbid coastal and inland waters. In this study, we propose an improved AC algorithm named N-GWI (new standard Gordon and Wang's algorithms with an iterative process and a bio-optical model) for applying MERIS data to very turbid inland waters (i.e., waters with a water-leaving reflectance at 864.8 nm between 0.001 and 0.01). The N-GWI algorithm incorporates three improvements to avoid certain invalid assumptions that limit the applicability of the existing algorithms in very turbid inland waters. First, the N-GWI uses a fixed aerosol type (coastal aerosol) but permits aerosol concentration to vary at each pixel; this improvement omits a complicated requirement for aerosol model selection based only on satellite data. Second, it shifts the reference band from 670 nm to 754 nm to validate the assumption that the total absorption coefficient at the reference band can be replaced by that of pure water, and thus can avoid the uncorrected estimation of the total absorption coefficient at the reference band in very turbid waters. Third, the N-GWI generates a semi-analytical relationship instead of an empirical one for estimation of the spectral slope of particle backscattering. Our analysis showed that the N-GWI improved the accuracy of atmospheric correction in two very turbid Asian lakes (Lake Kasumigaura, Japan and Lake Dianchi, China), with a normalized mean absolute error (NMAE) of less than 22% for wavelengths longer than 620 nm. However, the N-GWI exhibited poor performance in moderately turbid waters (the NMAE values were larger than 83.6% in the four American coastal waters). The applicability of the N-GWI, which includes both advantages and limitations, was discussed.

  4. Retrieval of atmospheric properties from hyper and multispectral imagery with the FLAASH atmospheric correction algorithm

    NASA Astrophysics Data System (ADS)

    Perkins, Timothy; Adler-Golden, Steven; Matthew, Michael; Berk, Alexander; Anderson, Gail; Gardner, James; Felde, Gerald

    2005-10-01

    Atmospheric Correction Algorithms (ACAs) are used in applications of remotely sensed Hyperspectral and Multispectral Imagery (HSI/MSI) to correct for atmospheric effects on measurements acquired by air and space-borne systems. The Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) algorithm is a forward-model based ACA created for HSI and MSI instruments which operate in the visible through shortwave infrared (Vis-SWIR) spectral regime. Designed as a general-purpose, physics-based code for inverting at-sensor radiance measurements into surface reflectance, FLAASH provides a collection of spectral analysis and atmospheric retrieval methods including: a per-pixel vertical water vapor column estimate, determination of aerosol optical depth, estimation of scattering for compensation of adjacency effects, detection/characterization of clouds, and smoothing of spectral structure resulting from an imperfect atmospheric correction. To further improve the accuracy of the atmospheric correction process, FLAASH will also detect and compensate for sensor-introduced artifacts such as optical smile and wavelength mis-calibration. FLAASH relies on the MODTRAN™ radiative transfer (RT) code as the physical basis behind its mathematical formulation, and has been developed in parallel with upgrades to MODTRAN in order to take advantage of the latest improvements in speed and accuracy. For example, the rapid, high fidelity multiple scattering (MS) option available in MODTRAN4 can greatly improve the accuracy of atmospheric retrievals over the 2-stream approximation. In this paper, advanced features available in FLAASH are described, including the principles and methods used to derive atmospheric parameters from HSI and MSI data. Results are presented from processing of Hyperion, AVIRIS, and LANDSAT data.

  5. Research and implementation of the algorithm for unwrapped and distortion correction basing on CORDIC for panoramic image

    NASA Astrophysics Data System (ADS)

    Zhang, Zhenhai; Li, Kejie; Wu, Xiaobing; Zhang, Shujiang

    2008-03-01

    An unwrapping and distortion-correction algorithm based on the Coordinate Rotation Digital Computer (CORDIC) and bilinear interpolation algorithms is presented in this paper for processing dynamic panoramic annular images. An original annular panoramic image captured by a panoramic annular lens (PAL) can be unwrapped and corrected to a conventional rectangular image without distortion, which corresponds much more closely to normal human vision. The algorithm for panoramic image processing is modeled in VHDL and implemented on an FPGA. The experimental results show that the proposed unwrapping and distortion-correction algorithm has low computational complexity, that the architecture for dynamic panoramic image processing has low hardware cost and power consumption, and that the proposed algorithm is valid.
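
    Conceptually, the unwrapping step samples the annular image along concentric circles and writes each circle out as one row of a rectangular image, using bilinear interpolation at the non-integer source coordinates. The plain-Python sketch below illustrates that mapping with NumPy trigonometric functions where the paper's FPGA design uses CORDIC; the function name and its parameters (center, inner/outer radius, output width) are assumptions for illustration.

    ```python
    import numpy as np

    def unwrap_annular(img, cx, cy, r_in, r_out, width=1024):
        """Unwrap an annular panoramic image into a rectangular strip."""
        height = int(round(r_out - r_in))
        out = np.zeros((height, width), dtype=np.float64)
        for v in range(height):                      # one output row per radius
            r = r_in + v
            for u in range(width):                   # one output column per angle
                theta = 2.0 * np.pi * u / width
                x = cx + r * np.cos(theta)           # source coordinates in the annulus
                y = cy + r * np.sin(theta)
                x0, y0 = int(np.floor(x)), int(np.floor(y))
                if 0 <= x0 < img.shape[1] - 1 and 0 <= y0 < img.shape[0] - 1:
                    dx, dy = x - x0, y - y0          # bilinear interpolation weights
                    out[v, u] = ((1 - dx) * (1 - dy) * img[y0, x0]
                                 + dx * (1 - dy) * img[y0, x0 + 1]
                                 + (1 - dx) * dy * img[y0 + 1, x0]
                                 + dx * dy * img[y0 + 1, x0 + 1])
        return out
    ```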

  6. Parallel algorithms of relative radiometric correction for images of TH-1 satellite

    NASA Astrophysics Data System (ADS)

    Wang, Xiang; Zhang, Tingtao; Cheng, Jiasheng; Yang, Tao

    2014-05-01

    The TH-1 satellite, China's first generation of transmission-type stereo mapping satellites, is able to acquire three-line-array stereo images with a resolution of 5 meters, multispectral images at 10 meters, and panchromatic high-resolution images at 2 meters. The procedure between level 0 and level 1A of the high-resolution images is the so-called relative radiometric correction (RRC). The processing algorithm for high-resolution images, with large volumes of data, is complicated and time consuming. To increase processing speed, parallel processing techniques based on CPUs or GPUs are commonly applied in industry. This article first introduces the whole process and each step of the RRC algorithm currently in use for level 0 high-resolution images; second, the theory and characteristics of the MPI (Message Passing Interface) and OpenMP (Open Multi-Processing) parallel programming techniques are briefly described, together with their advantages for image processing; third, for each step of the algorithm in use, and based on an MPI+OpenMP hybrid paradigm, the parallelizability and parallelization strategies for three processing steps - Radiometric Correction, Splicing Pieces of TDICCD (Time Delay Integration Charge-Coupled Device), and Gray Level Adjustment among pieces of TDICCD - are discussed in depth, and the theoretical speedups of each step and of the whole procedure are derived from the processing styles and the independence of the calculations; for the step Splicing Pieces of TDICCD, two different parallelization strategies are proposed, to be chosen with consideration of the hardware capabilities; finally, a series of experiments is carried out to verify the parallel algorithms using 2-meter panchromatic high-resolution images from the TH-1 satellite, and the experimental results are analyzed. Strictly on the basis of the former parallel algorithms, the programs in the experiments

  7. Design of an IRFPA nonuniformity correction algorithm to be implemented as a real-time hardware prototype

    NASA Astrophysics Data System (ADS)

    Fenner, Jonathan W.; Simon, Solomon H.; Eden, Dayton D.

    1994-07-01

    As new IR focal plane array (IRFPA) technologies become available, improved methods for coping with array errors must be developed. Traditional methods of nonuniformity correction using a simple calibration mode are not adequate to compensate for the inherent nonuniformity and 1/f noise in some arrays. In an effort to compensate for nonuniformity in a HgCdTe IRFPA, and to reduce the effects of 1/f noise over a time interval, a new dynamic neural network (NN) based algorithm was implemented. The algorithm compensates for nonuniformities and corrects for 1/f noise. A gradient descent algorithm is used with nearest neighbor feedback for training, creating a dynamic model of the IRFPA's gains and offsets, then updating and correcting them continuously. Improvements to the NN include implementation on an IBM 486 computer system and a close examination of simulated scenes to test the algorithm's limits. Preliminary designs for a real-time hardware prototype have been developed as well. Simulations were implemented to test the algorithm's ability to correct under a variety of conditions. A wide range of background noise, 1/f noise, object intensities, and background intensities was used. Results indicate that this algorithm can correct efficiently down to the background noise. Our conclusion is that NN-based adaptive algorithms will supplement the effectiveness of IRFPAs.

  8. Direct cone-beam cardiac reconstruction algorithm with cardiac banding artifact correction

    SciTech Connect

    Taguchi, Katsuyuki; Chiang, Beshan S.; Hein, Ilmar A.

    2006-02-15

    Multislice helical computed tomography (CT) is a promising noninvasive technique for coronary artery imaging. Various factors can cause inconsistencies in cardiac CT data, which can result in degraded image quality. These inconsistencies may be the result of the patient physiology (e.g., heart rate variations), the nature of the data (e.g., cone-angle), or the reconstruction algorithm itself. An algorithm which provides the best temporal resolution for each slice, for example, often provides suboptimal image quality for the entire volume since the cardiac temporal resolution (TRc) changes from slice to slice. Such variations in TRc can generate strong banding artifacts in multi-planar reconstruction images or three-dimensional images. Discontinuous heart walls and coronary arteries may compromise the accuracy of the diagnosis. A β-blocker is often used to reduce and stabilize patients' heart rate but cannot eliminate the variation. In order to obtain robust and optimal image quality, a software solution that increases the temporal resolution and decreases the effect of heart rate is highly desirable. This paper proposes an ECG-correlated direct cone-beam reconstruction algorithm (TCOT-EGR) with cardiac banding artifact correction (CBC) and disconnected projections redundancy compensation technique (DIRECT). First the theory and analytical model of the cardiac temporal resolution is outlined. Next, the performance of the proposed algorithms is evaluated by using computer simulations as well as patient data. It will be shown that the proposed algorithms enhance the robustness of the image quality against inconsistencies by guaranteeing smooth transition of heart cycles used in reconstruction.

  9. Validation and robustness of an atmospheric correction algorithm for hyperspectral images

    NASA Astrophysics Data System (ADS)

    Boucher, Yannick; Poutier, Laurent; Achard, Veronique; Lenot, Xavier; Miesch, Christophe

    2002-08-01

    The Optics Department of ONERA has developed and implemented an inverse algorithm, COCHISE, to correct hyperspectral images for atmospheric effects in the visible-NIR-SWIR domain (0.4-2.5 micrometers). This algorithm automatically determines the integrated water-vapor content for each pixel from the radiance at sensor level by using a LIRR-type (Linear Regression Ratio) technique. It then retrieves the spectral reflectance at ground level using atmospheric parameters computed with Modtran4, including the water-vapor spatial dependence obtained in the first step. The adjacency effects are taken into account using spectral kernels obtained by two Monte-Carlo codes. Results obtained with the COCHISE code on real hyperspectral data are first compared to ground-based reflectance measurements. AVIRIS images of Railroad Valley Playa, CA, and HyMap images of Hartheim, France, are used. The inverted reflectance agrees perfectly with the measurements at ground level for the AVIRIS data set, which validates the COCHISE algorithm; for the HyMap data set, the results are still good but cannot be considered as validating the code. The robustness of the COCHISE code is then evaluated. For this, spectral radiance images are modeled at the sensor level with the direct algorithm COMANCHE, which is the reciprocal code of COCHISE. The COCHISE algorithm is then used to compute the reflectance at ground level from the simulated at-sensor radiance. A sensitivity analysis has been performed, as a function of errors on several atmospheric parameters and instrument defects, by comparing the retrieved reflectance with the original one. The COCHISE code shows quite good robustness to errors on input parameters, except for aerosol type.

  10. An improved model of charge transfer inefficiency and correction algorithm for the Hubble Space Telescope

    NASA Astrophysics Data System (ADS)

    Massey, Richard; Schrabback, Tim; Cordes, Oliver; Marggraf, Ole; Israel, Holger; Miller, Lance; Hall, David; Cropper, Mark; Prod'homme, Thibaut; Niemi, Sami-Matias

    2014-03-01

    Charge-coupled device (CCD) detectors, widely used to obtain digital imaging, can be damaged by high energy radiation. Degraded images appear blurred, because of an effect known as Charge Transfer Inefficiency (CTI), which trails bright objects as the image is read out. It is often possible to correct most of the trailing during post-processing, by moving flux back to where it belongs. We compare several popular algorithms for this: quantifying the effect of their physical assumptions and tradeoffs between speed and accuracy. We combine their best elements to construct a more accurate model of damaged CCDs in the Hubble Space Telescope's Advanced Camera for Surveys/Wide Field Channel, and update it using data up to early 2013. Our algorithm now corrects 98 per cent of CTI trailing in science exposures, a substantial improvement over previous work. Further progress will be fundamentally limited by the presence of read noise. Read noise is added after charge transfer so does not get trailed - but it is incorrectly untrailed during post-processing.

  11. A background correction algorithm for Van Allen Probes MagEIS electron flux measurements

    SciTech Connect

    Claudepierre, S. G.; O'Brien, T. P.; Blake, J. B.; Fennell, J. F.; Roeder, J. L.; Clemmons, J. H.; Looper, M. D.; Mazur, J. E.; Mulligan, T. M.; Spence, H. E.; Reeves, G. D.; Friedel, R. H. W.; Henderson, M. G.; Larsen, B. A.

    2015-07-14

    We describe an automated computer algorithm designed to remove background contamination from the Van Allen Probes Magnetic Electron Ion Spectrometer (MagEIS) electron flux measurements. We provide a detailed description of the algorithm with illustrative examples from on-orbit data. We find two primary sources of background contamination in the MagEIS electron data: inner zone protons and bremsstrahlung X-rays generated by energetic electrons interacting with the spacecraft material. Bremsstrahlung X-rays primarily produce contamination in the lower energy MagEIS electron channels (~30–500 keV) and in regions of geospace where multi-MeV electrons are present. Inner zone protons produce contamination in all MagEIS energy channels at roughly L < 2.5. The background-corrected MagEIS electron data produce a more accurate measurement of the electron radiation belts, as most earlier measurements suffer from unquantifiable and uncorrectable contamination in this harsh region of the near-Earth space environment. These background-corrected data will also be useful for spacecraft engineering purposes, providing ground truth for the near-Earth electron environment and informing the next generation of spacecraft design models (e.g., AE9).

  12. A background correction algorithm for Van Allen Probes MagEIS electron flux measurements

    DOE PAGESBeta

    Claudepierre, S. G.; O'Brien, T. P.; Blake, J. B.; Fennell, J. F.; Roeder, J. L.; Clemmons, J. H.; Looper, M. D.; Mazur, J. E.; Mulligan, T. M.; Spence, H. E.; et al

    2015-07-14

    We describe an automated computer algorithm designed to remove background contamination from the Van Allen Probes Magnetic Electron Ion Spectrometer (MagEIS) electron flux measurements. We provide a detailed description of the algorithm with illustrative examples from on-orbit data. We find two primary sources of background contamination in the MagEIS electron data: inner zone protons and bremsstrahlung X-rays generated by energetic electrons interacting with the spacecraft material. Bremsstrahlung X-rays primarily produce contamination in the lower energy MagEIS electron channels (~30–500 keV) and in regions of geospace where multi-MeV electrons are present. Inner zone protons produce contamination in all MagEIS energy channels at roughly L < 2.5. The background-corrected MagEIS electron data produce a more accurate measurement of the electron radiation belts, as most earlier measurements suffer from unquantifiable and uncorrectable contamination in this harsh region of the near-Earth space environment. These background-corrected data will also be useful for spacecraft engineering purposes, providing ground truth for the near-Earth electron environment and informing the next generation of spacecraft design models (e.g., AE9).

  13. Algorithms for calculating mass-velocity and Darwin relativistic corrections with n-electron explicitly correlated Gaussians with shifted centers

    NASA Astrophysics Data System (ADS)

    Stanke, Monika; Palikot, Ewa; Adamowicz, Ludwik

    2016-05-01

    Algorithms for calculating the leading mass-velocity (MV) and Darwin (D) relativistic corrections are derived for electronic wave functions expanded in terms of n-electron explicitly correlated Gaussian functions with shifted centers and without pre-exponential angular factors. The algorithms are implemented and tested in calculations of MV and D corrections for several points on the ground-state potential energy curves of the H2 and LiH molecules. The algorithms are general and can be applied in calculations of systems with an arbitrary number of electrons.

  14. Algorithms for calculating mass-velocity and Darwin relativistic corrections with n-electron explicitly correlated Gaussians with shifted centers.

    PubMed

    Stanke, Monika; Palikot, Ewa; Adamowicz, Ludwik

    2016-05-01

    Algorithms for calculating the leading mass-velocity (MV) and Darwin (D) relativistic corrections are derived for electronic wave functions expanded in terms of n-electron explicitly correlated Gaussian functions with shifted centers and without pre-exponential angular factors. The algorithms are implemented and tested in calculations of MV and D corrections for several points on the ground-state potential energy curves of the H2 and LiH molecules. The algorithms are general and can be applied in calculations of systems with an arbitrary number of electrons. PMID:27155619

  15. a New Control Points Based Geometric Correction Algorithm for Airborne Push Broom Scanner Images Without On-Board Data

    NASA Astrophysics Data System (ADS)

    Strakhov, P.; Badasen, E.; Shurygin, B.; Kondranin, T.

    2016-06-01

    Push broom scanners, such as video spectrometers (also called hyperspectral sensors), are widely used at present. Use of the scanned images requires accurate geometric correction, which becomes complicated when the imaging platform is airborne. This work contains a detailed description of a new algorithm developed for processing such images. The algorithm requires only user-provided control points and is able to correct distortions caused by yaw, flight speed and height changes. It was tested on two series of airborne images and yielded RMS error values on the order of 7 meters (3-6 source image pixels), as compared to 13 meters for polynomial-based correction.

  16. Smooth particle hydrodynamics: importance of correction terms in adaptive resolution algorithms

    NASA Astrophysics Data System (ADS)

    Alimi, J.-M.; Serna, A.; Pastor, C.; Bernabeu, G.

    2003-11-01

    We describe TREEASPH, a new code to evolve self-gravitating fluids, both with and without a collisionless component. In TREEASPH, gravitational forces are computed from a hierarchical tree algorithm (TREEcode), while hydrodynamic properties are computed using an SPH method that includes the ∇h correction terms appearing when the spatial resolution h(t,r) is not constant. Another important feature, which considerably increases the code efficiency on sequential and vector computers, is that time-stepping is performed with a PEC (Predict-Evaluate-Correct) scheme modified to allow for individual timesteps. Some authors have previously noted that the ∇h correction terms are needed to avoid introducing a non-physical entropy into simulations. Using TREEASPH we show here that, in cosmological simulations, this non-physical entropy has a negative sign. As a consequence, when the ∇h terms are neglected, the density peaks associated with shock fronts are overestimated. This in turn results in an overestimated efficiency of star-formation processes.

  17. Autoregressive model based algorithm for correcting motion and serially correlated errors in fNIRS

    PubMed Central

    Barker, Jeffrey W.; Aarabi, Ardalan; Huppert, Theodore J.

    2013-01-01

    Systemic physiology and motion-induced artifacts represent two major sources of confounding noise in functional near infrared spectroscopy (fNIRS) imaging that can reduce the performance of analyses and inflate false positive rates (i.e., type I errors) of detecting evoked hemodynamic responses. In this work, we demonstrated a general algorithm for solving the general linear model (GLM) for both deconvolution (finite impulse response) and canonical regression models, based on designing optimal pre-whitening filters using autoregressive models and employing iteratively reweighted least squares. We evaluated the performance of the new method by performing receiver operating characteristic (ROC) analyses using synthetic data, in which serial correlations, motion artifacts, and evoked responses were controlled via simulations, as well as using experimental data from children (3–5 years old) as a source of baseline physiological noise and motion artifacts. The new method outperformed ordinary least squares (OLS) with no motion correction, wavelet based motion correction, or spline interpolation based motion correction in the presence of physiological and motion related noise. In the experimental data, false positive rates were as high as 37% when the estimated p-value was 0.05 for the OLS methods. The false positive rate was reduced to 5–9% with the proposed method. Overall, the method improves control of type I errors and increases performance when motion artifacts are present. PMID:24009999
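
    The core of the approach described above is to whiten the GLM with an autoregressive model fitted to the residuals and then refit with robust (iteratively reweighted) least squares. The NumPy sketch below is a simplified reading of that idea under stated assumptions (a fixed AR order, Tukey bisquare weights, a fixed number of iterations); it is not the authors' released toolbox code.

    ```python
    import numpy as np
    from numpy.linalg import lstsq

    def ar_irls_glm(y, X, p=4, n_iter=5):
        """Fit y = X @ beta with AR(p) pre-whitening and iteratively reweighted LS."""
        beta, *_ = lstsq(X, y, rcond=None)                     # initial OLS estimate
        for _ in range(n_iter):
            resid = y - X @ beta
            # AR(p) coefficients from the residuals (least-squares regression on lags)
            lags = np.column_stack([resid[p - k - 1: len(resid) - k - 1] for k in range(p)])
            a, *_ = lstsq(lags, resid[p:], rcond=None)

            def whiten(v):                                     # subtract the AR prediction
                w = v[p:].copy()
                for k in range(p):
                    w -= a[k] * v[p - k - 1: len(v) - k - 1]
                return w

            yw = whiten(y)
            Xw = np.column_stack([whiten(X[:, j]) for j in range(X.shape[1])])
            r = yw - Xw @ beta
            s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12   # robust scale
            u = np.clip(r / (4.685 * s), -1.0, 1.0)
            w = (1.0 - u ** 2) ** 2                            # Tukey bisquare weights
            sw = np.sqrt(w)
            beta, *_ = lstsq(Xw * sw[:, None], yw * sw, rcond=None)
        return beta
    ```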

  18. Algorithms Based on CWT and Classifiers to Control Cardiac Alterations and Stress Using an ECG and a SCR

    PubMed Central

    Villarejo, María Viqueira; Zapirain, Begoña García; Zorrilla, Amaia Méndez

    2013-01-01

    This paper presents the results of using a commercial pulsimeter as an electrocardiogram (ECG) for wireless detection of cardiac alterations and stress levels for home control. For these purposes, signal processing techniques (Continuous Wavelet Transform (CWT) and J48) have been used, respectively. The designed algorithm analyses the ECG signal and is able to detect the heart rate (99.42%), arrhythmia (93.48%) and extrasystoles (99.29%). The detection of stress level is complemented with the Skin Conductance Response (SCR), with a success rate of 94.02%. Heart rate variability does not add value to the stress detection in this case. With this pulsimeter, it is possible to prevent and detect anomalies in a non-intrusive way as part of a telemedicine system. It can also be used during physical activity because the CWT minimizes motion artifacts. PMID:23666135

  19. Correction.

    PubMed

    2015-11-01

    In the article by Heuslein et al, which published online ahead of print on September 3, 2015 (DOI: 10.1161/ATVBAHA.115.305775), a correction was needed. Brett R. Blackman was added as the penultimate author of the article. The article has been corrected for publication in the November 2015 issue. PMID:26490278

  20. A procedure for testing the quality of LANDSAT atmospheric correction algorithms

    NASA Technical Reports Server (NTRS)

    Dias, L. A. V. (Principal Investigator); Vijaykumar, N. L.; Neto, G. C.

    1982-01-01

    There are two basic methods for testing the quality of an algorithm to minimize atmospheric effects on LANDSAT imagery: (1) test the results a posteriori, using ground truth or control points; (2) use a method based on image data plus estimation of additional ground and/or atmospheric parameters. A procedure based on the second method is described. In order to select the parameters, the image contrast is initially examined for a series of parameter combinations; the contrast improves for better corrections. In addition, the correlation coefficient between two subimages of the same scene, taken at different times, is used for parameter selection. The regions to be correlated should not have changed considerably over time. A few examples using this proposed procedure are presented.

  1. An automatic stain removal algorithm of series aerial photograph based on flat-field correction

    NASA Astrophysics Data System (ADS)

    Wang, Gang; Yan, Dongmei; Yang, Yang

    2010-10-01

    Dust on the camera lens leaves dark stains on the image. Calibrating and compensating the intensity of the stained pixels plays an important role in airborne image processing. This article introduces an automatic compensation algorithm for the dark stains based on the theory of flat-field correction. We produced a whiteboard reference image by aggregating hundreds of images recorded in one flight and used their average pixel values to simulate uniform white light irradiation. We then constructed a look-up table function based on this whiteboard image to calibrate the stained images. The experimental results show that the proposed procedure can remove lens stains effectively and automatically.
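
    The compensation itself is ordinary flat-field correction: each pixel of a stained image is rescaled by the ratio of the whiteboard's mean value to the whiteboard value at that pixel. The NumPy sketch below shows that idea; the authors build a look-up table for the rescaling, which is only approximated here by a direct per-pixel gain (an assumption for illustration).

    ```python
    import numpy as np

    def build_whiteboard(frames):
        """Average many frames from one flight to approximate a uniform white reference."""
        return np.mean(np.stack([f.astype(np.float64) for f in frames]), axis=0)

    def flat_field_correct(img, whiteboard):
        """Rescale each pixel by mean(whiteboard) / whiteboard to lift the dark stains."""
        gain = whiteboard.mean() / np.maximum(whiteboard, 1e-6)
        corrected = img.astype(np.float64) * gain
        return np.clip(corrected, 0, 255).astype(np.uint8)
    ```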

  2. A Local Corrections Algorithm for Solving Poisson's Equation inThree Dimensions

    SciTech Connect

    McCorquodale, Peter; Colella, Phillip; Balls, Gregory T.; Baden, Scott B.

    2006-10-30

    We present a second-order accurate algorithm for solving the free-space Poisson's equation on a locally-refined nested grid hierarchy in three dimensions. Our approach is based on linear superposition of local convolutions of localized charge distributions, with the nonlocal coupling represented on coarser grids. The representation of the nonlocal coupling on the local solutions is based on Anderson's Method of Local Corrections and does not require iteration between different resolutions. A distributed-memory parallel implementation of this method is observed to have a computational cost per grid point less than three times that of a standard FFT-based method on a uniform grid of the same resolution, and scales well up to 1024 processors.

  3. Accuracy of inhomogeneity correction algorithm in intensity-modulated radiotherapy of head-and-neck tumors

    SciTech Connect

    Yoon, Myonggeun; Lee, Doo-Hyun; Shin, Dongho; Lee, Se Byeong; Park, Sung Yong . E-mail: cool_park@ncc.re.kr; Cho, Kwan Ho

    2007-04-01

    We examined the degree of calculated-to-measured dose difference for a nasopharyngeal target volume in intensity-modulated radiotherapy (IMRT), based on the observed/expected ratio, using patient anatomy represented by a humanoid head-and-neck phantom. The plans were designed with a clinical treatment planning system that uses a measurement-based pencil beam dose-calculation algorithm. Two kinds of IMRT plans, which give a direct indication of the error introduced in routine treatment planning, were categorized and evaluated. The experimental results show that when the beams pass through the oral cavity of the anthropomorphic head-and-neck phantom, the average dose difference becomes significant, with about a 10% difference from the prescribed dose at the isocenter. To investigate both the physical reasons for the dose discrepancy and the inhomogeneity effect, we performed 10 cases of IMRT quality assurance (QA) with plastic and humanoid phantoms. Our results suggest that transient electronic disequilibrium with an increased lateral electron range may cause the inaccuracy of the dose calculation algorithm, and that the effectiveness of the inhomogeneity corrections used in IMRT plans should be evaluated to ensure meaningful quality assurance and delivery.

  4. A novel image-based motion correction algorithm on ultrasonic image

    NASA Astrophysics Data System (ADS)

    Wang, Xuan; Li, Yaqin; Li, Shigao

    2015-12-01

    Lung respiratory movement can cause errors in image-guided navigation surgery and is the main source of error in the navigation system. To solve this problem, an image-based motion correction strategy is needed to quickly correct for the respiratory motion in the image sequence; commercial ultrasound machines can display contrast and tissue images simultaneously. In this paper, a convenient, simple and easy-to-use breathing model whose precision is close to the sub-voxel level is proposed. First, exploiting the low gray-level variation of the tissue images in clinical cases, the tissue images are registered using template matching with a sum of absolute differences metric, and motion parameters are calculated according to the actual lung movement information at each point. Finally, similar images are selected by a double-selection method that requires global and local threshold settings. The generic breathing model is constructed from all the sample data. Experimental results show that the algorithm can substantially reduce the errors caused by breathing movement.
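
    The registration step described above can be illustrated with an exhaustive template match under a sum-of-absolute-differences (SAD) metric: the template is slid over the frame, and the position with the smallest SAD is taken as the match. The NumPy sketch below is a straightforward (unoptimized) version of that idea; function and variable names are illustrative assumptions.

    ```python
    import numpy as np

    def sad_match(frame, template):
        """Return the (row, col) offset in `frame` that minimizes the SAD to `template`."""
        fh, fw = frame.shape
        th, tw = template.shape
        best_score, best_pos = np.inf, (0, 0)
        tmpl = template.astype(np.int64)
        for r in range(fh - th + 1):
            for c in range(fw - tw + 1):
                patch = frame[r:r + th, c:c + tw].astype(np.int64)
                score = np.abs(patch - tmpl).sum()     # sum of absolute differences
                if score < best_score:
                    best_score, best_pos = score, (r, c)
        return best_pos
    ```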

  5. An algorithmic strategy for selecting a surgical approach in cervical deformity correction.

    PubMed

    Hann, Shannon; Chalouhi, Nohra; Madineni, Ravichandra; Vaccaro, Alexander R; Albert, Todd J; Harrop, James; Heller, Joshua E

    2014-05-01

    Adult degenerative cervical kyphosis is a debilitating disease that often requires complex surgical management. Young spine surgeons, residents, and fellows are often confused as to which surgical approach to choose due to lack of experience, absence of a systematic method of surgical management, and today's plethora of information regarding surgical techniques. Although surgeons may be able to perform anterior, posterior, or combined (360°) approaches to the cervical spine, many struggle to rationally choose an appropriate approach for deformity correction. The authors introduce an algorithm based on morphology and pathology of adult cervical kyphosis to help the surgeon select the appropriate approach when performing cervical deformity surgery. Cervical deformities are categorized into 5 different prevalent morphological types encountered in clinical settings. A surgical approach tailored to each category/type of deformity is then discussed, with a concrete case illustration provided for each. Preoperative assessment of kyphosis, determination of the goal for surgery, and the complications associated with cervical deformity correction are also summarized. This article's goal is to assist with understanding the big picture for surgical management in cervical spinal deformity. PMID:24785487

  6. Intensity Inhomogeneity Correction of Structural MR Images: A Data-Driven Approach to Define Input Algorithm Parameters.

    PubMed

    Ganzetti, Marco; Wenderoth, Nicole; Mantini, Dante

    2016-01-01

    Intensity non-uniformity (INU) in magnetic resonance (MR) imaging is a major issue when conducting analyses of brain structural properties. An inaccurate INU correction may result in qualitative and quantitative misinterpretations. Several INU correction methods exist, whose performance largely depends on the specific parameter settings that need to be chosen by the user. Here we addressed the question of how to select the best input parameters for a specific INU correction algorithm. Our investigation was based on the INU correction algorithm implemented in SPM, but this can in principle be extended to any other algorithm requiring the selection of input parameters. We conducted a comprehensive comparison of indirect metrics for the assessment of INU correction performance, namely the coefficient of variation of white matter (CVWM), the coefficient of variation of gray matter (CVGM), and the coefficient of joint variation between white matter and gray matter (CJV). Using simulated MR data, we observed the CJV to be more accurate than CVWM and CVGM, provided that the noise level in the INU-corrected image was controlled by means of spatial smoothing. Based on the CJV, we developed a data-driven approach for selecting INU correction parameters, which could effectively work on actual MR images. To this end, we implemented an enhanced procedure for the definition of white and gray matter masks, based on which the CJV was calculated. Our approach was validated using actual T1-weighted images collected with 1.5 T, 3 T, and 7 T MR scanners. We found that our procedure can reliably assist the selection of valid INU correction algorithm parameters, thereby contributing to an enhanced inhomogeneity correction in MR images. PMID:27014050
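
    The abstract does not spell out the CJV formula; the commonly used definition is CJV = (sigma_WM + sigma_GM) / |mu_WM - mu_GM|, with lower values indicating a better correction, and that definition is assumed in the short sketch below.

    ```python
    import numpy as np

    def coefficient_of_joint_variation(img, wm_mask, gm_mask):
        """CJV between white matter and gray matter; lower values mean less INU."""
        wm = img[wm_mask]                  # boolean masks select the two tissue classes
        gm = img[gm_mask]
        return (wm.std() + gm.std()) / abs(wm.mean() - gm.mean())
    ```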

  7. Intensity Inhomogeneity Correction of Structural MR Images: A Data-Driven Approach to Define Input Algorithm Parameters

    PubMed Central

    Ganzetti, Marco; Wenderoth, Nicole; Mantini, Dante

    2016-01-01

    Intensity non-uniformity (INU) in magnetic resonance (MR) imaging is a major issue when conducting analyses of brain structural properties. An inaccurate INU correction may result in qualitative and quantitative misinterpretations. Several INU correction methods exist, whose performance largely depends on the specific parameter settings that need to be chosen by the user. Here we addressed the question of how to select the best input parameters for a specific INU correction algorithm. Our investigation was based on the INU correction algorithm implemented in SPM, but this can in principle be extended to any other algorithm requiring the selection of input parameters. We conducted a comprehensive comparison of indirect metrics for the assessment of INU correction performance, namely the coefficient of variation of white matter (CVWM), the coefficient of variation of gray matter (CVGM), and the coefficient of joint variation between white matter and gray matter (CJV). Using simulated MR data, we observed the CJV to be more accurate than CVWM and CVGM, provided that the noise level in the INU-corrected image was controlled by means of spatial smoothing. Based on the CJV, we developed a data-driven approach for selecting INU correction parameters, which could effectively work on actual MR images. To this end, we implemented an enhanced procedure for the definition of white and gray matter masks, based on which the CJV was calculated. Our approach was validated using actual T1-weighted images collected with 1.5 T, 3 T, and 7 T MR scanners. We found that our procedure can reliably assist the selection of valid INU correction algorithm parameters, thereby contributing to an enhanced inhomogeneity correction in MR images. PMID:27014050

  8. Correction.

    PubMed

    2015-12-01

    In the article by Narayan et al (Narayan O, Davies JE, Hughes AD, Dart AM, Parker KH, Reid C, Cameron JD. Central aortic reservoir-wave analysis improves prediction of cardiovascular events in elderly hypertensives. Hypertension. 2015;65:629–635. doi: 10.1161/HYPERTENSIONAHA.114.04824), which published online ahead of print December 22, 2014, and appeared in the March 2015 issue of the journal, some corrections were needed.On page 632, Figure, panel A, the label PRI has been corrected to read RPI. In panel B, the text by the upward arrow, "10% increase in kd,” has been corrected to read, "10% decrease in kd." The corrected figure is shown below.The authors apologize for these errors. PMID:26558821

  9. A simplified procedure for correcting both errors and erasures of a Reed-Solomon code using the Euclidean algorithm

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Hsu, I. S.; Eastman, W. L.; Reed, I. S.

    1987-01-01

    It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial and the error evaluator polynomial in Berlekamp's key equation needed to decode a Reed-Solomon (RS) code. A simplified procedure is developed and proved to correct erasures as well as errors by replacing the initial condition of the Euclidean algorithm by the erasure locator polynomial and the Forney syndrome polynomial. By this means, the errata locator polynomial and the errata evaluator polynomial can be obtained, simultaneously and simply, by the Euclidean algorithm only. With this improved technique the complexity of time domain RS decoders for correcting both errors and erasures is reduced substantially from previous approaches. As a consequence, decoders for correcting both errors and erasures of RS codes can be made more modular, regular, simple, and naturally suitable for both VLSI and software implementation. An example illustrating this modified decoding procedure is given for a (15, 9) RS code.

  10. Correction

    NASA Astrophysics Data System (ADS)

    1995-04-01

    Seismic images of the Brooks Range, Arctic Alaska, reveal crustal-scale duplexing: Correction Geology, v. 23, p. 65-68 (January 1995) The correct Figure 4A, for the loose insert, is given here. See Figure 4A below. Corrected inserts will be available to those requesting copies of the article from the senior author, Gary S. Fuis, U.S. Geological Survey, 345 Middlefield Road, Menlo Park, CA 94025. Figure 4A. P-wave velocity model of Brooks Range region (thin gray contours) with migrated wide-angle reflections (heavy red lines) and migrated vertical-incidence reflections (short black lines) superimposed. Velocity contour interval is 0.25 km/s; 4, 5, and 6 km/s contours are labeled. Estimated error in velocities is one contour interval. Symbols on faults shown at top are as in Figure 2 caption.

  11. Denoising Algorithm for the Pixel-Response Non-Uniformity Correction of a Scientific CMOS Under Low Light Conditions

    NASA Astrophysics Data System (ADS)

    Hu, Changmiao; Bai, Yang; Tang, Ping

    2016-06-01

    We present a denoising algorithm for the pixel-response non-uniformity correction of a scientific complementary metal-oxide-semiconductor (CMOS) image sensor that captures images under extremely low-light conditions. By analyzing integrating sphere experimental data, we present a pixel-by-pixel flat-field denoising algorithm to remove this fixed pattern noise, which occurs under low-light conditions and at high pixel-response readouts. The response of the CMOS image sensor imaging system to a uniform radiance field shows a high level of spatial uniformity after the denoising algorithm has been applied.

  12. Correction.

    PubMed

    2016-02-01

    Neogi T, Jansen TLTA, Dalbeth N, et al. 2015 Gout classification criteria: an American College of Rheumatology/European League Against Rheumatism collaborative initiative. Ann Rheum Dis 2015;74:1789–98. The name of the 20th author was misspelled. The correct spelling is Janitzia Vazquez-Mellado. We regret the error. PMID:26881284

  13. Feature Selection and Effective Classifiers.

    ERIC Educational Resources Information Center

    Deogun, Jitender S.; Choubey, Suresh K.; Raghavan, Vijay V.; Sever, Hayri

    1998-01-01

    Develops and analyzes four algorithms for feature selection in the context of rough set methodology. Experimental results confirm the expected relationship between the time complexity of these algorithms and the classification accuracy of the resulting upper classifiers. When compared, results of upper classifiers perform better than lower…

  14. An Efficient Correction Algorithm for Eliminating Image Misalignment Effects on Co-Phasing Measurement Accuracy for Segmented Active Optics Systems

    PubMed Central

    Yue, Dan; Xu, Shuyan; Nie, Haitao; Wang, Zongyang

    2016-01-01

    The misalignment between recorded in-focus and out-of-focus images using the Phase Diversity (PD) algorithm leads to a dramatic decline in wavefront detection accuracy and image recovery quality for segmented active optics systems. This paper demonstrates the theoretical relationship between the image misalignment and tip-tilt terms in Zernike polynomials of the wavefront phase for the first time, and an efficient two-step alignment correction algorithm is proposed to eliminate these misalignment effects. This algorithm processes a spatial 2-D cross-correlation of the misaligned images, revising the offset to 1 or 2 pixels and narrowing the search range for alignment. Then, it eliminates the need for subpixel fine alignment to achieve adaptive correction by adding additional tip-tilt terms to the Optical Transfer Function (OTF) of the out-of-focus channel. The experimental results demonstrate the feasibility and validity of the proposed correction algorithm to improve the measurement accuracy during the co-phasing of segmented mirrors. With this alignment correction, the reconstructed wavefront is more accurate, and the recovered image is of higher quality. PMID:26934045
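
    The coarse alignment stage described above can be sketched as locating the peak of the 2-D cross-correlation between the two images, computed via the FFT; the remaining sub-pixel correction is then absorbed into tip-tilt terms of the out-of-focus OTF rather than image resampling. The snippet below shows only the coarse integer-offset estimate and is an illustrative assumption, not the authors' code.

    ```python
    import numpy as np

    def coarse_offset(img_a, img_b):
        """Integer (row, col) misalignment from the peak of the 2-D cross-correlation."""
        a = img_a - img_a.mean()
        b = img_b - img_b.mean()
        corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # interpret peaks past the midpoint as negative (wrapped) shifts
        return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
    ```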

  15. Enhancement of seminal stains using background correction algorithm with colour filters.

    PubMed

    Lee, Wee Chuen; Khoo, Bee Ee; Abdullah, Ahmad Fahmi Lim

    2016-06-01

    Evidence at crime scenes in the form of biological stains that cannot be visualized by naked-eye examination can be detected by imaging their fluorescence using a combination of excitation lights and suitable filters. These combinations selectively allow the passage of the fluorescence emitted from the targeted stains. However, interference from the fluorescence generated by many of the surface materials bearing the stains often makes it difficult to visualize the stains during forensic photography. This report describes the use of a background correction algorithm (BCA) to enhance the visibility of seminal stains, a type of biological evidence that fluoresces. While earlier reports described the use of narrow band-pass filters for other fluorescing evidence, here we utilize BCA to enhance images captured using commonly available colour filters: yellow, orange and red. Mean-based contrast adjustment was incorporated into BCA to adjust the background brightness so that the images' background appearance is similar, a crucial step for ensuring success when implementing BCA. Experimental results demonstrated the effectiveness of our proposed colour-filter approach using the improved BCA in enhancing the visibility of seminal stains at varying dilutions on selected surfaces. PMID:27061146

  16. An analytical algorithm for skew-slit imaging geometry with nonuniform attenuation correction

    SciTech Connect

    Huang Qiu; Zeng, Gengsheng L.

    2006-04-15

    The pinhole collimator is currently the collimator of choice in small animal single photon emission computed tomography (SPECT) imaging because it can provide high spatial resolution and reasonable sensitivity when the animal is placed very close to the pinhole. It is well known that if the collimator rotates around the object (e.g., a small animal) in a circular orbit to form a cone-beam imaging geometry with a planar trajectory, the acquired data are not sufficient for an exact artifact-free image reconstruction. In this paper a novel skew-slit collimator is mounted instead of the pinhole collimator in order to significantly reduce the image artifacts caused by the geometry. The skew-slit imaging geometry is a more generalized version of the pinhole imaging geometry. The multiple pinhole geometry can also be extended to the multiple-skew-slit geometry. An analytical algorithm for image reconstruction based on the tilted fan-beam inversion is developed with nonuniform attenuation compensation. Numerical simulation shows that the axial artifacts are evidently suppressed in the skew-slit images compared to the pinhole images and the attenuation correction is effective.

  17. Reduction of large set data transmission using algorithmically corrected model-based techniques for bandwidth efficiency

    NASA Astrophysics Data System (ADS)

    Khair, Joseph Daniel

    Communication requirements and demands on deployed systems are increasing daily. This increase is due to the desire for more capability, but also, due to the changing landscape of threats on remote vehicles. As such, it is important that we continue to find new and innovative ways to transmit data to and from these remote systems, consistent with this changing landscape. Specifically, this research shows that data can be transmitted to a remote system effectively and efficiently with a model-based approach using real-time updates, called Algorithmically Corrected Model-based Technique (ACMBT), resulting in substantial savings in communications overhead. To demonstrate this model-based data transmission technique, a hardware-based test fixture was designed and built. Execution and analysis software was created to perform a series of characterizations demonstrating the effectiveness of the new transmission method. The new approach was compared to a traditional transmission approach in the same environment, and the results were analyzed and presented. A Figure of Merit (FOM) was devised and presented to allow standardized comparison of traditional and proposed data transmission methodologies alongside bandwidth utilization metrics. The results of this research have successfully shown the model-based technique to be feasible. Additionally, this research has opened the trade space for future discussion and implementation of this technique.

  18. Embedded nonuniformity correction in infrared focal plane arrays using the Constant Range algorithm

    NASA Astrophysics Data System (ADS)

    Redlich, Rodolfo; Figueroa, Miguel; Torres, Sergio N.; Pezoa, Jorge E.

    2015-03-01

    We present a digital fixed-point architecture that performs real-time nonuniformity correction in infrared (IR) focal plane arrays using the Constant Range algorithm. The circuit estimates and compensates online the gains and offsets of a first-order nonuniformity model using pixel statistics from the video stream. We demonstrate our architecture with a prototype built on a Xilinx Spartan-6 XC6SLX45T field-programmable gate array (FPGA), which can process an IR video stream from a FLIR Tau 2 long-wave IR camera with a resolution of 640 × 480 14-bit pixels at up to 238 frames per second (fps) with low resource utilization and adds only 13 mW to the FPGA power. Post-layout simulations of a custom integrated circuit implementation of the architecture on a 32 nm CMOS process show that the circuit can operate at up to 900 fps at the same resolution, and consume less than 4.5 mW.
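
    The correction stage of a first-order nonuniformity model is simply a per-pixel gain and offset applied to each incoming frame; the Constant Range algorithm estimates those parameters online from pixel statistics. The floating-point NumPy sketch below illustrates both stages under a simplified reading of the Constant Range idea (mapping each pixel's observed minimum/maximum over a window of frames to a common range); it is an assumption for illustration, not the fixed-point FPGA implementation described above.

    ```python
    import numpy as np

    def constant_range_estimate(frames, low=0.0, high=1.0):
        """Per-pixel gain/offset mapping each pixel's observed min/max to [low, high]."""
        stack = np.stack([f.astype(np.float32) for f in frames])
        pmin, pmax = stack.min(axis=0), stack.max(axis=0)
        gain = (high - low) / np.maximum(pmax - pmin, 1e-6)
        offset = low - gain * pmin
        return gain, offset

    def apply_nuc(frame, gain, offset, max_count=2**14 - 1):
        """Apply the first-order correction y = gain * x + offset to one 14-bit frame."""
        return np.clip(gain * frame.astype(np.float32) + offset, 0, max_count)
    ```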

  19. Correction.

    PubMed

    2016-02-01

    In the article by Guessous et al (Guessous I, Pruijm M, Ponte B, Ackermann D, Ehret G, Ansermot N, Vuistiner P, Staessen J, Gu Y, Paccaud F, Mohaupt M, Vogt B, Pechère-Bertschi A, Martin PY, Burnier M, Eap CB, Bochud M. Associations of ambulatory blood pressure with urinary caffeine and caffeine metabolite excretions. Hypertension. 2015;65:691–696. doi: 10.1161/HYPERTENSIONAHA.114.04512), which published online ahead of print December 8, 2014, and appeared in the March 2015 issue of the journal, a correction was needed.One of the author surnames was misspelled. Antoinette Pechère-Berstchi has been corrected to read Antoinette Pechère-Bertschi.The authors apologize for this error. PMID:26763012

  20. A GPU-based finite-size pencil beam algorithm with 3D-density correction for radiotherapy dose calculation

    NASA Astrophysics Data System (ADS)

    Gu, Xuejun; Jelen, Urszula; Li, Jinsheng; Jia, Xun; Jiang, Steve B.

    2011-06-01

    Targeting at the development of an accurate and efficient dose calculation engine for online adaptive radiotherapy, we have implemented a finite-size pencil beam (FSPB) algorithm with a 3D-density correction method on graphics processing unit (GPU). This new GPU-based dose engine is built on our previously published ultrafast FSPB computational framework (Gu et al 2009 Phys. Med. Biol. 54 6287-97). Dosimetric evaluations against Monte Carlo dose calculations are conducted on ten IMRT treatment plans (five head-and-neck cases and five lung cases). For all cases, there is improvement with the 3D-density correction over the conventional FSPB algorithm and for most cases the improvement is significant. Regarding the efficiency, because of the appropriate arrangement of memory access and the usage of GPU intrinsic functions, the dose calculation for an IMRT plan can be accomplished well within 1 s (except for one case) with this new GPU-based FSPB algorithm. Compared to the previous GPU-based FSPB algorithm without 3D-density correction, this new algorithm, though slightly sacrificing the computational efficiency (~5-15% lower), has significantly improved the dose calculation accuracy, making it more suitable for online IMRT replanning.

  1. A GPU-based finite-size pencil beam algorithm with 3D-density correction for radiotherapy dose calculation.

    PubMed

    Gu, Xuejun; Jelen, Urszula; Li, Jinsheng; Jia, Xun; Jiang, Steve B

    2011-06-01

    Targeting at the development of an accurate and efficient dose calculation engine for online adaptive radiotherapy, we have implemented a finite-size pencil beam (FSPB) algorithm with a 3D-density correction method on graphics processing unit (GPU). This new GPU-based dose engine is built on our previously published ultrafast FSPB computational framework (Gu et al 2009 Phys. Med. Biol. 54 6287-97). Dosimetric evaluations against Monte Carlo dose calculations are conducted on ten IMRT treatment plans (five head-and-neck cases and five lung cases). For all cases, there is improvement with the 3D-density correction over the conventional FSPB algorithm and for most cases the improvement is significant. Regarding the efficiency, because of the appropriate arrangement of memory access and the usage of GPU intrinsic functions, the dose calculation for an IMRT plan can be accomplished well within 1 s (except for one case) with this new GPU-based FSPB algorithm. Compared to the previous GPU-based FSPB algorithm without 3D-density correction, this new algorithm, though slightly sacrificing the computational efficiency (∼5-15% lower), has significantly improved the dose calculation accuracy, making it more suitable for online IMRT replanning. PMID:21558589

  2. A robust in-situ warp-correction algorithm for VISAR streak camera data at the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Labaria, George R.; Warrick, Abbie L.; Celliers, Peter M.; Kalantar, Daniel H.

    2015-02-01

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high energy density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for the use in a production environment.
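
    A thin-plate-spline warp can be fitted directly from the observed comb-pulse positions to their ideal positions and then evaluated over the whole image to obtain corrected coordinates for every pixel. The SciPy-based sketch below (using RBFInterpolator with the thin_plate_spline kernel, available in SciPy 1.7+) illustrates the idea; it is not the NIF production code, and the resampling of the data image from the warped coordinates (e.g., with scipy.ndimage.map_coordinates) is left out.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def tps_warp_field(src_pts, dst_pts, shape):
        """Fit a TPS mapping src_pts -> dst_pts and evaluate it on an image grid.

        src_pts, dst_pts: (n, 2) arrays of (row, col) control-point coordinates.
        Returns an (H, W, 2) array of corrected coordinates for each pixel.
        """
        tps = RBFInterpolator(src_pts, dst_pts, kernel='thin_plate_spline')
        rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
        grid = np.column_stack([rows.ravel(), cols.ravel()]).astype(float)
        return tps(grid).reshape(shape[0], shape[1], 2)
    ```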

  3. A Robust In-Situ Warp-Correction Algorithm For VISAR Streak Camera Data at the National Ignition Facility

    SciTech Connect

    Labaria, George R.; Warrick, Abbie L.; Celliers, Peter M.; Kalantar, Daniel H.

    2015-01-12

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high-energy-density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for use in a production environment.
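
    As a rough illustration of the thin-plate-spline idea described in the two records above (not the NIF production code), the sketch below fits a TPS mapping from fiducial positions found in a comb calibration image to their ideal grid positions using scipy's RBFInterpolator; all point coordinates here are synthetic placeholders.

    ```python
    # Hypothetical TPS warp-correction sketch; fiducial coordinates are synthetic.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(0)
    # Ideal comb fiducial positions on a regular grid (N x 2, normalized units).
    ideal_pts = np.stack(np.meshgrid(np.linspace(0, 1, 8),
                                     np.linspace(0, 1, 8)), axis=-1).reshape(-1, 2)
    # Positions actually measured in the (distorted) calibration image.
    distorted_pts = ideal_pts + 0.01 * rng.standard_normal(ideal_pts.shape)

    # Thin-plate-spline map from distorted coordinates back to ideal coordinates.
    warp = RBFInterpolator(distorted_pts, ideal_pts, kernel='thin_plate_spline')

    # Apply the warp to arbitrary pixel coordinates of a data image.
    pixels = rng.uniform(0, 1, size=(5, 2))
    print(warp(pixels))          # corrected (unwarped) coordinates, shape (5, 2)
    ```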

  4. New Algorithm for Extracting Motion Information from PROPELLER Data and Head Motion Correction in T1-Weighted MRI.

    PubMed

    Feng, Yanqiu; Chen, Wufan

    2005-01-01

    PROPELLER (Periodically Rotated Overlapping ParallEl Lines with Enhanced Reconstruction) MRI, proposed by J. G. Pipe [1], offers a novel and effective means for compensating motion. For the reconstruction of PROPELLER data, algorithms that reliably and accurately extract inter-strip motion from data in the central overlapped area are crucial for motion artifact suppression. When implemented on T1-weighted MR data, the reconstruction algorithm, with motion estimated by registration based on maximizing correlation energy in the frequency domain (CF), produces images of low quality due to inaccurate estimation of motion. In this paper, a new algorithm is proposed for motion estimation based on registration by maximizing mutual information in the spatial domain (MIS). Furthermore, the optimization process is initialized by the CF algorithm, so the algorithm is abbreviated as the CF-MIS algorithm in this paper. With phantom and in vivo MR imaging, the CF-MIS algorithm was shown to have higher accuracy in rotation estimation than the CF algorithm. Consequently, head motion in T1-weighted PROPELLER MRI was better corrected. PMID:17282454
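
    As a loose illustration of the quantity the MIS registration maximizes, the snippet below computes a histogram-based mutual information score between two images; the bin count and the toy data are arbitrary choices, not the paper's settings.

    ```python
    # Histogram-based mutual information between two equally sized images (sketch).
    import numpy as np

    def mutual_information(img_a, img_b, bins=64):
        hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = hist_2d / hist_2d.sum()            # joint distribution
        px = pxy.sum(axis=1, keepdims=True)      # marginal of img_a
        py = pxy.sum(axis=0, keepdims=True)      # marginal of img_b
        nz = pxy > 0                             # avoid log(0)
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    # A registration loop would rotate/shift one strip, recompute this score,
    # and keep the transform that maximizes it.
    rng = np.random.default_rng(0)
    a = rng.random((128, 128))
    print(mutual_information(a, a) > mutual_information(a, rng.random((128, 128))))
    ```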

  5. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the ‘Extreme Learning Machine’ Algorithm

    PubMed Central

    McDonnell, Mark D.; Tissera, Migel D.; Vladusich, Tony; van Schaik, André; Tapson, Jonathan

    2015-01-01

    Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the ‘Extreme Learning Machine’ (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden unit operates only on a randomly sized and positioned patch of each image. This form of random ‘receptive field’ sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems. PMID:26262687
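
    A minimal numpy sketch of the core ELM recipe described above, with the random "receptive field" sparsification of the input weights; the layer sizes, tanh nonlinearity, and minimum patch size are assumptions for illustration, not the authors' exact settings.

    ```python
    # Sketch of an ELM classifier with random rectangular receptive fields.
    import numpy as np

    def train_elm(X, y, n_hidden=1000, img_shape=(28, 28), seed=0):
        rng = np.random.default_rng(seed)
        h, w = img_shape
        W = rng.standard_normal((X.shape[1], n_hidden))
        mask = np.zeros_like(W)
        for j in range(n_hidden):                      # one random patch per hidden unit
            r0, c0 = rng.integers(0, h - 7), rng.integers(0, w - 7)
            rh, cw = rng.integers(7, h - r0 + 1), rng.integers(7, w - c0 + 1)
            patch = np.zeros((h, w))
            patch[r0:r0 + rh, c0:c0 + cw] = 1.0
            mask[:, j] = patch.ravel()
        W *= mask                                      # sparse input weights
        H = np.tanh(X @ W)                             # hidden activations
        T = np.eye(int(y.max()) + 1)[y]                # one-hot targets
        beta, *_ = np.linalg.lstsq(H, T, rcond=None)   # output weights by least squares
        return W, beta

    def predict_elm(X, W, beta):
        return np.argmax(np.tanh(X @ W) @ beta, axis=1)
    ```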

  6. Correction.

    PubMed

    2015-05-22

    The Circulation Research article by Keith and Bolli (“String Theory” of c-kitpos Cardiac Cells: A New Paradigm Regarding the Nature of These Cells That May Reconcile Apparently Discrepant Results. Circ Res. 2015;116:1216-1230. doi: 10.1161/CIRCRESAHA.116.305557) states that van Berlo et al (2014) observed that large numbers of fibroblasts and adventitial cells, some smooth muscle and endothelial cells, and rare cardiomyocytes originated from c-kit positive progenitors. However, van Berlo et al reported that only occasional fibroblasts and adventitial cells derived from c-kit positive progenitors in their studies. Accordingly, the review has been corrected to indicate that van Berlo et al (2014) observed that large numbers of endothelial cells, with some smooth muscle cells and fibroblasts, and more rarely cardiomyocytes, originated from c-kit positive progenitors in their murine model. The authors apologize for this error, and the error has been noted and corrected in the online version of the article, which is available at http://circres.ahajournals.org/content/116/7/1216.full. PMID:25999426

  7. Correction

    NASA Astrophysics Data System (ADS)

    1998-12-01

    Alleged mosasaur bite marks on Late Cretaceous ammonites are limpet (patellogastropod) home scars, Geology, v. 26, p. 947-950 (October 1998). This article had the following printing errors: p. 947, Abstract, line 11, “sepia” should be “septa”; p. 947, 1st paragraph under Introduction, line 2, “creep” should be “deep”; p. 948, column 1, 2nd paragraph, line 7, “creep” should be “deep”; p. 949, column 1, 1st paragraph, line 1, “creep” should be “deep”; p. 949, column 1, 1st paragraph, line 5, “19774” should be “1977)”; p. 949, column 1, 4th paragraph, line 7, “in particular” should be “In particular”. CORRECTION: Mammalian community response to the latest Paleocene thermal maximum: An isotaphonomic study in the northern Bighorn Basin, Wyoming, Geology, v. 26, p. 1011-1014 (November 1998). An error appeared in the References Cited. The correct reference appears below: Fricke, H. C., Clyde, W. C., O'Neil, J. R., and Gingerich, P. D., 1998, Evidence for rapid climate change in North America during the latest Paleocene thermal maximum: Oxygen isotope compositions of biogenic phosphate from the Bighorn Basin (Wyoming): Earth and Planetary Science Letters, v. 160, p. 193-208.

  8. Phase spectrum algorithm for correction of time distortion in a wavelength demultiplexing analog-to-digital converter

    NASA Astrophysics Data System (ADS)

    Fu, Xin; Zhang, Hongming; Yao, Minyu

    2010-05-01

    An algorithm based on phase spectrum analysis is proposed that can be used to correct the timing distortion between the multiple parallel demultiplexed post-sampling pulse trains in wavelength demultiplexing analog-to-digital converters. The algorithm is theoretically presented and its operational principle is explained. The algorithm is then applied to two parallel demultiplexed post-sampling signals from a proof-of-principle system and fairly good results are obtained. This algorithm is potentially applicable in other opto-electronic hybrid systems where an interleaving and/or multiplexing mechanism is utilized, such as optical time-division multiplexing and optical clock division systems, photonic arbitrary waveform generators, and so on.

  9. A kurtosis-based wavelet algorithm for motion artifact correction of fNIRS data.

    PubMed

    Chiarelli, Antonio M; Maclin, Edward L; Fabiani, Monica; Gratton, Gabriele

    2015-05-15

    Movements are a major source of artifacts in functional Near-Infrared Spectroscopy (fNIRS). Several algorithms have been developed for motion artifact correction of fNIRS data, including Principal Component Analysis (PCA), targeted Principal Component Analysis (tPCA), Spline Interpolation (SI), and Wavelet Filtering (WF). WF is based on removing wavelets with coefficients deemed to be outliers based on their standardized scores, and it has proven to be effective on both synthesized and real data. However, when the SNR is high, it can lead to a reduction of signal amplitude. This may occur because standardized scores inherently adapt to the noise level, independently of the shape of the distribution of the wavelet coefficients. Higher-order moments of the wavelet coefficient distribution may provide a more diagnostic index of wavelet distribution abnormality than its variance. Here we introduce a new procedure that defines "outlier" wavelets as those contributing to a large fourth moment (i.e., kurtosis) of the coefficient distribution and eliminates them (kurtosis-based Wavelet Filtering, kbWF). We tested kbWF by comparing it with other existing procedures, using simulated functional hemodynamic responses added to real resting-state fNIRS recordings. These simulations show that kbWF is highly effective in eliminating transient noise, yielding results with higher SNR than other existing methods over a wide range of signal and noise amplitudes. This is because: (1) the procedure is iterative; and (2) kurtosis is more diagnostic than variance in identifying outliers. However, kbWF does not eliminate slow components of artifacts whose duration is comparable to the total recording time. PMID:25747916
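
    A simplified sketch of the kurtosis-guided idea (not the authors' exact iterative kbWF procedure), assuming the PyWavelets package; the wavelet, decomposition depth, and kurtosis limit are arbitrary placeholders.

    ```python
    # Zero detail coefficients until each level's kurtosis drops below a limit.
    import numpy as np
    import pywt
    from scipy.stats import kurtosis

    def kurtosis_wavelet_filter(signal, wavelet='db4', level=5, kurt_limit=3.3):
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        cleaned = [coeffs[0]]                          # keep approximation coefficients
        for detail in coeffs[1:]:
            d = detail.copy()
            for _ in range(d.size):
                if kurtosis(d, fisher=False) <= kurt_limit:
                    break
                d[np.argmax(np.abs(d))] = 0.0          # remove the strongest outlier
            cleaned.append(d)
        return pywt.waverec(cleaned, wavelet)
    ```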

  10. Classifying Microorganisms.

    ERIC Educational Resources Information Center

    Baker, William P.; Leyva, Kathryn J.; Lang, Michael; Goodmanis, Ben

    2002-01-01

    Focuses on an activity in which students sample air at school and generate ideas about how to classify the microorganisms they observe. The results are used to compare air quality among schools via the Internet. Supports the development of scientific inquiry and technology skills. (DDR)

  11. Quantile-based classifiers

    PubMed Central

    Hennig, C.; Viroli, C.

    2016-01-01

    Classification with small samples of high-dimensional data is important in many application areas. Quantile classifiers are distance-based classifiers that require a single parameter, regardless of the dimension, and classify observations according to a sum of weighted componentwise distances of the components of an observation to the within-class quantiles. An optimal percentage for the quantiles can be chosen by minimizing the misclassification error in the training sample. It is shown that this choice is consistent for the classification rule with the asymptotically optimal quantile and that under some assumptions, as the number of variables goes to infinity, the probability of correct classification converges to unity. The effect of skewness of the distributions of the predictor variables is discussed. The optimal quantile classifier gives low misclassification rates in a comprehensive simulation study and in a real-data application. PMID:27279668
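
    The componentwise construction lends itself to a compact sketch. The version below uses the standard quantile (pinball) distance with a single θ and omits the componentwise weighting; in practice θ would be selected by minimizing training-set misclassification, as the abstract describes.

    ```python
    # Sketch of a quantile-based classifier (single theta, unweighted components).
    import numpy as np

    def quantile_distance(x, q, theta):
        u = x - q
        return np.sum(np.where(u >= 0, theta * u, (theta - 1.0) * u), axis=-1)

    def fit(X, y, theta):
        classes = np.unique(y)
        quantiles = np.stack([np.quantile(X[y == c], theta, axis=0) for c in classes])
        return classes, quantiles

    def predict(X, classes, quantiles, theta):
        dists = np.stack([quantile_distance(X, q, theta) for q in quantiles], axis=1)
        return classes[np.argmin(dists, axis=1)]

    # With theta = 0.5 this reduces to classifying by L1 distance to the class medians.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 10)), rng.normal(1.5, 1, (50, 10))])
    y = np.repeat([0, 1], 50)
    classes, q = fit(X, y, theta=0.5)
    print((predict(X, classes, q, theta=0.5) == y).mean())   # training accuracy
    ```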

  12. Ground based measurements on reflectance towards validating atmospheric correction algorithms on IRS-P6 AWiFS data

    NASA Astrophysics Data System (ADS)

    Rani Sharma, Anu; Kharol, Shailesh Kumar; Kvs, Badarinath; Roy, P. S.

    In Earth observation, the atmosphere has a non-negligible influence on visible and infrared radiation that is strong enough to modify the reflected electromagnetic signal and at-target reflectance. Scattering of solar irradiance by atmospheric molecules and aerosols generates path radiance, which increases the apparent surface reflectance over dark surfaces, while absorption by aerosols and other molecules in the atmosphere causes loss of brightness to the scene, as recorded by the satellite sensor. In order to derive precise surface reflectance from satellite image data, it is indispensable to apply an atmospheric correction, which serves to remove the effects of molecular and aerosol scattering. In the present study, we have applied a fast atmospheric correction algorithm to IRS-P6 AWiFS satellite data, which can effectively retrieve surface reflectance under different atmospheric and surface conditions. The algorithm is based on MODIS climatology products and simplified use of the Second Simulation of Satellite Signal in the Solar Spectrum (6S) radiative transfer code, which is used to generate look-up tables (LUTs). The algorithm requires information on aerosol optical depth for correcting the satellite dataset. The proposed method is simple and easy to implement for estimating surface reflectance from the at-sensor recorded signal, on a per-pixel basis. The atmospheric correction algorithm has been tested for different IRS-P6 AWiFS False color composites (FCC) covering the ICRISAT Farm, Patancheru, Hyderabad, India under varying atmospheric conditions. Ground measurements of surface reflectance representing different land use/land cover, i.e., Red soil, Chick Pea crop, Groundnut crop and Pigeon Pea crop, were conducted to validate the algorithm, and a very good match was found between measured surface reflectance and atmospherically corrected reflectance for all spectral bands. Further, we aggregated all datasets together and compared the retrieved AWiFS reflectance with

  13. Guided filter and adaptive learning rate based non-uniformity correction algorithm for infrared focal plane array

    NASA Astrophysics Data System (ADS)

    Sheng-Hui, Rong; Hui-Xin, Zhou; Han-Lin, Qin; Rui, Lai; Kun, Qian

    2016-05-01

    Imaging non-uniformity of an infrared focal plane array (IRFPA) behaves as fixed-pattern noise superimposed on the image, which seriously affects the imaging quality of the infrared system. In scene-based non-uniformity correction methods, the drawbacks of ghosting artifacts and image blurring seriously affect the sensitivity of the IRFPA imaging system and visibly decrease image quality. This paper proposes an improved neural network non-uniformity correction method with an adaptive learning rate. On the one hand, using a guided filter, the proposed algorithm decreases the effect of ghosting artifacts. On the other hand, because an inappropriate learning rate is the main cause of image blurring, the proposed algorithm utilizes an adaptive learning rate with a temporal-domain factor to eliminate the effect of image blurring. In short, the proposed algorithm combines the merits of the guided filter and the adaptive learning rate. Several real and simulated infrared image sequences are utilized to verify the performance of the proposed algorithm. The experimental results indicate that the proposed algorithm can not only reduce the non-uniformity with fewer ghosting artifacts but also overcome the problem of image blurring in static areas.

  14. Fast and precise algorithms for calculating offset correction in single photon counting ASICs built in deep sub-micron technologies

    NASA Astrophysics Data System (ADS)

    Maj, P.

    2014-07-01

    An important trend in the design of readout electronics working in single photon counting mode for hybrid pixel detectors is to minimize the single pixel area without sacrificing its functionality. This is the reason why many digital and analog blocks are made with the smallest, or next to smallest, transistors possible. This causes a matching problem across the whole pixel matrix, which designers accept and which, of course, should be corrected with dedicated circuitry that, by the same rule of minimizing devices, itself suffers from mismatch. Therefore, the output of such a correction circuit, controlled by an ultra-small-area DAC, is not only a non-linear function but is also often non-monotonic. As long as it can be used for proper correction of the DC operating points inside each pixel, this is acceptable, but the time required for correction plays an important role in both chip verification and the design of a large, multi-chip system. Therefore, we present two algorithms: a precise one and a fast one. The first algorithm is based on the noise-hit profiles obtained during so-called threshold scan procedures. The fast correction procedure is based on a trim DAC scan, and it takes less than a minute in SPC detector systems consisting of several thousand pixels.

  15. Analytical fan-beam and cone-beam reconstruction algorithms with uniform attenuation correction for SPECT

    NASA Astrophysics Data System (ADS)

    Tang, Qiulin; Zeng, Gengsheng L.; Gullberg, Grant T.

    2005-07-01

    In this paper, we developed an analytical fan-beam reconstruction algorithm that compensates for uniform attenuation in SPECT. The new fan-beam algorithm is in the form of backprojection first, then filtering, and is mathematically exact. The algorithm is based on three components. The first one is the established generalized central-slice theorem, which relates the 1D Fourier transform of a set of arbitrary data and the 2D Fourier transform of the backprojected image. The second one is the fact that the backprojection of the fan-beam measurements is identical to the backprojection of the parallel measurements of the same object with the same attenuator. The third one is the stable analytical reconstruction algorithm for uniformly attenuated Radon data, developed by Metz and Pan. The fan-beam algorithm is then extended into a cone-beam reconstruction algorithm, where the orbit of the focal point of the cone-beam imaging geometry is a circle. This orbit geometry does not satisfy Tuy's condition and the obtained cone-beam algorithm is an approximation. In the cone-beam algorithm, the cone-beam data are first backprojected into the 3D image volume; then a slice-by-slice filtering is performed. This slice-by-slice filtering procedure is identical to that of the fan-beam algorithm. Both the fan-beam and cone-beam algorithms are efficient, and computer simulations are presented. The new cone-beam algorithm is compared with Bronnikov's cone-beam algorithm, and it is shown to have better performance with noisy projections.

  16. An algorithm for the radiometric and atmospheric correction of AVHRR data in the solar reflective channels

    NASA Astrophysics Data System (ADS)

    Teillet, P. M.

    1992-09-01

    Radiometric and atmospheric corrections are formulated with a view to computing vegetation indices such as the Normalized Difference Vegetation Index (NDVI) from surface reflectances rather than the digital signal levels recorded at the sensor. In particular, look-up table (LUT) results from an atmospheric radiative transfer code are used to save time and avoid the complexities of running and maintaining such a code in a production environment. The data flow for radiometric image correction is very similar to commonly used geometric correction data flows. The role of terrain elevation in the atmospheric correction process is discussed and the effect of topography on NDVI is highlighted.
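
    A hedged sketch of the kind of LUT-driven correction the record above formulates: top-of-atmosphere reflectance is inverted to surface reflectance with the usual Lambertian single-layer formula (path reflectance, two-way transmittance, and spherical albedo would be interpolated from the LUT), and NDVI is then computed from the corrected bands. All numeric values below are placeholders.

    ```python
    # Lambertian inversion of TOA reflectance followed by NDVI (illustrative values).
    import numpy as np

    def toa_to_surface(rho_toa, rho_path, t_down, t_up, s_albedo):
        y = (rho_toa - rho_path) / (t_down * t_up)     # path-corrected, transmittance-scaled
        return y / (1.0 + s_albedo * y)                # account for spherical-albedo coupling

    def ndvi(red, nir):
        return (nir - red) / (nir + red)

    red = toa_to_surface(0.12, rho_path=0.05, t_down=0.85, t_up=0.90, s_albedo=0.12)
    nir = toa_to_surface(0.35, rho_path=0.02, t_down=0.90, t_up=0.93, s_albedo=0.10)
    print(round(ndvi(red, nir), 3))
    ```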

  17. A parallel algorithm for error correction in high-throughput short-read data on CUDA-enabled graphics hardware.

    PubMed

    Shi, Haixiang; Schmidt, Bertil; Liu, Weiguo; Müller-Wittig, Wolfgang

    2010-04-01

    Emerging DNA sequencing technologies open up exciting new opportunities for genome sequencing by generating read data with a massive throughput. However, produced reads are significantly shorter and more error-prone compared to the traditional Sanger shotgun sequencing method. This poses challenges for de novo DNA fragment assembly algorithms in terms of both accuracy (to deal with short, error-prone reads) and scalability (to deal with very large input data sets). In this article, we present a scalable parallel algorithm for correcting sequencing errors in high-throughput short-read data so that error-free reads can be available before DNA fragment assembly, which is of high importance to many graph-based short-read assembly tools. The algorithm is based on spectral alignment and uses the Compute Unified Device Architecture (CUDA) programming model. To gain efficiency we are taking advantage of the CUDA texture memory using a space-efficient Bloom filter data structure for spectrum membership queries. We have tested the runtime and accuracy of our algorithm using real and simulated Illumina data for different read lengths, error rates, input sizes, and algorithmic parameters. Using a CUDA-enabled mass-produced GPU (available for less than US$400 at any local computer outlet), this results in speedups of 12-84 times for the parallelized error correction, and speedups of 3-63 times for both sequential preprocessing and parallelized error correction compared to the publicly available Euler-SR program. Our implementation is freely available for download from http://cuda-ec.sourceforge.net . PMID:20426693
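
    The Bloom filter used for spectrum membership is easy to sketch on the CPU; the pure-Python version below uses SHA-256-derived hash positions. The bit-array size and hash count are arbitrary, and the GPU texture-memory layout of the paper is not reproduced.

    ```python
    # Simple Bloom filter for k-mer spectrum membership queries (illustrative only).
    import hashlib

    class BloomFilter:
        def __init__(self, n_bits=1 << 20, n_hashes=4):
            self.n_bits, self.n_hashes = n_bits, n_hashes
            self.bits = bytearray(n_bits // 8)

        def _positions(self, kmer):
            for i in range(self.n_hashes):
                digest = hashlib.sha256(f"{i}:{kmer}".encode()).digest()
                yield int.from_bytes(digest[:8], "little") % self.n_bits

        def add(self, kmer):
            for p in self._positions(kmer):
                self.bits[p // 8] |= 1 << (p % 8)

        def __contains__(self, kmer):
            return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(kmer))

    spectrum = BloomFilter()
    spectrum.add("ACGTACGTACGTACGTACGT")
    print("ACGTACGTACGTACGTACGT" in spectrum)   # True
    print("TTTTTTTTTTTTTTTTTTTT" in spectrum)   # False (with high probability)
    ```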

  18. An experimental study of the scatter correction by using a beam-stop-array algorithm with digital breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Kim, Ye-seul; Park, Hye-Suk; Kim, Hee-Joung; Choi, Young-Wook; Choi, Jae-Gu

    2014-12-01

    Digital breast tomosynthesis (DBT) is a technique that was developed to overcome the limitations of conventional digital mammography by reconstructing slices through the breast from projections acquired at different angles. In developing and optimizing DBT, the x-ray scatter reduction technique remains a significant challenge due to projection geometry and radiation dose limitations. The most common approach to scatter reduction is a beam-stop-array (BSA) algorithm; however, this method raises concerns regarding the additional exposure involved in acquiring the scatter distribution. The compressed breast is roughly symmetric, and the scatter profiles from projections acquired at axially opposite angles are similar to mirror images. The purpose of this study was to apply the BSA algorithm using only two scans with a beam stop array, which estimates the scatter distribution with minimum additional exposure. The results of the scatter correction with angular interpolation were comparable to those of the scatter correction with all scatter distributions at each angle. The exposure increase was less than 13%. This study demonstrated the influence of the scatter correction obtained by using the BSA algorithm with minimum exposure, which indicates its potential for practical applications.

  19. Atmospheric Correction, Vicarious Calibration and Development of Algorithms for Quantifying Cyanobacteria Blooms from Oceansat-1 OCM Satellite Data

    NASA Astrophysics Data System (ADS)

    Dash, P.; Walker, N. D.; Mishra, D. R.; Hu, C.; D'Sa, E. J.; Pinckney, J. L.

    2011-12-01

    Cyanobacteria represent a major harmful algal group in fresh to brackish water environments. Lac des Allemands, a freshwater lake located southwest of New Orleans, Louisiana on the upper end of the Barataria Estuary, provides a natural laboratory for remote characterization of cyanobacteria blooms because of their seasonal occurrence. The Ocean Colour Monitor (OCM) sensor provides radiance measurements similar to SeaWiFS but with higher spatial resolution. However, OCM does not have a standard atmospheric correction procedure, and it is difficult to find a detailed description of the entire atmospheric correction procedure for ocean (or lake) in one place. Atmospheric correction of satellite data over small lakes and estuaries (Case 2 waters) is also challenging due to difficulties in estimation of aerosol scattering accurately in these areas. Therefore, an atmospheric correction procedure was written for processing OCM data, based on the extensive work done for SeaWiFS. Since OCM-retrieved radiances were abnormally low in the blue wavelength region, a vicarious calibration procedure was also developed. Empirical inversion algorithms were developed to convert the OCM remote sensing reflectance (Rrs) at bands centered at 510.6 and 556.4 nm to concentrations of phycocyanin (PC), the primary cyanobacterial pigment. A holistic approach was followed to minimize the influence of other optically active constituents on the PC algorithm. Similarly, empirical algorithms to estimate chlorophyll a (Chl a) concentrations were developed using OCM bands centered at 556.4 and 669 nm. The best PC algorithm (R2=0.7450, p<0.0001, n=72) yielded a root mean square error (RMSE) of 36.92 μg/L with a relative RMSE of 10.27% (PC from 2.75-363.50 μg/L, n=48). The best algorithm for Chl a (R2=0.7510, p<0.0001, n=72) produced an RMSE of 31.19 μg/L with a relative RMSE of 16.56% (Chl a from 9.46-212.76 μg/L, n=48). While more field data are required to further validate the long

  20. Ablation algorithms and corneal asphericity in myopic correction with excimer lasers

    NASA Astrophysics Data System (ADS)

    Iroshnikov, Nikita G.; Larichev, Andrey V.; Yablokov, Michail G.

    2007-06-01

    The purpose of this work is to study the change in corneal asphericity after myopic refractive correction by means of excimer lasers. As the ablation profile shape plays a key role in post-op corneal asphericity, the ablation profiles of recent lasers should be studied. The other task of this research was to analyze operation (LASIK) outcomes of one of the lasers with a generic spherical ablation profile and to compare the asphericity change with theoretical predictions. Several correction methods, such as custom-generated aspherical profiles, may be utilized to mitigate unwanted effects of the asphericity change. Here we also present preliminary results of such a correction for one of the excimer lasers.

  1. Spectrum correction algorithm for detectors in airborne radioactivity monitoring equipment NH-UAV based on a ratio processing method

    NASA Astrophysics Data System (ADS)

    Cao, Ye; Tang, Xiao-Bin; Wang, Peng; Meng, Jia; Huang, Xi; Wen, Liang-Sheng; Chen, Da

    2015-10-01

    The unmanned aerial vehicle (UAV) radiation monitoring method plays an important role in nuclear accident emergencies. In this research, a spectrum correction algorithm for the UAV airborne radioactivity monitoring equipment NH-UAV was studied to measure the radioactive nuclides within a small area in real time and in a fixed place. The simulated spectra of the high-purity germanium (HPGe) detector and the lanthanum bromide (LaBr3) detector in the equipment were obtained using the Monte Carlo technique. Spectrum correction coefficients were calculated by applying ratio processing to the net peak areas of the two detectors, taking the detection spectrum of the HPGe detector as the accuracy reference for correcting the detection spectrum of the LaBr3 detector. The relationship between the spectrum correction coefficient and the size of the source term was also investigated. A good linear relation exists between the spectrum correction coefficient and the corresponding energy (R2=0.9765). The maximum relative deviation from the real condition was reduced from 1.65 to 0.035. The spectrum correction method was thus verified as feasible.
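
    A toy numpy illustration of the ratio-processing step: per-line correction coefficients are formed from the net peak areas of the two detectors and then regressed against energy. All numbers are placeholders, not values from the NH-UAV study.

    ```python
    # Ratio of net peak areas -> correction coefficients -> linear fit versus energy.
    import numpy as np

    energies_keV = np.array([122.0, 662.0, 1173.0, 1332.0])
    net_area_hpge = np.array([5.1e4, 2.3e4, 1.4e4, 1.2e4])   # reference detector
    net_area_labr = np.array([4.4e4, 2.1e4, 1.1e4, 0.9e4])   # detector to be corrected

    coeff = net_area_hpge / net_area_labr           # spectrum correction coefficients
    slope, intercept = np.polyfit(energies_keV, coeff, 1)
    print(f"coefficient(E) ≈ {slope:.2e}*E + {intercept:.3f}")
    ```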

  2. Dose-calculation algorithms in the context of inhomogeneity corrections for high energy photon beams

    SciTech Connect

    Papanikolaou, Niko; Stathakis, Sotirios

    2009-10-15

    Radiation therapy has witnessed a plethora of innovations and developments in the past 15 years. Since the introduction of computed tomography for treatment planning there has been a steady introduction of new methods to refine treatment delivery. Imaging continues to be an integral part of the planning, but also the delivery, of modern radiotherapy. However, all the efforts of image guided radiotherapy, intensity-modulated planning and delivery, adaptive radiotherapy, and everything else that we pride ourselves in having in the armamentarium can fall short, unless there is an accurate dose-calculation algorithm. The agreement between the calculated and delivered doses is of great significance in radiation therapy since the accuracy of the absorbed dose as prescribed determines the clinical outcome. Dose-calculation algorithms have evolved greatly over the years in an effort to be more inclusive of the effects that govern the true radiation transport through the human body. In this Vision 20/20 paper, we look back to see how it all started and where things are now in terms of dose algorithms for photon beams and the inclusion of tissue heterogeneities. Convolution-superposition algorithms have dominated the treatment planning industry for the past few years. Monte Carlo techniques have an inherent accuracy that is superior to any other algorithm and as such will continue to be the gold standard, along with measurements, and maybe one day will be the algorithm of choice for all particle treatment planning in radiation therapy.

  3. Evaluation of Residual Static Corrections by Hybrid Genetic Algorithm Steepest Ascent Autostatics Inversion.Application southern Algerian fields

    NASA Astrophysics Data System (ADS)

    Eladj, Said; bansir, fateh; ouadfeul, sid Ali

    2016-04-01

    The application of a genetic algorithm starts with an initial population of chromosomes representing a "model space". Chromosome chains are preferentially reproduced based on their fitness compared to the total population; a good chromosome thus has a greater opportunity to produce offspring than other chromosomes in the population. The advantage of the HGA/SAA combination is the use of a global search approach over a large population of local maxima to significantly improve the performance of the method. To define the parameters of the Hybrid Genetic Algorithm Steepest Ascent Autostatics (HGA/SAA) job, we first evaluated, by testing the "Steepest Ascent" stage, the optimal parameters related to the data used. (1) The number of hill-climbing iterations is equal to 40; this parameter defines the participation of the "SA" algorithm in this hybrid approach. (2) The minimum eigenvalue for SA is 0.8; this is linked to the quality of the data and the S/N ratio. To assess the performance of hybrid genetic algorithms in the inversion for estimating residual static corrections, tests were performed to determine the number of generations of HGA/SAA. Using the values of residual static corrections already calculated by the "SAA and CSAA" approaches, learning proved very effective in building the cross-correlation table. To determine the optimal number of generations, we conducted a series of tests ranging from 10 to 200 generations. The application to real seismic data from southern Algeria allowed us to judge the performance and capacity of the inversion with this hybrid "HGA/SAA" method. This experience clarified the influence of the quality of the corrections estimated from "SAA/CSAA" and the optimum number of generations of the hybrid genetic algorithm "HGA" required to obtain satisfactory performance. Twenty (20) generations were enough to improve the continuity and resolution of seismic horizons. This will allow

  4. Depth-resolved analytical model and correction algorithm for photothermal optical coherence tomography

    PubMed Central

    Lapierre-Landry, Maryse; Tucker-Schwartz, Jason M.; Skala, Melissa C.

    2016-01-01

    Photothermal OCT (PT-OCT) is an emerging molecular imaging technique that occupies a spatial imaging regime between microscopy and whole body imaging. PT-OCT would benefit from a theoretical model to optimize imaging parameters and test image processing algorithms. We propose the first analytical PT-OCT model to replicate an experimental A-scan in homogeneous and layered samples. We also propose the PT-CLEAN algorithm to reduce phase-accumulation and shadowing, two artifacts found in PT-OCT images, and demonstrate it on phantoms and in vivo mouse tumors. PMID:27446693

  5. The variable refractive index correction algorithm based on a stereo light microscope

    NASA Astrophysics Data System (ADS)

    Pei, W.; Zhu, Y. Y.

    2010-02-01

    Refraction occurs at least twice on both the top and the bottom surfaces of the plastic plate covering the micro channel in a microfluidic chip. The refraction and the nonlinear model of a stereo light microscope (SLM) may severely affect measurement accuracy. In this paper, we study the correlation between optical paths of the SLM and present an algorithm to adjust the refractive index based on the SLM. Our algorithm quantizes the influence of cover plate and double optical paths on the measurement accuracy, and realizes non-destructive, non-contact and precise 3D measurement of a hyaloid and closed container.

  6. Classifying Human Leg Motions with Uniaxial Piezoelectric Gyroscopes

    PubMed Central

    Tunçel, Orkun; Altun, Kerem; Barshan, Billur

    2009-01-01

    This paper provides a comparative study on the different techniques of classifying human leg motions that are performed using two low-cost uniaxial piezoelectric gyroscopes worn on the leg. A number of feature sets, extracted from the raw inertial sensor data in different ways, are used in the classification process. The classification techniques implemented and compared in this study are: Bayesian decision making (BDM), a rule-based algorithm (RBA) or decision tree, least-squares method (LSM), k-nearest neighbor algorithm (k-NN), dynamic time warping (DTW), support vector machines (SVM), and artificial neural networks (ANN). A performance comparison of these classification techniques is provided in terms of their correct differentiation rates, confusion matrices, computational cost, and training and storage requirements. Three different cross-validation techniques are employed to validate the classifiers. The results indicate that BDM, in general, results in the highest correct classification rate with relatively small computational cost. PMID:22291521
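
    For one of the compared techniques (k-NN), the evaluation loop is short with scikit-learn; the feature matrix and labels below are random placeholders standing in for the gyroscope-derived feature sets, and the neighbor count and fold count are arbitrary.

    ```python
    # Cross-validated k-NN classification of motion feature vectors (placeholder data).
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 12))          # placeholder feature vectors
    y = rng.integers(0, 8, size=200)            # placeholder motion labels

    scores = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=10)
    print(f"mean correct classification rate: {scores.mean():.2f}")
    ```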

  7. New baseline correction algorithm for text-line recognition with bidirectional recurrent neural networks

    NASA Astrophysics Data System (ADS)

    Morillot, Olivier; Likforman-Sulem, Laurence; Grosicki, Emmanuèle

    2013-04-01

    Many preprocessing techniques have been proposed for isolated word recognition. However, recently, recognition systems have dealt with text blocks and their compound text lines. In this paper, we propose a new preprocessing approach to efficiently correct baseline skew and fluctuations. Our approach is based on a sliding window within which the vertical position of the baseline is estimated. Segmentation of text lines into subparts is, thus, avoided. Experiments conducted on a large publicly available database (Rimes), with a BLSTM (bidirectional long short-term memory) recurrent neural network recognition system, show that our baseline correction approach highly improves performance.
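
    A much-simplified sketch of the sliding-window idea: estimate a local baseline height per window (here simply the vertical ink centroid, which is cruder than the paper's estimator), interpolate it across columns, and shift each column to flatten the line. Window size and the binary-image convention are assumptions.

    ```python
    # Sliding-window baseline estimation and correction for a binary text-line image.
    import numpy as np

    def correct_baseline(img, window=40):
        h, w = img.shape                      # img: 1 = ink, 0 = background
        cols = np.arange(w)
        centers = []
        for c0 in range(0, w, window):
            ys, _ = np.nonzero(img[:, c0:c0 + window])
            centers.append(ys.mean() if ys.size else h / 2.0)
        knots = np.arange(0, w, window) + window / 2.0
        baseline = np.interp(cols, knots, centers)      # baseline height per column
        shifts = np.round(h / 2.0 - baseline).astype(int)
        out = np.zeros_like(img)
        for c in cols:                        # np.roll wraps around; fine for a sketch
            out[:, c] = np.roll(img[:, c], shifts[c])
        return out
    ```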

  8. The Use of Anatomical Information for Molecular Image Reconstruction Algorithms: Attenuation/Scatter Correction, Motion Compensation, and Noise Reduction.

    PubMed

    Chun, Se Young

    2016-03-01

    PET and SPECT are important tools for providing valuable molecular information about patients to clinicians. Advances in nuclear medicine hardware technologies and statistical image reconstruction algorithms enabled significantly improved image quality. Sequentially or simultaneously acquired anatomical images such as CT and MRI from hybrid scanners are also important ingredients for improving the image quality of PET or SPECT further. High-quality anatomical information has been used and investigated for attenuation and scatter corrections, motion compensation, and noise reduction via post-reconstruction filtering and regularization in inverse problems. In this article, we will review works using anatomical information for molecular image reconstruction algorithms for better image quality by describing mathematical models, discussing sources of anatomical information for different cases, and showing some examples. PMID:26941855

  9. An efficient Monte Carlo-based algorithm for scatter correction in keV cone-beam CT

    NASA Astrophysics Data System (ADS)

    Poludniowski, G.; Evans, P. M.; Hansen, V. N.; Webb, S.

    2009-06-01

    A new method is proposed for scatter-correction of cone-beam CT images. A coarse reconstruction is used in initial iteration steps. Modelling of the x-ray tube spectra and detector response are included in the algorithm. Photon diffusion inside the imaging subject is calculated using the Monte Carlo method. Photon scoring at the detector is calculated using forced detection to a fixed set of node points. The scatter profiles are then obtained by linear interpolation. The algorithm is referred to as the coarse reconstruction and fixed detection (CRFD) technique. Scatter predictions are quantitatively validated against a widely used general-purpose Monte Carlo code: BEAMnrc/EGSnrc (NRCC, Canada). Agreement is excellent. The CRFD algorithm was applied to projection data acquired with a Synergy XVI CBCT unit (Elekta Limited, Crawley, UK), using RANDO and Catphan phantoms (The Phantom Laboratory, Salem NY, USA). The algorithm was shown to be effective in removing scatter-induced artefacts from CBCT images, and took as little as 2 min on a desktop PC. Image uniformity was greatly improved as was CT-number accuracy in reconstructions. This latter improvement was less marked where the expected CT-number of a material was very different to the background material in which it was embedded.

  10. Natural and Unnatural Oil Layers on the Surface of the Gulf of Mexico Detected and Quantified in Synthetic Aperture RADAR Images with Texture Classifying Neural Network Algorithms

    NASA Astrophysics Data System (ADS)

    MacDonald, I. R.; Garcia-Pineda, O. G.; Morey, S. L.; Huffer, F.

    2011-12-01

    Effervescent hydrocarbons rise naturally from hydrocarbon seeps in the Gulf of Mexico and reach the ocean surface. This oil forms thin (~0.1 μm) layers that enhance specular reflectivity and have been widely used to quantify the abundance and distribution of natural seeps using synthetic aperture radar (SAR). An analogous process occurred at a vastly greater scale for oil and gas discharged from BP's Macondo well blowout. SAR data allow direct comparison of the areas of the ocean surface covered by oil from natural sources and the discharge. We used a texture classifying neural network algorithm to quantify the areas of naturally occurring oil-covered water in 176 SAR image collections from the Gulf of Mexico obtained between May 1997 and November 2007, prior to the blowout. Separately we also analyzed 36 SAR image collections obtained between 26 April and 30 July 2010, while the discharged oil was visible in the Gulf of Mexico. For the naturally occurring oil, we removed pollution events and transient oceanographic effects by including only the reflectance anomalies that recurred in the same locality over multiple images. We measured the area of oil layers in a grid of 10x10 km cells covering the entire Gulf of Mexico. Floating oil layers were observed in only a fraction of the total Gulf area amounting to 1.22x10^5 km^2. In a bootstrap sample of 2000 replications, the combined average area of these layers was 7.80x10^2 km^2 (sd 86.03). For a regional comparison, we divided the Gulf of Mexico into four quadrates along 90° W longitude, and 25° N latitude. The NE quadrate, where the BP discharge occurred, received on average 7.0% of the total natural seepage in the Gulf of Mexico (5.24 x10^2 km^2, sd 21.99); the NW quadrate received on average 68.0% of this total (5.30 x10^2 km^2, sd 69.67). The BP blowout occurred in the NE quadrate of the Gulf of Mexico; discharged oil that reached the surface drifted over a large area north of 25° N. Performing a

  11. Lorentz force correction to the Boltzmann radiation transport equation and its implications for Monte Carlo algorithms

    NASA Astrophysics Data System (ADS)

    Bouchard, Hugo; Bielajew, Alex

    2015-07-01

    To establish a theoretical framework for generalizing Monte Carlo transport algorithms by adding external electromagnetic fields to the Boltzmann radiation transport equation in a rigorous and consistent fashion. Using first principles, the Boltzmann radiation transport equation is modified by adding a term describing the variation of the particle distribution due to the Lorentz force. The implications of this new equation are evaluated by investigating the validity of Fano’s theorem. Additionally, Lewis’ approach to multiple scattering theory in infinite homogeneous media is redefined to account for the presence of external electromagnetic fields. The equation is modified and yields a description consistent with the deterministic laws of motion as well as probabilistic methods of solution. The time-independent Boltzmann radiation transport equation is generalized to account for the electromagnetic forces in an additional operator similar to the interaction term. Fano’s and Lewis’ approaches are stated in this new equation. Fano’s theorem is found not to apply in the presence of electromagnetic fields. Lewis’ theory for electron multiple scattering and moments, accounting for the coupling between the Lorentz force and multiple elastic scattering, is found. However, further investigation is required to develop useful algorithms for Monte Carlo and deterministic transport methods. To test the accuracy of Monte Carlo transport algorithms in the presence of electromagnetic fields, the Fano cavity test, as currently defined, cannot be applied. Therefore, new tests must be designed for this specific application. A multiple scattering theory that accurately couples the Lorentz force with elastic scattering could improve Monte Carlo efficiency. The present study proposes a new theoretical framework to develop such algorithms.

  12. Lorentz force correction to the Boltzmann radiation transport equation and its implications for Monte Carlo algorithms.

    PubMed

    Bouchard, Hugo; Bielajew, Alex

    2015-07-01

    To establish a theoretical framework for generalizing Monte Carlo transport algorithms by adding external electromagnetic fields to the Boltzmann radiation transport equation in a rigorous and consistent fashion. Using first principles, the Boltzmann radiation transport equation is modified by adding a term describing the variation of the particle distribution due to the Lorentz force. The implications of this new equation are evaluated by investigating the validity of Fano's theorem. Additionally, Lewis' approach to multiple scattering theory in infinite homogeneous media is redefined to account for the presence of external electromagnetic fields. The equation is modified and yields a description consistent with the deterministic laws of motion as well as probabilistic methods of solution. The time-independent Boltzmann radiation transport equation is generalized to account for the electromagnetic forces in an additional operator similar to the interaction term. Fano's and Lewis' approaches are stated in this new equation. Fano's theorem is found not to apply in the presence of electromagnetic fields. Lewis' theory for electron multiple scattering and moments, accounting for the coupling between the Lorentz force and multiple elastic scattering, is found. However, further investigation is required to develop useful algorithms for Monte Carlo and deterministic transport methods. To test the accuracy of Monte Carlo transport algorithms in the presence of electromagnetic fields, the Fano cavity test, as currently defined, cannot be applied. Therefore, new tests must be designed for this specific application. A multiple scattering theory that accurately couples the Lorentz force with elastic scattering could improve Monte Carlo efficiency. The present study proposes a new theoretical framework to develop such algorithms. PMID:26061045

  13. A smart phone-based robust correction algorithm for the colorimetric detection of Urinary Tract Infection.

    PubMed

    Karlsen, Haakon; Tao Dong

    2015-08-01

    This paper presents preliminary work on developing a smartphone-based application for colorimetric detection of Urinary Tract Infection. The purpose is to make a smartphone function as a practical point-of-care device for nurses or medical personnel without access to strip readers. The main challenge is achieving constancy of camera color perception across different illuminations and devices, which is the first step towards a practical solution without additional equipment. A previously reported black-and-white reference correction and a comprehensive color image normalization have been utilized in this work. Comprehensive color image normalization appears to be quite effective at correcting the difference in perceived color due to different illumination and is therefore a candidate for inclusion in further work. PMID:26736494
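
    A sketch of a comprehensive colour-normalization loop in the spirit of the method mentioned above: it alternates pixel-wise chromaticity normalization with channel-wise scaling until convergence. The tolerance and iteration cap are arbitrary, and this is a generic reading of the technique rather than the authors' implementation.

    ```python
    # Alternating pixel and channel normalization of an RGB image (illustrative).
    import numpy as np

    def comprehensive_normalization(img, tol=1e-6, max_iter=100):
        x = img.astype(float) + 1e-12                        # (H, W, 3); avoid /0
        for _ in range(max_iter):
            prev = x.copy()
            x = x / x.sum(axis=2, keepdims=True)             # pixel-wise normalization
            x = x / (3.0 * x.mean(axis=(0, 1), keepdims=True))  # channel-wise scaling
            if np.max(np.abs(x - prev)) < tol:
                break
        return x
    ```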

  14. Description and comparison of algorithms for correcting anisotropic magnification in cryo-EM images.

    PubMed

    Zhao, Jianhua; Brubaker, Marcus A; Benlekbir, Samir; Rubinstein, John L

    2015-11-01

    Single particle electron cryomicroscopy (cryo-EM) allows for structures of proteins and protein complexes to be determined from images of non-crystalline specimens. Cryo-EM data analysis requires electron microscope images of randomly oriented ice-embedded protein particles to be rotated and translated to allow for coherent averaging when calculating three-dimensional (3D) structures. Rotation of 2D images is usually done with the assumption that the magnification of the electron microscope is the same in all directions. However, due to electron optical aberrations, this condition is not met with some electron microscopes when used with the settings necessary for cryo-EM with a direct detector device (DDD) camera. Correction of images by linear interpolation in real space has allowed high-resolution structures to be calculated from cryo-EM images for symmetric particles. Here we describe and compare a simple real space method, a simple Fourier space method, and a somewhat more sophisticated Fourier space method to correct images for a measured anisotropy in magnification. Further, anisotropic magnification causes contrast transfer function (CTF) parameters estimated from image power spectra to have an apparent systematic astigmatism. To address this problem we develop an approach to adjust CTF parameters measured from distorted images so that they can be used with corrected images. The effect of anisotropic magnification on CTF parameters provides a simple way of detecting magnification anisotropy in cryo-EM datasets. PMID:26087140

  15. An accurate and efficient algorithm for Faraday rotation corrections for spaceborne microwave radiometers

    NASA Astrophysics Data System (ADS)

    Singh, Malkiat; Bettenhausen, Michael H.

    2011-08-01

    Faraday rotation changes the polarization plane of linearly polarized microwaves that propagate through the ionosphere. To correct for ionospheric polarization error, it is necessary to have electron density profiles on a global scale that represent the ionosphere in real time. We use ray tracing through the RIBG model, which combines the ionospheric conductivity and electron density (ICED), Bent, and Gallagher models, to specify the ionospheric conditions, ingesting GPS data from observing stations that are as close as possible to the observation time and location of the space system for which the corrections are required. To calculate Faraday rotation corrections accurately, we also utilize the ray-tracing utility of the RIBG model instead of the usual shell-model assumption for the ionosphere. We use WindSat data, which exhibit a wide range of raypath orientations and a high data rate of observations, to provide a realistic data set for analysis. The standard single-shell models at 350 and 400 km are studied along with a new three-shell model and compared with the ray-tracing method for computation time and accuracy. We have compared the Faraday results obtained with climatological (International Reference Ionosphere and RIBG) and physics-based (Global Assimilation of Ionospheric Measurements) ionospheric models. We also study the impact of limitations in the availability of GPS data on the accuracy of the Faraday rotation calculations.
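
    For orientation, the first-order (thin-shell) Faraday rotation that such corrections target follows a simple closed form, Ω ≈ 2.36×10⁴ · B∥ · TEC / f² in SI units; the sketch below evaluates it for made-up WindSat-like numbers (the raytrace approach refines exactly these ingredients along the actual raypath).

    ```python
    # First-order Faraday rotation estimate (TEC in electrons/m^2, B in tesla, f in Hz).
    import numpy as np

    def faraday_rotation_rad(tec_el_per_m2, b_parallel_tesla, freq_hz):
        return 2.36e4 * b_parallel_tesla * tec_el_per_m2 / freq_hz**2

    # Example: 20 TECU, 45 microtesla parallel field component, 10.7 GHz channel.
    print(np.degrees(faraday_rotation_rad(20e16, 45e-6, 10.7e9)))  # ~0.1 degrees
    ```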

  16. Acceleration algorithm for constant-statistics method applied to the nonuniformity correction of infrared sequences

    NASA Astrophysics Data System (ADS)

    Jara Chavez, A. G.; Torres Vicencio, F. O.

    2015-03-01

    Non-uniformity noise was, is, and will probably remain one of the most undesired companions of infrared focal plane array (IRFPA) data. We present a higher-order filter, an enhancement of Constant Statistics (CS) theory, whose key advantage is its capacity to estimate the detector parameters and thus compensate for fixed-pattern noise. This paper shows a technique to accelerate the convergence of CS (AACS: Acceleration Algorithm for Constant Statistics). The effectiveness of this method is demonstrated by using simulated infrared video sequences and several real infrared video sequences obtained using two infrared cameras.
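
    A bare-bones batch version of constant-statistics correction (the paper's contribution is an accelerated, recursive estimation of these statistics, which is not reproduced here): per-pixel temporal mean and standard deviation give the offset and gain estimates.

    ```python
    # Batch constant-statistics non-uniformity correction for an IR sequence (sketch).
    import numpy as np

    def constant_statistics_nuc(frames):
        """frames: (T, H, W) raw IR sequence -> corrected sequence, same shape."""
        mean = frames.mean(axis=0)
        std = frames.std(axis=0) + 1e-6
        gain = std.mean() / std             # equalize per-pixel responsivity
        offset = mean.mean() - gain * mean  # equalize per-pixel offset
        return gain[None] * frames + offset[None]
    ```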

  17. Phase-correction algorithm of deformed grating images in the depth measurement of weld pool surface in gas tungsten arc welding

    NASA Astrophysics Data System (ADS)

    Wei, Yiqing; Liu, Nansheng; Hu, Xian; Ai, Xiaopu

    2011-05-01

    The principle and system structure of the depth measurement of the weld pool surface in tungsten inert gas (TIG) welding are first introduced in the paper, and then the problem of the common phase lines is studied. We analyze the causes and characteristics of the phase lines and propose a phase correction method based on a line ratio. The paper presents the principle and detailed processing steps of this phase correction algorithm, and then the effectiveness and processing characteristics of the algorithm are verified by simulation. Finally, the algorithm is applied to phase processing in the depth measurement of the TIG weld pool surface and obtains satisfying results.

  18. A Physically Based Algorithm for Non-Blackbody Correction of Cloud-Top Temperature and Application to Convection Study

    NASA Technical Reports Server (NTRS)

    Wang, Chunpeng; Lou, Zhengzhao Johnny; Chen, Xiuhong; Zeng, Xiping; Tao, Wei-Kuo; Huang, Xianglei

    2014-01-01

    Cloud-top temperature (CTT) is an important parameter for convective clouds and is usually different from the 11-micrometer brightness temperature due to non-blackbody effects. This paper presents an algorithm for estimating convective CTT by using simultaneous passive [Moderate Resolution Imaging Spectroradiometer (MODIS)] and active [CloudSat and the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO)] measurements of clouds to correct for the non-blackbody effect. To do this, a weighting function of the MODIS 11-micrometer band is explicitly calculated by feeding cloud hydrometeor profiles from CloudSat and CALIPSO retrievals and temperature and humidity profiles based on ECMWF analyses into a radiative transfer model. Among 16 837 tropical deep convective clouds observed by CloudSat in 2008, the averaged effective emission level (EEL) of the 11-micrometer channel is located at an optical depth of approximately 0.72, with a standard deviation of 0.3. The distance between the EEL and the cloud-top height determined by CloudSat is shown to be related to a parameter called cloud-top fuzziness (CTF), defined as the vertical separation between 230 and 10 dBZ of CloudSat radar reflectivity. On the basis of these findings, a relationship is then developed between the CTF and the difference between the MODIS 11-micrometer brightness temperature and the physical CTT, the latter difference being the non-blackbody correction of CTT. Correction of the non-blackbody effect of CTT is applied to analyze convective cloud-top buoyancy. With this correction, about 70% of the convective cores observed by CloudSat in the height range of 6-10 km have positive buoyancy near cloud top, meaning the clouds are still growing vertically, although their final fate cannot be determined from snapshot observations.

  19. Development of a Multiview Time Domain Imaging Algorithm (MTDI) with a Fermat Correction

    SciTech Connect

    Fisher, K A; Lehman, S K; Chambers, D H

    2004-09-22

    An imaging algorithm is presented based on the standard assumption that the total scattered field can be separated into an elastic component with monopole like dependence and an inertial component with a dipole like dependence. The resulting inversion generates two separate image maps corresponding to the monopole and dipole terms of the forward model. The complexity of imaging flaws and defects in layered elastic media is further compounded by the existence of high contrast gradients in either sound speed and/or density from layer to layer. To compensate for these gradients, we have incorporated Fermat's method of least time into our forward model to determine the appropriate delays between individual source-receiver pairs. Preliminary numerical and experimental results are in good agreement with each other.

  20. Adaptation of flux-corrected transport algorithms for modeling dusty flows

    NASA Astrophysics Data System (ADS)

    Fry, M. A.; Book, D. L.

    Blast wave phenomena include reactive and two-phase flows resulting from the motion of chemical explosion products. When the blast wave interacts with structural surfaces (external discontinuities), multiple reflections and refractions occur from both external and internal discontinuities. The most recent version of the Flux-Corrected Transport (FCT) convective-equation solver has been used in both one and two dimensions to simulate chemical explosive blast waves reflecting from planar structures for yields ranging from 8 lb to 600 tons. One can relate the strength of the second reflected peak to the sharpness of the contact discontinuity, and thus measure the capability to predict all the salient features of the blast wave. The flow patterns obtained reveal four different vortices, two forward and two reversed. Their effect on the motion of tracer particles has been studied in order to determine the motion of (1) HE detonation products and (2) dust scoured up from the ground.

  1. Efficient fast heuristic algorithms for minimum error correction haplotyping from SNP fragments.

    PubMed

    Anaraki, Maryam Pourkamali; Sadeghi, Mehdi

    2014-01-01

    Availability of the complete human genome is a crucial factor for genetic studies exploring possible associations between the genome and complex diseases. A haplotype, as a set of single nucleotide polymorphisms (SNPs) on a single chromosome, is believed to contain promising data for disease association studies, detecting natural positive selection, and locating recombination hotspots. Various computational methods for haplotype reconstruction from aligned fragments of SNPs have already been proposed. This study presents a novel approach to obtain paternal and maternal haplotypes from the SNP fragments based on the minimum error correction (MEC) model. Reconstructing haplotypes under the MEC model is an NP-hard problem. Therefore, our proposed methods employ two fast and accurate clustering techniques as the core of their procedure to efficiently solve this ill-defined problem. The assessment of our approaches on two real benchmark datasets, i.e., ACE and DALY, compared to conventional methods, demonstrates their efficiency and accuracy. PMID:25539847
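
    A toy illustration of the clustering view of MEC haplotyping: fragments (with -1 marking uncovered SNP positions) are alternately assigned to two haplotypes and the haplotypes re-estimated by majority vote, with the MEC score counted at the end. This is a generic sketch, not the specific clustering techniques of the paper.

    ```python
    # Two-cluster assignment of SNP fragments with a minimum-error-correction score.
    import numpy as np

    def mec_cluster(fragments, n_iter=20, seed=0):
        rng = np.random.default_rng(seed)
        n_frag, n_snp = fragments.shape
        assign = rng.integers(0, 2, n_frag)
        for _ in range(n_iter):
            haps = np.zeros((2, n_snp), dtype=int)
            for k in range(2):
                sub = fragments[assign == k]
                covered = sub >= 0
                ones = ((sub == 1) & covered).sum(axis=0)
                haps[k] = (ones > covered.sum(axis=0) / 2).astype(int)  # majority vote
            # Distance = number of covered positions where a fragment disagrees.
            dist = np.stack([((fragments >= 0) & (fragments != h)).sum(axis=1)
                             for h in haps], axis=1)
            assign = dist.argmin(axis=1)
        mec = int(dist[np.arange(n_frag), assign].sum())
        return haps, assign, mec

    frags = np.array([[1, 0, -1, 1],
                      [1, 0, 0, -1],
                      [0, 1, 1, -1],
                      [-1, 1, 1, 0]])
    print(mec_cluster(frags))
    ```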

  2. Stack filter classifiers

    SciTech Connect

    Porter, Reid B; Hush, Don

    2009-01-01

    Just as linear models generalize the sample mean and weighted average, weighted order statistic models generalize the sample median and weighted median. This analogy can be continued informally to generalized additive models in the case of the mean, and Stack Filters in the case of the median. Both of these model classes have been extensively studied for signal and image processing, but it is surprising to find that for pattern classification their treatment has been significantly one-sided. Generalized additive models are now a major tool in pattern classification and many different learning algorithms have been developed to fit model parameters to finite data. However, Stack Filters remain largely confined to signal and image processing, and learning algorithms for classification are yet to be seen. This paper is a step towards Stack Filter Classifiers and it shows that the approach is interesting from both a theoretical and a practical perspective.

  3. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M.; Asma, Evren; Kinahan, Paul E.; De Man, Bruno

    2015-09-01

    For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction, by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.04375 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-square error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction with little or no measurable effect on the PET image. For the four ultra-low dose
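
    The MBIR objective mentioned above, a penalized weighted least squares data term with a total-variation-like penalty, can be written compactly as Phi(x) = (y - Ax)^T W (y - Ax) + beta*TV(x). The sketch below only evaluates such a cost for a toy 2D image; the system matrix A, weights W, and smoothing constant are generic assumptions, not the implementation used in the study.

```python
# Minimal sketch of a PWLS cost with an approximate (smoothed) TV penalty:
#   Phi(x) = (y - A x)^T W (y - A x) + beta * TV_eps(x)
# where A is the CT system matrix, y the sparse-view sinogram, and W the
# statistical weights; eps keeps the TV term differentiable.
import numpy as np

def pwls_tv_cost(x_img, A, y, w, beta, eps=1e-6):
    x = x_img.ravel()
    r = y - A @ x                          # projection-domain residual
    data_term = float(r @ (w * r))         # weighted least squares term
    dx = np.diff(x_img, axis=1)            # horizontal finite differences
    dy = np.diff(x_img, axis=0)            # vertical finite differences
    tv = np.sum(np.sqrt(dx**2 + eps)) + np.sum(np.sqrt(dy**2 + eps))
    return data_term + beta * tv

# Example with a toy 4x4 image and a random "system matrix"
rng = np.random.default_rng(0)
x_img = rng.random((4, 4))
A = rng.random((20, 16))
y = A @ x_img.ravel() + 0.01 * rng.standard_normal(20)
print(pwls_tv_cost(x_img, A, y, w=np.ones(20), beta=0.1))
```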

  4. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms.

    PubMed

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M; Asma, Evren; Kinahan, Paul E; De Man, Bruno

    2015-10-01

    For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction, by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.04375 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-square error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction with little or no measurable effect on the PET image. For the four ultra-low dose levels

  5. A study of the dosimetry of small field photon beams used in intensity-modulated radiation therapy in inhomogeneous media: Monte Carlo simulations and algorithm comparisons and corrections

    NASA Astrophysics Data System (ADS)

    Jones, Andrew Osler

    There is an increasing interest in the use of inhomogeneity corrections for lung, air, and bone in radiotherapy treatment planning. Traditionally, corrections based on physical density have been used. Modern algorithms use the electron density derived from CT images. Small fields are used in both conformal radiotherapy and IMRT; however, their beam characteristics in inhomogeneous media have not been extensively studied. This work compares traditional and modern treatment planning algorithms to Monte Carlo simulations in and near low-density inhomogeneities. Field sizes ranging from 0.5 cm to 5 cm in diameter are projected onto a phantom containing inhomogeneities and depth dose curves are compared. Comparisons of the Dose Perturbation Factors (DPF) are presented as functions of density and field size. Dose Correction Factors (DCF), which scale the algorithms to the Monte Carlo data, are compared for each algorithm. Physical scaling algorithms such as Batho and Equivalent Pathlength (EPL) predict an increase in dose for small fields passing through lung tissue, where Monte Carlo simulations show a sharp dose drop. The physical model-based collapsed cone convolution (CCC) algorithm correctly predicts the dose drop, but does not accurately predict the magnitude. Because the model-based algorithms do not correctly account for the change in backscatter, the dose drop predicted by CCC occurs further downstream compared to that predicted by the Monte Carlo simulations. Beyond the tissue inhomogeneity all of the algorithms studied predict dose distributions in close agreement with Monte Carlo simulations. Dose-volume relationships are important in understanding the effects of radiation to the lung. Dose within the lung is affected by a complex function of beam energy, lung tissue density, and field size. Dose algorithms vary in their abilities to correctly predict the dose to the lung tissue. A thorough analysis of the effects of density and field size on dose to the lung

  6. Noise correction on LANDSAT images using a spline-like algorithm

    NASA Technical Reports Server (NTRS)

    Vijaykumar, N. L. (Principal Investigator); Dias, L. A. V.

    1985-01-01

    Many applications using LANDSAT images face a dilemma: the user needs a certain scene (for example, a flooded region), but that particular image may present interference or noise in the form of horizontal stripes. During automatic analysis, this interference or noise may cause false readings of the region of interest. In order to minimize this interference or noise, many solutions are used, for instance, that of using the average (simple or weighted) values of the neighboring vertical points. In the case of high interference (more than one adjacent line lost) the method of averages may not suit the desired purpose. The solution proposed is to use a spline-like algorithm (weighted splines). This type of interpolation is simple to implement on a computer, fast, uses only four points in each interval, and eliminates the necessity of solving a linear equation system. In the normal mode of operation, the first and second derivatives of the solution function are continuous and determined by data points, as in cubic splines. It is possible, however, to impose the values of the first derivatives, in order to account for sharp boundaries, without increasing the computational effort. Some examples using the proposed method are also shown.
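
    As a rough sketch of this kind of four-point, spline-like repair of lost scan lines (the paper's exact weighted-spline formulation is not reproduced here), each image column can be interpolated across a run of missing rows from the two valid rows on either side of the gap; the function name and example are assumptions.

```python
# Minimal sketch: repair missing (striped) image rows by fitting, per column,
# a cubic through the two valid rows above and the two below the gap -- a
# simple four-point interpolation, assuming at least two valid rows exist on
# each side of the gap.
import numpy as np

def repair_rows(img, missing_rows):
    out = img.astype(float).copy()
    valid = np.array(sorted(set(range(img.shape[0])) - set(missing_rows)))
    for r in missing_rows:
        prev_rows = valid[valid < r][-2:]        # two nearest valid rows before the gap
        next_rows = valid[valid > r][:2]         # two nearest valid rows after the gap
        support = np.concatenate([prev_rows, next_rows])
        for c in range(img.shape[1]):
            coeff = np.polyfit(support, out[support, c], deg=3)
            out[r, c] = np.polyval(coeff, r)
    return out

# Example: a smooth ramp image with two adjacent scan lines lost (set to zero)
img = np.outer(np.arange(10.0), np.ones(5))
img[[4, 5], :] = 0.0
print(repair_rows(img, [4, 5])[[4, 5], :])   # rows are restored close to 4 and 5
```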

  7. Adaptive scene-based correction algorithm for removal of residual fixed pattern noise in microgrid image data

    NASA Astrophysics Data System (ADS)

    Ratliff, Bradley M.; LeMaster, Daniel A.

    2012-06-01

    Pixel-to-pixel response nonuniformity is a common problem that affects nearly all focal plane array sensors. This results in a frame-to-frame fixed pattern noise (FPN) that causes an overall degradation in collected data. FPN is often compensated for through the use of blackbody calibration procedures; however, FPN is a particularly challenging problem because the detector responsivities drift relative to one another in time, requiring that the sensor be recalibrated periodically. The calibration process is obstructive to sensor operation and is therefore only performed at discrete intervals in time. Thus, any drift that occurs between calibrations (along with error in the calibration sources themselves) causes varying levels of residual calibration error to be present in the data at all times. Polarimetric microgrid sensors are particularly sensitive to FPN due to the spatial differencing involved in estimating the Stokes vector images. While many techniques exist in the literature to estimate FPN for conventional video sensors, few have been proposed to address the problem in microgrid imaging sensors. Here we present a scene-based nonuniformity correction technique for microgrid sensors that is able to reduce residual fixed pattern noise while preserving radiometry under a wide range of conditions. The algorithm requires a low number of temporal data samples to estimate the spatial nonuniformity and is computationally efficient. We demonstrate the algorithm's performance using real data from the AFRL PIRATE and University of Arizona LWIR microgrid sensors.

  8. An algorithm to correct 2D near-infrared fluorescence signals using 3D intravascular ultrasound architectural information

    NASA Astrophysics Data System (ADS)

    Mallas, Georgios; Brooks, Dana H.; Rosenthal, Amir; Vinegoni, Claudio; Calfon, Marcella A.; Razansky, R. Nika; Jaffer, Farouc A.; Ntziachristos, Vasilis

    2011-03-01

    Intravascular Near-Infrared Fluorescence (NIRF) imaging is a promising imaging modality to image vessel biology and high-risk plaques in vivo. We have developed a NIRF fiber optic catheter and have demonstrated the ability to image atherosclerotic plaques in vivo, using appropriate NIR fluorescent probes. Our catheter consists of an optical fiber with a 100/140 μm core/clad diameter housed in polyethylene tubing, emitting NIR laser light at a 90 degree angle relative to the fiber's axis. The system utilizes a rotational and a translational motor for true 2D imaging and operates in conjunction with a coaxial intravascular ultrasound (IVUS) device. IVUS datasets provide 3D images of the internal structure of arteries and are used in our system for anatomical mapping. Using the IVUS images, we are building an accurate hybrid fluorescence-IVUS data inversion scheme that takes into account photon propagation through the blood-filled lumen. This hybrid imaging approach can then correct for the non-linear dependence of light intensity on the distance of the fluorescence region from the fiber tip, leading to quantitative imaging. The experimental and algorithmic developments will be presented and the effectiveness of the algorithm showcased with experimental results in both saline and blood-like preparations. The combined structural and molecular information obtained from these two imaging modalities is positioned to enable the accurate diagnosis of biologically high-risk atherosclerotic plaques in the coronary arteries that are responsible for heart attacks.

  9. Baseflow separation based on a meteorology-corrected nonlinear reservoir algorithm in a typical rainy agricultural watershed

    NASA Astrophysics Data System (ADS)

    He, Shengjia; Li, Shuang; Xie, Runting; Lu, Jun

    2016-04-01

    A baseflow separation model called meteorology-corrected nonlinear reservoir algorithm (MNRA) was developed by combining nonlinear reservoir algorithm with a meteorological regression model, in which the effects of meteorological factors on daily baseflow recession were fully expressed. Using MNRA and the monitored data of daily streamflow and meteorological factors (including precipitation, evaporation, wind speed, water vapor pressure and relative humidity) from 2003 to 2012, we determined the daily, monthly, and yearly variations in baseflow from ChangLe River watershed, a typical rainy agricultural watershed in eastern China. Results showed that the estimated annual baseflow of the ChangLe River watershed varied from 18.8 cm (2004) to 61.9 cm (2012) with an average of 35.7 cm, and the baseflow index (the ratio of baseflow to streamflow) varied from 0.58 (2007) to 0.74 (2003) with an average of 0.65. Comparative analysis of different methods showed that the meteorological regression statistical model was a better alternative to the Fourier fitted curve for daily recession parameter estimation. Thus, the reliability and accuracy of the baseflow separation was obviously improved by MNRA, i.e., the Nash-Sutcliffe efficiency increased from 0.90 to 0.98. Compared with the Kalinin's and Eckhardt's recursive digital filter methods, the MNRA approach could usually be more sensitive for baseflow response to precipitation and obtained a higher goodness-of-fit for streamflow recession, especially in the area with high-level shallow groundwater and frequent rain.

  10. SU-E-T-477: An Efficient Dose Correction Algorithm Accounting for Tissue Heterogeneities in LDR Brachytherapy

    SciTech Connect

    Mashouf, S; Lai, P; Karotki, A; Keller, B; Beachey, D; Pignol, J

    2014-06-01

    Purpose: Seed brachytherapy is currently used for adjuvant radiotherapy of early stage prostate and breast cancer patients. The current standard for calculation of dose surrounding the brachytherapy seeds is based on the American Association of Physicists in Medicine Task Group No. 43 (TG-43) formalism, which generates the dose in a homogeneous water medium. Recently, AAPM Task Group No. 186 emphasized the importance of accounting for tissue heterogeneities. This can be done using Monte Carlo (MC) methods, but it requires knowing the source structure and tissue atomic composition accurately. In this work we describe an efficient analytical dose inhomogeneity correction algorithm implemented using the MIM Symphony treatment planning platform to calculate dose distributions in heterogeneous media. Methods: An Inhomogeneity Correction Factor (ICF) is introduced as the ratio of absorbed dose in tissue to that in water medium. ICF is a function of tissue properties and independent of source structure. The ICF is extracted using CT images and the absorbed dose in tissue can then be calculated by multiplying the dose calculated with the TG-43 formalism by the ICF. To evaluate the methodology, we compared our results with Monte Carlo simulations as well as experiments in phantoms with known density and atomic compositions. Results: The dose distributions obtained through applying ICF to the TG-43 protocol agreed very well with those of Monte Carlo simulations as well as experiments in all phantoms. In all cases, the mean relative error was reduced by at least 50% when the ICF correction factor was applied to the TG-43 protocol. Conclusion: We have developed a new analytical dose calculation method which enables personalized dose calculations in heterogeneous media. The advantages over stochastic methods are computational efficiency and the ease of integration into a clinical setting, as detailed source structure and tissue segmentation are not needed. University of Toronto, Natural Sciences and
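
    The core of the correction described above is a voxel-wise multiplication of the TG-43 water dose by the ICF. The sketch below shows only that final step with made-up numbers; how the ICF grid is extracted from CT is not reproduced here.

```python
# Minimal sketch: apply a voxel-wise Inhomogeneity Correction Factor (ICF),
# defined as dose-in-tissue / dose-in-water, to a TG-43 (water-equivalent)
# dose grid: D_tissue = D_TG43 * ICF.
import numpy as np

def correct_dose(dose_tg43, icf):
    return np.asarray(dose_tg43) * np.asarray(icf)

# Example with a toy 2x3 dose grid and an assumed ICF map (values made up)
dose_tg43 = np.array([[1.00, 0.80, 0.60],
                      [0.90, 0.70, 0.50]])   # Gy, TG-43 dose in water
icf = np.array([[1.00, 0.95, 1.10],
                [1.00, 0.92, 1.08]])         # unitless, assumed from CT
print(correct_dose(dose_tg43, icf))
```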

  11. A novel strategy for classifying the output from an in silico vaccine discovery pipeline for eukaryotic pathogens using machine learning algorithms

    PubMed Central

    2013-01-01

    Background An in silico vaccine discovery pipeline for eukaryotic pathogens typically consists of several computational tools to predict protein characteristics. The aim of the in silico approach to discovering subunit vaccines is to use predicted characteristics to identify proteins which are worthy of laboratory investigation. A major challenge is that these predictions are inherent with hidden inaccuracies and contradictions. This study focuses on how to reduce the number of false candidates using machine learning algorithms rather than relying on expensive laboratory validation. Proteins from Toxoplasma gondii, Plasmodium sp., and Caenorhabditis elegans were used as training and test datasets. Results The results show that machine learning algorithms can effectively distinguish expected true from expected false vaccine candidates (with an average sensitivity and specificity of 0.97 and 0.98 respectively), for proteins observed to induce immune responses experimentally. Conclusions Vaccine candidates from an in silico approach can only be truly validated in a laboratory. Given any in silico output and appropriate training data, the number of false candidates allocated for validation can be dramatically reduced using a pool of machine learning algorithms. This will ultimately save time and money in the laboratory. PMID:24180526

  12. Matrix and position correction of shuffler assays by application of the alternating conditional expectation algorithm to shuffler data

    SciTech Connect

    Pickrell, M M; Rinard, P M

    1992-01-01

    The {sup 252}Cf shuffler assays fissile uranium and plutonium using active neutron interrogation and then counting the induced delayed neutrons. Using the shuffler, we conducted over 1700 assays of 55-gal. drums with 28 different matrices and several different fissionable materials. We measured the drums to determine the matrix and position effects on {sup 252}Cf shuffler assays. We used several neutron flux monitors during irradiation and kept statistics on the count rates of individual detector banks. The intent of these measurements was to gauge the effect of the matrix independently from the uranium assay. Although shufflers have previously been equipped with neutron monitors, the functional relationship between the flux monitor signals and the matrix-induced perturbation has been unknown. There are several flux monitors, so the problem is multivariate, and the response is complicated. Conventional regression techniques cannot address complicated multivariate problems unless the underlying functional form and approximate parameter values are known in advance. Neither was available in this case. To address this problem, we used a new technique called alternating conditional expectations (ACE), which requires neither the functional relationship nor the initial parameters. The ACE algorithm develops the functional form and performs a numerical regression from only the empirical data. We applied the ACE algorithm to the shuffler-assay and flux-monitor data and developed an analytic function for the matrix correction. This function was optimized using conventional multivariate techniques. We were able to reduce the matrix-induced-bias error for homogeneous samples to 12.7%. The bias error for inhomogeneous samples was reduced to 13.5%. These results used only a few adjustable parameters compared to the number of available data points; the data were not "overfit," but rather the results are general and robust.

  13. Effects of defect pixel correction algorithms for x-ray detectors on image quality in planar projection and volumetric CT data sets

    NASA Astrophysics Data System (ADS)

    Kuttig, Jan; Steiding, Christian; Hupfer, Martin; Karolczak, Marek; Kolditz, Daniel

    2015-09-01

    In this study we compared various defect pixel correction methods for reducing artifact appearance within projection images used for computed tomography (CT) reconstructions. Defect pixel correction algorithms were examined with respect to their artifact behaviour within planar projection images as well as in volumetric CT reconstructions. We investigated four algorithms: nearest neighbour, linear and adaptive linear interpolation, and a frequency-selective spectral-domain approach. To characterise the quality of each algorithm in planar image data, we inserted line defects of varying widths and orientations into images. The structure preservation of each algorithm was analysed by corrupting and correcting the image of a slit phantom pattern and by evaluating its line spread function (LSF). The noise preservation was assessed by interpolating corrupted flat images and estimating the noise power spectrum (NPS) of the interpolated region. For the volumetric investigations, we examined the structure and noise preservation within a structured aluminium foam, a mid-contrast cone-beam phantom and a homogeneous Polyurethane (PUR) cylinder. The frequency-selective algorithm showed the best structure and noise preservation for planar data of the correction methods tested. For volumetric data it still showed the best noise preservation, whereas the structure preservation was outperformed by the linear interpolation. The frequency-selective spectral-domain approach in the correction of line defects is recommended for planar image data, but its abilities within high-contrast volumes are restricted. In that case, the application of a simple linear interpolation might be the better choice to correct line defects within projection images used for CT.
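
    For the simplest of the correction methods compared above, a defective line can be replaced by linear interpolation between its nearest intact neighbours. The sketch below illustrates this for a vertical defect stripe of arbitrary width; the function name and example are assumptions, not the study's implementation.

```python
# Minimal sketch: replace a defective (dead) stripe of columns in a projection
# image by linear interpolation between the nearest intact columns on either
# side. Assumes the columns immediately outside the stripe are intact.
import numpy as np

def correct_defect_columns(img, col, width=1):
    out = img.astype(float).copy()
    left, right = col - 1, col + width          # nearest intact columns
    for k in range(width):
        t = (k + 1) / (width + 1)               # interpolation weight
        out[:, col + k] = (1 - t) * out[:, left] + t * out[:, right]
    return out

# Example: a 4x6 ramp image with a two-pixel-wide defect at columns 2-3
img = np.tile(np.arange(6.0), (4, 1))
img[:, 2:4] = 0.0
print(correct_defect_columns(img, col=2, width=2)[:, 2:4])   # restored to 2 and 3
```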

  14. Successive Pattern Learning based on Test Feature Classifier and its application to Defect Image Classification

    NASA Astrophysics Data System (ADS)

    Sakata, Yukinobu; Kaneko, Shun'Ichi; Takagi, Yuji; Okuda, Hirohito

    A novel sequential learning algorithm for the Test Feature Classifier (TFC), which is non-parametric and effective even for small data, is proposed for efficiently handling consecutively provided training data. Fundamental characteristics of the sequential learning are examined. In the learning, after recognition of a set of unknown objects, they are fed into the classifier in order to obtain a modified classifier. We propose an efficient algorithm for the reconstruction of prime tests, which are irreducible combinations of features capable of discriminating training patterns into correct classes, formalized for cases of addition and removal of training patterns. Some strategies for the modification of training patterns are investigated with respect to their precision and performance by use of real pattern data. A real-world problem of classification of defects on wafer images has been tackled by the proposed classifier, obtaining excellent performance even with efficient modification strategies.

  15. A two-dimensional, finite-element, flux-corrected transport algorithm for the solution of gas discharge problems

    NASA Astrophysics Data System (ADS)

    Georghiou, G. E.; Morrow, R.; Metaxas, A. C.

    2000-10-01

    An improved finite-element flux-corrected transport (FE-FCT) scheme, which was demonstrated in one dimension by the authors, is now extended to two dimensions and applied to gas discharge problems. The low-order positive ripple-free scheme, required to produce a FCT algorithm, is obtained by introducing diffusion to the high-order scheme (two-step Taylor-Galerkin). A self-adjusting variable diffusion coefficient is introduced, which reduces the high-order scheme to the equivalent of the upwind difference scheme, but without the complexities of an upwind scheme in a finite-element setting. Results are presented which show that the high-order scheme reduces to the equivalent of upwinding when the new diffusion coefficient is used. The proposed FCT scheme is shown to give similar results in comparison to a finite-difference time-split FCT code developed by Boris and Book. Finally, the new method is applied for the first time to a streamer propagation problem in its two-dimensional form.

  16. [A New HAC Unsupervised Classifier Based on Spectral Harmonic Analysis].

    PubMed

    Yang, Ke-ming; Wei, Hua-feng; Shi, Gang-qiang; Sun, Yang-yang; Liu, Fei

    2015-07-01

    Hyperspectral image classification is one of the important methods to identify image information, which has great significance for feature identification, dynamic monitoring and thematic information extraction, etc. Unsupervised classification without prior knowledge is widely used in hyperspectral image classification. This article proposes a new hyperspectral image unsupervised classification algorithm based on harmonic analysis (HA), called the harmonic analysis classifier (HAC). First, the HAC algorithm counts the first harmonic component and draws its histogram, so it can determine the initial feature categories and the cluster-center pixels according to the number and location of the peaks. Then, the algorithm maps the spectral waveform information of the pixels to be classified into a feature space made up of harmonic decomposition times, amplitude and phase, where similar features group together, and these pixels are classified according to the principle of minimum distance. Finally, the algorithm computes the Euclidean distance of these pixels to the cluster centers and merges the initial classes by setting a distance threshold, so the HAC achieves the purpose of hyperspectral image classification. The paper collects spectral curves of two feature categories and obtains harmonic decomposition times, amplitude and phase after harmonic analysis; the distribution of HA components in the feature space verified the correctness of the HAC. The HAC algorithm was then applied to an EO-1 Hyperion hyperspectral image to obtain classification results. Comparing the hyperspectral image classification results of the K-MEANS, ISODATA and HAC classifiers, the HAC, as an unsupervised classification method, is confirmed to perform better for hyperspectral image classification. PMID:26717767
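
    Two of the HAC ingredients described above, mapping a pixel spectrum into a harmonic (amplitude/phase) feature space and assigning it to the nearest cluster centre, are sketched below. The number of harmonics, the synthetic spectra, and all names are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch: (1) harmonic features = amplitude and phase of the first few
# Fourier components of a pixel spectrum; (2) minimum-distance assignment to
# cluster centres in that feature space.
import numpy as np

def harmonic_features(spectrum, n_harmonics=3):
    coeffs = np.fft.rfft(spectrum)                    # harmonic decomposition
    c = coeffs[1:n_harmonics + 1]                     # first few harmonics
    return np.concatenate([np.abs(c), np.angle(c)])   # amplitudes + phases

def classify_pixels(pixels, centres, n_harmonics=3):
    feats = np.array([harmonic_features(p, n_harmonics) for p in pixels])
    cents = np.array([harmonic_features(c, n_harmonics) for c in centres])
    dists = np.linalg.norm(feats[:, None, :] - cents[None, :, :], axis=2)
    return np.argmin(dists, axis=1)                   # minimum-distance rule

# Example: two synthetic endmember spectra and two noisy pixels
rng = np.random.default_rng(0)
bands = np.linspace(0, 1, 32)
veg, soil = np.sin(2 * np.pi * bands), bands
pixels = [veg + 0.05 * rng.standard_normal(32), soil + 0.05 * rng.standard_normal(32)]
print(classify_pixels(pixels, [veg, soil]))           # expected: [0 1]
```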

  17. A Portable Ground-Based Atmospheric Monitoring System (PGAMS) for the Calibration and Validation of Atmospheric Correction Algorithms Applied to Aircraft and Satellite Images

    NASA Technical Reports Server (NTRS)

    Schiller, Stephen; Luvall, Jeffrey C.; Rickman, Doug L.; Arnold, James E. (Technical Monitor)

    2000-01-01

    Detecting changes in the Earth's environment using satellite images of ocean and land surfaces must take into account atmospheric effects. As a result, major programs are underway to develop algorithms for image retrieval of atmospheric aerosol properties and atmospheric correction. However, because of the temporal and spatial variability of atmospheric transmittance it is very difficult to model atmospheric effects and implement models in an operational mode. For this reason, simultaneous in situ ground measurements of atmospheric optical properties are vital to the development of accurate atmospheric correction techniques. Presented in this paper is a spectroradiometer system that provides an optimized set of surface measurements for the calibration and validation of atmospheric correction algorithms. The Portable Ground-based Atmospheric Monitoring System (PGAMS) obtains a comprehensive series of in situ irradiance, radiance, and reflectance measurements for the calibration of atmospheric correction algorithms applied to multispectral and hyperspectral images. The observations include: total downwelling irradiance, diffuse sky irradiance, direct solar irradiance, path radiance in the direction of the north celestial pole, path radiance in the direction of the overflying satellite, almucantar scans of path radiance, full sky radiance maps, and surface reflectance. Each of these parameters is recorded over a wavelength range from 350 to 1050 nm in 512 channels. The system is fast, with the potential to acquire the complete set of observations in only 8 to 10 minutes depending on the selected spatial resolution of the sky path radiance measurements.

  18. Exact fan-beam and 4{pi}-acquisition cone-beam SPECT algorithms with uniform attenuation correction

    SciTech Connect

    Tang Qiulin; Zeng, Gengsheng L.; Wu Jiansheng; Gullberg, Grant T.

    2005-11-15

    This paper presents analytical fan-beam and cone-beam reconstruction algorithms that compensate for uniform attenuation in single photon emission computed tomography. First, a fan-beam algorithm is developed by obtaining a relationship between the two-dimensional (2D) Fourier transform of parallel-beam projections and fan-beam projections. Using this relationship, 2D Fourier transforms of equivalent parallel-beam projection data are obtained from the fan-beam projection data. Then a quasioptimal analytical reconstruction algorithm for uniformly attenuated Radon data, developed by Metz and Pan, is used to reconstruct the image. A cone-beam algorithm is developed by extending the fan-beam algorithm to 4{pi} solid angle geometry. The cone-beam algorithm is also an exact algorithm.

  19. A dose calculation algorithm with correction for proton-nucleus interactions in non-water materials for proton radiotherapy treatment planning

    NASA Astrophysics Data System (ADS)

    Inaniwa, T.; Kanematsu, N.; Sato, S.; Kohno, R.

    2016-01-01

    In treatment planning for proton radiotherapy, the dose measured in water is applied to the patient dose calculation with density scaling by the stopping power ratio ρS. Since the body tissues are chemically different from water, this approximation may cause dose calculation errors, especially due to differences in nuclear interactions. We proposed and validated an algorithm for correcting these errors. The dose in water is decomposed into three constituents according to the physical interactions of protons in water: the dose from primary protons continuously slowing down by electromagnetic interactions, the dose from protons scattered by elastic and/or inelastic interactions, and the dose resulting from nonelastic interactions. The proportions of the three dose constituents differ between body tissues and water. We determined correction factors for the proportion of dose constituents with Monte Carlo simulations in various standard body tissues, and formulated them as functions of their ρS for patient dose calculation. The influence of nuclear interactions on dose was assessed by comparing the Monte Carlo simulated dose and the uncorrected dose in common phantom materials. The influence around the Bragg peak amounted to -6% for polytetrafluoroethylene and 0.3% for polyethylene. The validity of the correction method was confirmed by comparing the simulated and corrected doses in the materials. The deviation was below 0.8% for all materials. The accuracy of the correction factors derived with Monte Carlo simulations was separately verified through irradiation experiments with a 235 MeV proton beam using common phantom materials. The corrected doses agreed with the measurements within 0.4% for all materials except graphite. The influence on tumor dose was assessed in a prostate case. The dose reduction in the tumor was below 0.5%. Our results verify that this algorithm is practical and accurate for proton radiotherapy treatment planning, and
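
    The correction can be summarised as scaling the three dose constituents by factors that depend on the stopping power ratio ρS of the voxel. The sketch below only illustrates that structure; the factor functions are placeholders, not the fitted functions from the paper.

```python
# Minimal sketch: corrected dose = sum of the three water-dose constituents
# (EM slowing-down, elastic/inelastic scatter, nonelastic), each scaled by a
# correction factor that is a function of the stopping-power ratio rho_S.
# The f_* functions below are made-up placeholders.
def corrected_dose(d_em, d_scatter, d_nonelastic, rho_s,
                   f_em=lambda r: 1.0,
                   f_sc=lambda r: 1.0 + 0.02 * (r - 1.0),
                   f_ne=lambda r: 1.0 + 0.10 * (r - 1.0)):
    return (f_em(rho_s) * d_em
            + f_sc(rho_s) * d_scatter
            + f_ne(rho_s) * d_nonelastic)

# Example: a voxel where 90% of the water dose is electromagnetic
print(corrected_dose(d_em=0.90, d_scatter=0.07, d_nonelastic=0.03, rho_s=0.95))
```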

  20. A dose calculation algorithm with correction for proton-nucleus interactions in non-water materials for proton radiotherapy treatment planning.

    PubMed

    Inaniwa, T; Kanematsu, N; Sato, S; Kohno, R

    2016-01-01

    In treatment planning for proton radiotherapy, the dose measured in water is applied to the patient dose calculation with density scaling by the stopping power ratio ρS. Since the body tissues are chemically different from water, this approximation may cause dose calculation errors, especially due to differences in nuclear interactions. We proposed and validated an algorithm for correcting these errors. The dose in water is decomposed into three constituents according to the physical interactions of protons in water: the dose from primary protons continuously slowing down by electromagnetic interactions, the dose from protons scattered by elastic and/or inelastic interactions, and the dose resulting from nonelastic interactions. The proportions of the three dose constituents differ between body tissues and water. We determined correction factors for the proportion of dose constituents with Monte Carlo simulations in various standard body tissues, and formulated them as functions of their ρS for patient dose calculation. The influence of nuclear interactions on dose was assessed by comparing the Monte Carlo simulated dose and the uncorrected dose in common phantom materials. The influence around the Bragg peak amounted to -6% for polytetrafluoroethylene and 0.3% for polyethylene. The validity of the correction method was confirmed by comparing the simulated and corrected doses in the materials. The deviation was below 0.8% for all materials. The accuracy of the correction factors derived with Monte Carlo simulations was separately verified through irradiation experiments with a 235 MeV proton beam using common phantom materials. The corrected doses agreed with the measurements within 0.4% for all materials except graphite. The influence on tumor dose was assessed in a prostate case. The dose reduction in the tumor was below 0.5%. Our results verify that this algorithm is practical and accurate for proton radiotherapy treatment

  1. An investigation of motion correction algorithms for pediatric spinal cord DTI in healthy subjects and patients with spinal cord injury.

    PubMed

    Middleton, Devon M; Mohamed, Feroze B; Barakat, Nadia; Hunter, Louis N; Shellikeri, Sphoorti; Finsterbusch, Jürgen; Faro, Scott H; Shah, Pallav; Samdani, Amer F; Mulcahey, M J

    2014-06-01

    Patient and physiological motion can cause artifacts in DTI of the spinal cord which can impact image quality and diffusion indices. The purpose of this investigation was to determine a reliable motion correction method for pediatric spinal cord DTI and show the effects of motion correction on DTI parameters in healthy subjects and patients with spinal cord injury. Ten healthy subjects and ten subjects with spinal cord injury were scanned using a 3T scanner. Images were acquired with an inner field-of-view DTI sequence covering cervical spine levels C1 to C7. Images were corrected for motion using two types of transformation (rigid and affine) and three cost functions. Corrected images and transformations were examined qualitatively and quantitatively using in-house developed code. Fractional anisotropy (FA) and mean diffusivity (MD) indices were calculated and tested for statistical significance pre- and post-motion correction. Images corrected using rigid methods showed improvements in image quality, while affine methods frequently showed residual distortions in corrected images. Blinded evaluation of pre- and post-correction images showed significant improvement in cord homogeneity and edge conspicuity in corrected images (p<0.0001). The average FA changes were statistically significant (p<0.0001) in the spinal cord injury group, while healthy subjects showed smaller FA changes that were not statistically significant. In both healthy subjects and subjects with spinal cord injury, quantitative and qualitative analysis showed the rigid scaled-least-squares registration technique to be the most reliable and effective in improving image quality. PMID:24629515

  2. Validation of Correction Algorithms for Near-IR Analysis of Human Milk in an Independent Sample Set—Effect of Pasteurization

    PubMed Central

    Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph

    2016-01-01

    Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified. PMID:26927169

  3. Validation of Correction Algorithms for Near-IR Analysis of Human Milk in an Independent Sample Set-Effect of Pasteurization.

    PubMed

    Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph

    2016-03-01

    Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified. PMID:26927169

  4. Noisy Hangul character recognition with fuzzy tree classifier

    NASA Astrophysics Data System (ADS)

    Lee, Seong-Whan

    1992-08-01

    Decision trees have been applied to solve a wide range of pattern recognition problems. In a tree classifier, a sequence of decision rules is used to assign an unknown sample to a pattern class. The main advantage of a decision tree over a single stage classifier is that the complex global decision making process can be divided into a number of simpler and local decisions at different levels of the tree. At each stage of the decision process, the feature subset best suited for that classification task can be selected. It can be shown that this approach provides better results than the use of the best feature subset for a single decision classifier. In addition, in large set problems where the number of classes is very large, the tree classifier can make a global decision much more quickly than the single stage classifier. However, a major weak point of a tree classifier is its error accumulation effect when the number of classes is very large. To overcome this difficulty, a fuzzy tree classifier with the following characteristics is implemented: (1) fuzzy logic search is used to find all "possible correct classes," and some similarity measures are used to determine the "most probable class"; (2) global training is applied to generate extended terminals in order to enhance the recognition rate; (3) both the training and search algorithms have been given a lot of flexibility, to provide tradeoffs between error and rejection rates, and between the recognition rate and speed. Experimental results for the recognition of the 520 most frequently used noisy Hangul character categories revealed a very high recognition rate of 99.8 percent and a very high speed of 100 samples/sec, when the program was written in C and run on a general-purpose SUN4 SPARCstation.

  5. Comparison of 3-D Multi-Lag Cross-Correlation and Speckle Brightness Aberration Correction Algorithms on Static and Moving Targets

    PubMed Central

    Ivancevich, Nikolas M.; Dahl, Jeremy J.; Smith, Stephen W.

    2010-01-01

    Phase correction has the potential to increase the image quality of 3-D ultrasound, especially transcranial ultrasound. We implemented and compared 2 algorithms for aberration correction, multi-lag cross-correlation and speckle brightness, using static and moving targets. We corrected three 75-ns rms electronic aberrators with full-width at half-maximum (FWHM) auto-correlation lengths of 1.35, 2.7, and 5.4 mm. Cross-correlation proved the better algorithm at 2.7 and 5.4 mm correlation lengths (P < 0.05). Static cross-correlation performed better than moving-target cross-correlation at the 2.7 mm correlation length (P < 0.05). Finally, we compared the static and moving-target cross-correlation on a flow phantom with a skull casting aberrator. Using signal from static targets, the correction resulted in an average contrast increase of 22.2%, compared with 13.2% using signal from moving targets. The contrast-to-noise ratio (CNR) increased by 20.5% and 12.8% using static and moving targets, respectively. Doppler signal strength increased by 5.6% and 4.9% for the static and moving-targets methods, respectively. PMID:19942503
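
    The cross-correlation step that underlies the multi-lag method can be sketched as estimating the relative arrival-time error between neighbouring element signals from the peak of their cross-correlation and accumulating the lags across the aperture. The sub-sample refinement, names, and example signals below are assumptions, not the authors' implementation.

```python
# Minimal sketch: per-pair delay from the cross-correlation peak (with a
# parabolic sub-sample refinement), then cumulative sum across the aperture
# to form a per-element arrival-time (aberration) profile.
import numpy as np

def pairwise_delay(sig_a, sig_b, fs):
    xc = np.correlate(sig_a, sig_b, mode="full")
    k = int(np.argmax(xc))
    if 0 < k < len(xc) - 1:                       # parabolic peak refinement
        y0, y1, y2 = xc[k - 1], xc[k], xc[k + 1]
        k = k + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    lag_samples = k - (len(sig_b) - 1)
    return lag_samples / fs

def aberration_profile(signals, fs):
    delays = [pairwise_delay(signals[i], signals[i + 1], fs)
              for i in range(len(signals) - 1)]
    return np.concatenate([[0.0], np.cumsum(delays)])

# Example: three copies of a pulse shifted by one sample at fs = 40 MHz
fs = 40e6
pulse = np.sin(2 * np.pi * 5e6 * np.arange(64) / fs) * np.hanning(64)
signals = [np.roll(pulse, s) for s in (0, 1, 2)]
print(aberration_profile(signals, fs) * 1e9, "ns")   # relative delays in ns
```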

  6. Parallelizable flood fill algorithm and corrective interface tracking approach applied to the simulation of multiple finite size bubbles merging with a free surface

    NASA Astrophysics Data System (ADS)

    Lafferty, Nathan; Badreddine, Hassan; Niceno, Bojan; Prasser, Horst-Michael

    2015-11-01

    A parallelizable flood fill algorithm is developed for identifying and tracking closed regions of fluids, dispersed phases, in CFD simulations of multiphase flows. It is used in conjunction with a newly developed method, corrective interface tracking, for simulating finite size dispersed bubbly flows in which the bubbles are too small relative to the grid to be simulated accurately with interface tracking techniques and too large relative to the grid for Lagrangian particle tracking techniques. The latter situation arises if local bubble-induced turbulence is resolved, or modeled with LES. With corrective interface tracking the governing equations are solved on a static Eulerian grid. A correcting force, derived from empirical correlation based hydrodynamic forces, is applied to the bubble, which is then advected using interface tracking techniques. This method results in accurate fluid-gas two-way coupling, bubble shapes, and terminal rise velocities. The flood fill algorithm and corrective interface tracking technique are applied to an air/water simulation of multiple bubbles rising and merging with a free surface. They are then validated against the same simulation performed using only interface tracking with a much finer grid.
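
    The serial core of a flood fill of the kind described above can be sketched as a breadth-first labelling of connected gas cells on a Cartesian grid; the parallelization across subdomains and the coupling to the flow solver are not shown, and the names and example mask are assumptions.

```python
# Minimal sketch: label closed gas regions (bubbles) on a 2D grid. Starting
# from each unvisited gas cell, a breadth-first flood fill gives every
# 4-connected gas cell the same region id.
from collections import deque
import numpy as np

def label_regions(gas_mask):
    gas = np.asarray(gas_mask, dtype=bool)
    labels = np.zeros(gas.shape, dtype=int)
    next_label = 0
    for i in range(gas.shape[0]):
        for j in range(gas.shape[1]):
            if gas[i, j] and labels[i, j] == 0:
                next_label += 1
                labels[i, j] = next_label
                queue = deque([(i, j)])
                while queue:
                    r, c = queue.popleft()
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if (0 <= rr < gas.shape[0] and 0 <= cc < gas.shape[1]
                                and gas[rr, cc] and labels[rr, cc] == 0):
                            labels[rr, cc] = next_label
                            queue.append((rr, cc))
    return labels

# Example: two separate bubbles in a liquid domain (1 = gas, 0 = liquid)
mask = np.array([[0, 1, 1, 0, 0, 0],
                 [0, 1, 1, 0, 0, 1],
                 [0, 0, 0, 0, 1, 1],
                 [0, 0, 0, 0, 0, 0]])
print(label_regions(mask))
```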

  7. Quadrupole Alignment and Trajectory Correction for Future Linear Colliders: SLC Tests of a Dispersion-Free Steering Algorithm

    SciTech Connect

    Assmann, R

    2004-06-08

    The feasibility of future linear colliders depends on achieving very tight alignment and steering tolerances. All proposals (NLC, JLC, CLIC, TESLA and S-BAND) currently require a total emittance growth in the main linac of less than 30-100% [1]. This should be compared with a 100% emittance growth in the much smaller SLC linac [2]. Major advances in alignment and beam steering techniques beyond those used in the SLC are necessary for the next generation of linear colliders. In this paper, we present an experimental study of quadrupole alignment with a dispersion-free steering algorithm. A closely related method (wakefield-free steering) takes into account wakefield effects [3]. However, this method can not be studied at the SLC. The requirements for future linear colliders lead to new and unconventional ideas about alignment and beam steering. For example, no dipole correctors are foreseen for the standard trajectory correction in the NLC [4]; beam steering will be done by moving the quadrupole positions with magnet movers. This illustrates the close symbiosis between alignment, beam steering and beam dynamics that will emerge. It is no longer possible to consider the accelerator alignment as static with only a few surveys and realignments per year. The alignment in future linear colliders will be a dynamic process in which the whole linac, with thousands of beam-line elements, is aligned in a few hours or minutes, while the required accuracy of about 5 μm for the NLC quadrupole alignment [4] is a factor of 20 higher than in existing accelerators. The major task in alignment and steering is the accurate determination of the optimum beam-line position. Ideally one would like all elements to be aligned along a straight line. However, this is not practical. Instead a "smooth curve" is acceptable as long as its wavelength is much longer than the betatron wavelength of the accelerated beam. Conventional alignment methods are limited in accuracy by errors in the survey

  8. Bilayer segmentation of webcam videos using tree-based classifiers.

    PubMed

    Yin, Pei; Criminisi, Antonio; Winn, John; Essa, Irfan

    2011-01-01

    This paper presents an automatic segmentation algorithm for video frames captured by a (monocular) webcam that closely approximates depth segmentation from a stereo camera. The frames are segmented into foreground and background layers that comprise a subject (participant) and other objects and individuals. The algorithm produces correct segmentations even in the presence of large background motion with a nearly stationary foreground. This research makes three key contributions: First, we introduce a novel motion representation, referred to as "motons," inspired by research in object recognition. Second, we propose estimating the segmentation likelihood from the spatial context of motion. The estimation is efficiently learned by random forests. Third, we introduce a general taxonomy of tree-based classifiers that facilitates both theoretical and experimental comparisons of several known classification algorithms and generates new ones. In our bilayer segmentation algorithm, diverse visual cues such as motion, motion context, color, contrast, and spatial priors are fused by means of a conditional random field (CRF) model. Segmentation is then achieved by binary min-cut. Experiments on many sequences of our videochat application demonstrate that our algorithm, which requires no initialization, is effective in a variety of scenes, and the segmentation results are comparable to those obtained by stereo systems. PMID:21088317

  9. Correction of Faulty Sensors in Phased Array Radars Using Symmetrical Sensor Failure Technique and Cultural Algorithm with Differential Evolution

    PubMed Central

    Khan, S. U.; Qureshi, I. M.; Zaman, F.; Shoaib, B.; Naveed, A.; Basit, A.

    2014-01-01

    Three issues regarding sensor failure at any position in the antenna array are discussed. We assume that the sensor position is known. The issues include a rise in sidelobe levels, displacement of nulls from their original positions, and diminishing of null depth. The required null depth is achieved by making the weight of the symmetrical complement sensor passive. A hybrid method based on a memetic computing algorithm is proposed. The hybrid method combines the cultural algorithm with differential evolution (CADE), which is used for the reduction of sidelobe levels and placement of nulls at their original positions. A fitness function is used to minimize the error between the desired and estimated beam patterns along with null constraints. Simulation results for various scenarios have been given to exhibit the validity and performance of the proposed algorithm. PMID:24688440

  10. Practical Atmospheric Correction Algorithms for a Multi-Spectral Sensor From the Visible Through the Thermal Spectral Regions

    SciTech Connect

    Borel, C.C.; Villeneuve, P.V.; Clodium, W.B.; Szymenski, J.J.; Davis, A.B.

    1999-04-04

    Deriving information about the Earth's surface requires atmospheric corrections of the measured top-of-the-atmosphere radiances. One possible path is to use atmospheric radiative transfer codes to predict how the radiance leaving the ground is affected by the scattering and attenuation. In practice the atmosphere is usually not well known and thus it is necessary to use more practical methods. The authors will describe how to find dark surfaces, estimate the atmospheric optical depth, estimate path radiance, and identify thick clouds using thresholds on reflectance, NDVI, and columnar water vapor. The authors describe a simple method to correct a visible channel contaminated by thin cirrus clouds.
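
    Threshold tests of the kind described above can be sketched with a few lines of array arithmetic: a bright, spectrally flat pixel is flagged as thick cloud, and a very dark pixel is kept as a candidate dark target for aerosol estimation. The threshold values below are illustrative assumptions, not the authors' tuned values.

```python
# Minimal sketch: flag thick clouds (high red reflectance, low NDVI) and dark
# surfaces (very low red reflectance) from top-of-atmosphere reflectances.
import numpy as np

def classify_pixels(red, nir, cloud_refl=0.3, cloud_ndvi=0.2, dark_refl=0.04):
    ndvi = (nir - red) / (nir + red + 1e-12)
    cloud = (red > cloud_refl) & (ndvi < cloud_ndvi)   # bright and spectrally flat
    dark = red < dark_refl                             # candidate dark targets
    return cloud, dark

# Example: three pixels -- vegetation, thick cloud, dark water
red = np.array([0.05, 0.45, 0.02])
nir = np.array([0.40, 0.48, 0.01])
print(classify_pixels(red, nir))
```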

  11. Converting local spectral and spatial information from a priori classifiers into contextual knowledge for impervious surface classification

    NASA Astrophysics Data System (ADS)

    Luo, Li; Mountrakis, Giorgos

    2011-09-01

    A classification model was demonstrated that explored spectral and spatial contextual information from previously classified neighbors to improve classification of remaining unclassified pixels. The classification was composed of two major steps, the a priori and the a posteriori classifications. The a priori algorithm classified the less difficult image portion. The a posteriori classifier operated on the more challenging image parts and strived to enhance accuracy by converting classified information from the a priori process into specific knowledge. The novelty of this work lies in the substitution of image-wide information with local spectral representations and spatial correlations, in essence classifying each pixel using exclusively neighboring behavior. Furthermore, the a posteriori classifier is a simple and intuitive algorithm, adjusted to perform in a localized setting for the task requirements. A 2001 and a 2006 Landsat scene from Central New York were used to assess the performance on an impervious classification task. The proposed method was compared with a back propagation neural network. Kappa statistic values in the corresponding applicable datasets increased from 18.67 to 24.05 for the 2006 scene, and from 22.92 to 35.76 for the 2001 scene classification, mostly correcting misclassifications between impervious and soil pixels. This finding suggests that simple classifiers have the ability to surpass complex classifiers through incorporation of partial results and an elegant multi-process framework.

  12. Analysis of vegetation by the application of a physically-based atmospheric correction algorithm to OLI data: a case study of Leonessa Municipality, Italy

    NASA Astrophysics Data System (ADS)

    Mei, Alessandro; Manzo, Ciro; Petracchini, Francesco; Bassani, Cristiana

    2016-04-01

    Remote sensing techniques allow the estimation of vegetation parameters over large areas for forest health evaluation and biomass estimation. Moreover, the parametrization of specific indices such as the Normalized Difference Vegetation Index (NDVI) allows the study of biogeochemical cycles and radiative energy transfer processes between soil/vegetation and the atmosphere. This paper focuses on the evaluation of vegetation cover in the Leonessa Municipality, Latium Region (Italy) using 2015 Landsat 8 data and applying the OLI@CRI (OLI ATmospherically Corrected Reflectance Imagery) algorithm developed following the procedure described in Bassani et al. 2015. The OLI@CRI is based on the 6SV radiative transfer model (Kotchenova et al., 2006), able to simulate the radiative field in the atmosphere-earth coupled system. NDVI was derived from the OLI corrected image. This index, widely used for biomass estimation and vegetation cover analysis, considers the sensor channels falling in the near-infrared and red spectral regions, which are sensitive to chlorophyll absorption and cell structure. The retrieved product was then spatially resampled at the MODIS image resolution and validated against the NDVI of MODIS taken as reference. The physically-based OLI@CRI algorithm also provides the incident solar radiation at the ground at the acquisition time through 6SV simulation. Thus, the OLI@CRI algorithm completes the remote sensing dataset required for a comprehensive analysis of sub-regional biomass production by using data from a new-generation remote sensing sensor and an atmospheric radiative transfer model. If the OLI@CRI algorithm is applied to a temporal series of OLI data, the influence of the solar radiation on the above-ground vegetation can be analysed as well as the vegetation index variation.
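
    The validation step described above, comparing OLI-derived NDVI against MODIS NDVI, requires aggregating the fine-resolution product onto the coarser grid. The sketch below shows NDVI computation from corrected red/NIR reflectance and a simple block-average resampling; the band choice, block size, and names are assumptions.

```python
# Minimal sketch: NDVI from atmospherically corrected red/NIR reflectance,
# then block-averaged to a coarser grid so it can be compared pixel by pixel
# with a reference product such as MODIS NDVI.
import numpy as np

def ndvi(red, nir):
    return (nir - red) / (nir + red + 1e-12)

def block_average(arr, factor):
    h = (arr.shape[0] // factor) * factor
    w = (arr.shape[1] // factor) * factor
    return arr[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Example: a 60x60 OLI-like NDVI field aggregated by a factor of 8 (~30 m -> ~240 m)
rng = np.random.default_rng(1)
red = 0.05 + 0.02 * rng.random((60, 60))
nir = 0.40 + 0.05 * rng.random((60, 60))
ndvi_coarse = block_average(ndvi(red, nir), 8)
print(ndvi_coarse.shape)   # this grid would be compared against MODIS NDVI
```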

  13. An algorithm for estimation and correction of anisotropic magnification distortion of cryo-EM images without need of pre-calibration.

    PubMed

    Yu, Guimei; Li, Kunpeng; Liu, Yue; Chen, Zhenguo; Wang, Zhiqing; Yan, Rui; Klose, Thomas; Tang, Liang; Jiang, Wen

    2016-08-01

    Anisotropic magnification distortion of TEM images (mainly the elliptic distortion) has recently been found to be a potential resolution-limiting factor in single particle 3-D reconstruction. Elliptic distortions of ∼1-3% have been reported for multiple microscopes under low magnification settings (e.g., 18,000×), which significantly limited the achievable resolution of single particle 3-D reconstruction, especially for large particles. Here we report a generic algorithm that formulates the distortion correction problem as a generalized 2-D alignment task and estimates the distortion parameters directly from the particle images. Unlike the present pre-calibration methods, our computational method is applicable to all datasets collected at a broad range of magnifications using any microscope without need of additional experimental measurements. Moreover, the per-micrograph and/or per-particle level elliptic distortion estimation in our method could resolve potential distortion variations within a cryo-EM dataset, and further improve the 3-D reconstructions relative to constant-value correction by the pre-calibration methods. With successful applications to multiple datasets and cross-validation with the pre-calibration method, we have demonstrated the validity and robustness of our algorithm in estimating the distortion; correction of the elliptic distortion significantly improved the achievable resolution by a factor of ∼1-3 and enabled 3-D reconstructions of multiple viral structures at 2.4-2.6 Å resolutions. The resolution limits with elliptic distortion and the amounts of resolution improvement with distortion correction were found to strongly correlate with the product of the particle size and the amount of distortion, which can help assess whether elliptic distortion is a major resolution-limiting factor for single particle cryo-EM projects. PMID:27270241

  14. ADMIRE: a locally adaptive single-image, non-uniformity correction and denoising algorithm: application to uncooled IR camera

    NASA Astrophysics Data System (ADS)

    Tendero, Y.; Gilles, J.

    2012-06-01

    We propose a new way to correct for the non-uniformity (NU) and the noise in uncooled infrared-type images. This method works on static images, needs no registration, no camera motion and no model for the non-uniformity. The proposed method uses a hybrid scheme including an automatic locally-adaptive contrast adjustment and a state-of-the-art image denoising method. It corrects for a fully non-linear NU and the noise efficiently using only one image. We compared it with total variation on real raw and simulated NU infrared images. The strength of this approach lies in its simplicity and low computational cost. It needs no test pattern or calibration and produces no "ghost artefact".

  15. On the linearity of the SWP camera of the international ultraviolet explorer /IUE/ - A correction algorithm. [for Short Wavelength Prime low resolution spectral images

    NASA Technical Reports Server (NTRS)

    Holm, A.; Schiffer, F. H.; Bohlin, R. C.; Cassatella, A.; Ponz, D. P.

    1982-01-01

    An algorithm is presented for correcting IUE low resolution spectral images obtained with the SWP camera for some of the non-linearity effects reported by Bohlin et al. (1980). The non-linearity problem, which affects SWP images processed at Goddard Space Flight Center in the period May 22, 1978 to July 7, 1979 and at VILSPA in the period June 14, 1978 to August 6, 1979, was essentially due to the use of an Intensity Transfer Function (ITF) that erroneously included a blank image in the 20 percent exposure level. The correction algorithm described here was adopted by the three IUE Agencies in November 1979 as being suitable for most IUE users. Its advantages are that it is applicable to any kind of low resolution SWP spectrum, introduces errors which are usually smaller than the intrinsic photometric errors, and is simple to apply. The results obtained by applying the method to a representative set of spectra of both point and extended sources are reported. In addition, a new evaluation of the linearity and reproducibility of the SWP spectral data is provided, based on the improved ITF.

  16. Performance evaluation of blind steganalysis classifiers

    NASA Astrophysics Data System (ADS)

    Hogan, Mark T.; Silvestre, Guenole C. M.; Hurley, Neil J.

    2004-06-01

    Steganalysis is the art of detecting and/or decoding secret messages embedded in multimedia contents. The topic has received considerable attention in recent years due to the malicious use of multimedia documents for covert communication. Steganalysis algorithms can be classified as either blind or non-blind depending on whether or not the method assumes knowledge of the embedding algorithm. In general, blind methods involve the extraction of a feature vector that is sensitive to embedding and is subsequently used to train a classifier. This classifier can then be used to determine the presence of a stego-object, subject to an acceptable probability of false alarm. In this work, the performance of three classifiers, namely Fisher linear discriminant (FLD), neural network (NN) and support vector machines (SVM), is compared using a recently proposed feature extraction technique. It is shown that the NN and SVM classifiers exhibit similar performance exceeding that of the FLD. However, steganographers may be able to circumvent such steganalysis algorithms by preserving the statistical transparency of the feature vector at the embedding. This motivates the use of classification algorithms based on the entire document. Such a strategy is applied using SVM classification for DCT, FFT and DWT representations of an image. The performance is compared to a feature extraction technique.
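
    The FLD/NN/SVM comparison described above can be reproduced in outline with standard library classifiers; in this sketch the paper's feature extraction step is replaced by synthetic feature vectors, so the numbers are meaningless and only the evaluation pattern is illustrated.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for steganalysis feature vectors (cover vs. stego): synthetic data
X, y = make_classification(n_samples=2000, n_features=30, n_informative=10,
                           random_state=0)

classifiers = {
    "FLD": LinearDiscriminantAnalysis(),
    "NN":  make_pipeline(StandardScaler(),
                         MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                                       random_state=0)),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```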

  17. Improvement of Image Quality and Diagnostic Performance by an Innovative Motion-Correction Algorithm for Prospectively ECG Triggered Coronary CT Angiography

    PubMed Central

    Lu, Bin; Yan, Hong-Bing; Mu, Chao-Wei; Gao, Yang; Hou, Zhi-Hui; Wang, Zhi-Qiang; Liu, Kun; Parinella, Ashley H.; Leipsic, Jonathon A.

    2015-01-01

    Objective To investigate the effect of a novel motion-correction algorithm (SnapShot Freeze, SSF) on image quality and diagnostic accuracy in patients undergoing prospectively ECG-triggered CCTA without administering rate-lowering medications. Materials and Methods Forty-six consecutive patients suspected of CAD prospectively underwent CCTA using prospective ECG-triggering without rate control and invasive coronary angiography (ICA). Image quality, interpretability, and diagnostic performance of SSF were compared with conventional multisegment reconstruction without SSF, using ICA as the reference standard. Results All subjects (35 men, 57.6 ± 8.9 years) successfully underwent ICA and CCTA. Mean heart rate was 68.8 ± 8.4 beats/min (range: 50–88 beats/min) without rate-controlling medications during CT scanning. The overall median image quality score (graded 1–4) was significantly increased from 3.0 to 4.0 by the new algorithm in comparison to conventional reconstruction. Overall interpretability was significantly improved, with a significant reduction in the number of non-diagnostic segments (690 of 694, 99.4% vs 659 of 694, 94.9%; P<0.001). However, only the right coronary artery (RCA) showed a statistically significant difference (45 of 46, 97.8% vs 35 of 46, 76.1%; P = 0.004) on a per-vessel basis in this regard. Diagnostic accuracy for detecting ≥50% stenosis was improved using the motion-correction algorithm on per-vessel [96.2% (177/184) vs 87.0% (160/184); P = 0.002] and per-segment [96.1% (667/694) vs 86.6% (601/694); P <0.001] levels, but there was not a statistically significant improvement on a per-patient level [97.8% (45/46) vs 89.1% (41/46); P = 0.203]. In the per-artery analysis, diagnostic accuracy was improved only for the RCA [97.8% (45/46) vs 78.3% (36/46); P = 0.007]. Conclusion The intracycle motion-correction algorithm significantly improved image quality and diagnostic interpretability in patients undergoing CCTA with prospective ECG triggering and without rate-controlling medications.

  18. Classifying Southern Hemisphere extratropical cyclones

    NASA Astrophysics Data System (ADS)

    Catto, Jennifer

    2015-04-01

    There is a wide variety of flavours of extratropical cyclones in the Southern Hemisphere, with differing structures and lifecycles. Previous studies have classified these manually using upper level flow features or satellite data. In order to be able to evaluate climate models and understand how extratropical cyclones might change in the future, we need to be able to use an automated method to classify cyclones. Extratropical cyclones have been identified in the Southern Hemisphere from the ERA-Interim reanalysis dataset with a commonly used identification and tracking algorithm that employs 850hPa relative vorticity. A clustering method applied to large-scale fields from ERA-Interim at the time of cyclone genesis (when the cyclone is first identified), has been used to objectively classify these cyclones in the Southern Hemisphere. This simple method is able to separate the cyclones into classes with quite different development mechanisms and lifecycle characteristics. Some of the classes seem to coincide with previous manual classifications on shorter timescales, showing their utility for climate model evaluation and climate change studies.
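
    The abstract does not name the clustering algorithm, so the sketch below uses k-means on flattened large-scale fields sampled at genesis time purely to illustrate the objective-classification idea; the field, patch size, and number of classes are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical data: for each cyclone genesis event, a small lat/lon patch of a
# large-scale field (e.g. a temperature anomaly) centred on the cyclone.
rng = np.random.default_rng(0)
n_cyclones, ny, nx = 500, 21, 21
patches = rng.normal(size=(n_cyclones, ny, nx))

X = patches.reshape(n_cyclones, -1)
X = StandardScaler().fit_transform(X)
X = PCA(n_components=20, random_state=0).fit_transform(X)  # compress before clustering

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(labels))   # cyclone count per class
```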

  19. Dimensionality Reduction Through Classifier Ensembles

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.; Tumer, Kagan; Norvig, Peter (Technical Monitor)

    1999-01-01

    In data mining, one often needs to analyze datasets with a very large number of attributes. Performing machine learning directly on such data sets is often impractical because of extensive run times, excessive complexity of the fitted model (often leading to overfitting), and the well-known "curse of dimensionality." In practice, to avoid such problems, feature selection and/or extraction are often used to reduce data dimensionality prior to the learning step. However, existing feature selection/extraction algorithms either evaluate features by their effectiveness across the entire data set or simply disregard class information altogether (e.g., principal component analysis). Furthermore, feature extraction algorithms such as principal components analysis create new features that are often meaningless to human users. In this article, we present input decimation, a method that provides "feature subsets" that are selected for their ability to discriminate among the classes. These features are subsequently used in ensembles of classifiers, yielding results superior to single classifiers, ensembles that use the full set of features, and ensembles based on principal component analysis on both real and synthetic datasets.
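
    Input decimation, as described, selects for each class the features that best discriminate that class and trains one ensemble member per subset. The sketch below implements a simplified version of that idea; the correlation-based ranking and the soft-vote combination are assumptions rather than the paper's exact procedure.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic multi-class data standing in for a high-dimensional data-mining task
X, y = make_classification(n_samples=1500, n_features=60, n_informative=15,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

def top_features_for_class(X, y, cls, k):
    """Rank features by |correlation| with the one-vs-rest indicator of `cls`."""
    ind = (y == cls).astype(float)
    corr = np.array([abs(np.corrcoef(X[:, j], ind)[0, 1]) for j in range(X.shape[1])])
    return np.argsort(corr)[::-1][:k]

k = 15
subsets = {c: top_features_for_class(Xtr, ytr, c, k) for c in np.unique(ytr)}
members = {c: LogisticRegression(max_iter=1000).fit(Xtr[:, f], ytr)
           for c, f in subsets.items()}

# Soft-vote ensemble: average the class-probability estimates of the members
proba = np.mean([members[c].predict_proba(Xte[:, subsets[c]]) for c in members], axis=0)
accuracy = float(np.mean(proba.argmax(axis=1) == yte))
print(f"input-decimation-style ensemble accuracy: {accuracy:.3f}")
```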

  20. Integrating heterogeneous classifier ensembles for EMG signal decomposition based on classifier agreement.

    PubMed

    Rasheed, Sarbast; Stashuk, Daniel W; Kamel, Mohamed S

    2010-05-01

    In this paper, we present a design methodology for integrating heterogeneous classifier ensembles by employing a diversity-based hybrid classifier fusion approach, whose aggregator module consists of two classifier combiners, to achieve an improved classification performance for motor unit potential classification during electromyographic (EMG) signal decomposition. Following the so-called overproduce and choose strategy to classifier ensemble combination, the developed system allows the construction of a large set of base classifiers, and then automatically chooses subsets of classifiers to form candidate classifier ensembles for each combiner. The system exploits the kappa statistic diversity measure to design classifier teams through estimating the level of agreement between base classifier outputs. The pool of base classifiers consists of different kinds of classifiers: the adaptive certainty-based, the adaptive fuzzy k-NN, and the adaptive matched template filter classifiers; and utilizes different types of features. Performance of the developed system was evaluated using real and simulated EMG signals, and was compared with the performance of the constituent base classifiers. Across the EMG signal datasets used, the developed system had better average classification performance overall, especially in terms of reducing classification errors. For simulated signals of varying intensity, the developed system had an average correct classification rate CCr of 93.8% and an error rate Er of 2.2% compared to 93.6% and 3.2%, respectively, for the best base classifier in the ensemble. For simulated signals with varying amounts of shape and/or firing pattern variability, the developed system had a CCr of 89.1% with an Er of 4.7% compared to 86.3% and 5.6%, respectively, for the best classifier. For real signals, the developed system had a CCr of 89.4% with an Er of 3.9% compared to 84.6% and 7.1%, respectively, for the best classifier. PMID:19171524
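
    The team design rests on measuring pairwise agreement between base classifier outputs with the kappa statistic. A minimal sketch of standard Cohen's kappa between two classifiers' label vectors, which is assumed here to be the agreement measure intended:

```python
import numpy as np

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two classifiers' output labels."""
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    classes = np.union1d(a, b)
    idx = {c: i for i, c in enumerate(classes)}
    n = len(a)
    # Agreement (confusion) matrix between the two classifiers
    m = np.zeros((len(classes), len(classes)))
    for x, y in zip(a, b):
        m[idx[x], idx[y]] += 1
    po = np.trace(m) / n                               # observed agreement
    pe = np.sum(m.sum(axis=0) * m.sum(axis=1)) / n**2  # agreement expected by chance
    return (po - pe) / (1 - pe)

# Two hypothetical base classifiers' decisions on the same 10 motor unit potentials
c1 = [0, 1, 1, 2, 0, 1, 2, 2, 0, 1]
c2 = [0, 1, 2, 2, 0, 1, 2, 1, 0, 1]
print(round(cohens_kappa(c1, c2), 3))   # lower kappa -> more diverse pair
```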

  1. Dynamic system classifier

    NASA Astrophysics Data System (ADS)

    Pumpe, Daniel; Greiner, Maksim; Müller, Ewald; Enßlin, Torsten A.

    2016-07-01

    Stochastic differential equations describe many physical, biological, and sociological systems well, despite the simplifications often made in their derivation. Here the usage of simple stochastic differential equations to characterize and classify complex dynamical systems is proposed within a Bayesian framework. To this end, we develop a dynamic system classifier (DSC). The DSC first abstracts training data of a system in terms of time-dependent coefficients of the descriptive stochastic differential equation. Thereby the DSC identifies unique correlation structures within the training data. For definiteness we restrict the presentation of the DSC to oscillation processes with a time-dependent frequency ω(t) and damping factor γ(t). Although real systems might be more complex, this simple oscillator captures many characteristic features. The ω and γ time lines represent the abstract system characterization and permit the construction of efficient signal classifiers. Numerical experiments show that such classifiers perform well even in the low signal-to-noise regime.

  2. A generalised background correction algorithm for a Halo Doppler lidar and its application to data from Finland

    DOE PAGESBeta

    Manninen, Antti J.; O'Connor, Ewan J.; Vakkari, Ville; Petäjä, Tuukka

    2016-03-03

    Current commercially available Doppler lidars provide an economical and robust solution for measuring vertical and horizontal wind velocities, together with the ability to provide co- and cross-polarised backscatter profiles. The high temporal resolution of these instruments allows turbulent properties to be obtained from studying the variation in radial velocities. However, the instrument specifications mean that certain characteristics, especially the background noise behaviour, become a limiting factor for the instrument sensitivity in regions where the aerosol load is low. Turbulent calculations require an accurate estimate of the contribution from velocity uncertainty estimates, which are directly related to the signal-to-noise ratio. Any bias in the signal-to-noise ratio will propagate through as a bias in turbulent properties. In this paper we present a method to correct for artefacts in the background noise behaviour of commercially available Doppler lidars and reduce the signal-to-noise ratio threshold used to discriminate between noise, and cloud or aerosol signals. Lastly, we show that, for Doppler lidars operating continuously at a number of locations in Finland, the data availability can be increased by as much as 50 % after performing this background correction and subsequent reduction in the threshold. The reduction in bias also greatly improves subsequent calculations of turbulent properties in weak signal regimes.

  3. A generalised background correction algorithm for a Halo Doppler lidar and its application to data from Finland

    NASA Astrophysics Data System (ADS)

    Manninen, Antti J.; O'Connor, Ewan J.; Vakkari, Ville; Petäjä, Tuukka

    2016-03-01

    Current commercially available Doppler lidars provide an economical and robust solution for measuring vertical and horizontal wind velocities, together with the ability to provide co- and cross-polarised backscatter profiles. The high temporal resolution of these instruments allows turbulent properties to be obtained from studying the variation in radial velocities. However, the instrument specifications mean that certain characteristics, especially the background noise behaviour, become a limiting factor for the instrument sensitivity in regions where the aerosol load is low. Turbulent calculations require an accurate estimate of the contribution from velocity uncertainty estimates, which are directly related to the signal-to-noise ratio. Any bias in the signal-to-noise ratio will propagate through as a bias in turbulent properties. In this paper we present a method to correct for artefacts in the background noise behaviour of commercially available Doppler lidars and reduce the signal-to-noise ratio threshold used to discriminate between noise, and cloud or aerosol signals. We show that, for Doppler lidars operating continuously at a number of locations in Finland, the data availability can be increased by as much as 50 % after performing this background correction and subsequent reduction in the threshold. The reduction in bias also greatly improves subsequent calculations of turbulent properties in weak signal regimes.

  4. Classifying threats with a 14-MeV neutron interrogation system.

    PubMed

    Strellis, Dan; Gozani, Tsahi

    2005-01-01

    SeaPODDS (Sea Portable Drug Detection System) is a non-intrusive tool for detecting concealed threats in hidden compartments of maritime vessels. This system consists of an electronic neutron generator, a gamma-ray detector, a data acquisition computer, and a laptop computer user-interface. Although initially developed to detect narcotics, recent algorithm developments have shown that the system is capable of correctly classifying a threat into one of four distinct categories: narcotic, explosive, chemical weapon, or radiological dispersion device (RDD). Detection of narcotics, explosives, and chemical weapons is based on gamma-ray signatures unique to the chemical elements. Elements are identified by their characteristic prompt gamma-rays induced by fast and thermal neutrons. Detection of RDD is accomplished by detecting gamma-rays emitted by common radioisotopes and nuclear reactor fission products. The algorithm phenomenology for classifying threats into the proper categories is presented here. PMID:15985373

  5. Algorithm for x-ray beam hardening and scatter correction in low-dose cone-beam CT: phantom studies

    NASA Astrophysics Data System (ADS)

    Liu, Wenlei; Rong, Junyan; Gao, Peng; Liao, Qimei; Lu, HongBing

    2016-03-01

    X-ray scatter, together with beam hardening, poses a significant limitation to image quality in cone-beam CT (CBCT), resulting in image artifacts, contrast reduction, and loss of CT number accuracy. The x-ray radiation dose is also a non-negligible concern. Many scatter and beam hardening correction methods have been developed independently, but they are rarely combined with low-dose CT reconstruction. In this paper, we combine scatter suppression with beam hardening correction for sparse-view CT reconstruction to improve CT image quality and reduce CT radiation dose. Firstly, scatter was measured, estimated, and removed using measurement-based methods, assuming that the signal in the lead blocker shadow is attributable only to x-ray scatter. Secondly, beam hardening was modeled by estimating an equivalent attenuation coefficient at the effective energy, which was integrated into the forward projector of the algebraic reconstruction technique (ART). Finally, compressed sensing (CS) iterative reconstruction is carried out for sparse-view CT reconstruction to reduce the CT radiation. Preliminary Monte Carlo simulated experiments indicate that with only about 25% of the conventional dose, our method reduces the magnitude of the cupping artifact by a factor of 6.1, increases the contrast by a factor of 1.4, and increases the CNR by a factor of 15. The proposed method can provide good reconstructed images from a few view projections, with effective suppression of artifacts caused by scatter and beam hardening, as well as a reduced radiation dose. With this framework and modeling, it may provide a new way for low-dose CT imaging.
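
    As a rough illustration of the measurement-based scatter step, the 1-D sketch below assumes the detector columns lying in lead-blocker shadows are known, treats the signal there as pure scatter, interpolates a scatter profile across the projection, and subtracts it. The geometry and numbers are invented; this is not the authors' implementation.

```python
import numpy as np

def remove_scatter_1d(projection, blocker_cols):
    """Estimate scatter from the signal behind lead blockers (where primary
    fluence is assumed to be zero) and subtract the interpolated scatter profile."""
    cols = np.arange(projection.size)
    scatter_samples = projection[blocker_cols]            # signal there = scatter only
    scatter = np.interp(cols, blocker_cols, scatter_samples)
    corrected = np.clip(projection - scatter, 0, None)    # keep non-negative
    return corrected, scatter

# Hypothetical detector row: a primary signal plus a smooth scatter background
cols = np.arange(512)
primary = 1000 * np.exp(-((cols - 256) / 120.0) ** 2)
scatter_true = 150 + 50 * np.sin(cols / 512.0 * np.pi)
blockers = np.arange(16, 512, 64)                          # blocker shadow centres
proj = primary + scatter_true
proj[blockers] = scatter_true[blockers]                    # behind blockers: scatter only
corrected, est = remove_scatter_1d(proj, blockers)
print(float(np.abs(corrected[200] - primary[200])))        # small residual error
```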

  6. SU-E-I-05: A Correction Algorithm for Kilovoltage Cone-Beam Computed Tomography Dose Calculations in Cervical Cancer Patients

    SciTech Connect

    Zhang, J; Zhang, W; Lu, J

    2015-06-15

    Purpose: To investigate the accuracy and feasibility of dose calculations using kilovoltage cone beam computed tomography in cervical cancer radiotherapy using a correction algorithm. Methods: The Hounsfield unit (HU) to electron density (HU-density) curve was obtained for both the planning CT (pCT) and the kilovoltage cone beam CT (CBCT) using a CIRS-062 calibration phantom. The pCT and kV-CBCT images have different HU values, so directly using the CBCT HU-density curve for dose calculation on CBCT images may introduce deviations in the dose distribution; it is necessary to normalize the HU values between pCT and CBCT. A HU correction algorithm was therefore applied to the CBCT images (cCBCT). Fifteen intensity-modulated radiation therapy (IMRT) plans of cervical cancer were chosen, and the plans were transferred to the pCT and cCBCT data sets without any changes for dose calculation. Phantom and patient studies were carried out. The dose differences and dose distributions were compared between the cCBCT plan and the pCT plan. Results: The HU numbers of CBCT were measured several times, and the maximum change was less than 2%. Compared with pCT, both CBCT and cCBCT showed discrepancies: the dose differences for the CBCT and cCBCT images were 2.48%±0.65% (range: 1.3%∼3.8%) and 0.48%±0.21% (range: 0.1%∼0.82%) for the phantom study, respectively. For dose calculation on patient images, the dose differences were 2.25%±0.43% (range: 1.4%∼3.4%) and 0.63%±0.35% (range: 0.13%∼0.97%), respectively. For the dose distributions, the passing rate of cCBCT was higher than that of CBCT. Conclusion: CBCT-based dose calculation is feasible in cervical cancer radiotherapy, and the correction algorithm offers acceptable accuracy. It will become a useful tool for adaptive radiation therapy.
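
    The abstract does not state the form of the HU correction; a common, simple choice is a piecewise-linear mapping fitted between CBCT and planning-CT HU values measured on the same phantom inserts, which is what this sketch assumes (the insert HU values are hypothetical).

```python
import numpy as np

# Hypothetical mean HU of the calibration phantom inserts measured on pCT and CBCT
hu_pct  = np.array([-1000, -800, -500, -100,  0, 200, 800, 1200], dtype=float)
hu_cbct = np.array([ -980, -760, -470,  -80, 20, 230, 860, 1300], dtype=float)

def correct_cbct_hu(cbct_image):
    """Map CBCT HU onto the pCT HU scale by piecewise-linear interpolation
    through the phantom calibration points (a simple stand-in for the paper's
    correction algorithm)."""
    return np.interp(cbct_image, hu_cbct, hu_pct)

cbct_slice = np.array([[-500.0, 40.0], [250.0, 900.0]])
print(correct_cbct_hu(cbct_slice))
```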

  7. A fuzzy classifier system for process control

    NASA Technical Reports Server (NTRS)

    Karr, C. L.; Phillips, J. C.

    1994-01-01

    A fuzzy classifier system that discovers rules for controlling a mathematical model of a pH titration system was developed by researchers at the U.S. Bureau of Mines (USBM). Fuzzy classifier systems successfully combine the strengths of learning classifier systems and fuzzy logic controllers. Learning classifier systems resemble familiar production rule-based systems, but they represent their IF-THEN rules by strings of characters rather than in the traditional linguistic terms. Fuzzy logic is a tool that allows for the incorporation of abstract concepts into rule based-systems, thereby allowing the rules to resemble the familiar 'rules-of-thumb' commonly used by humans when solving difficult process control and reasoning problems. Like learning classifier systems, fuzzy classifier systems employ a genetic algorithm to explore and sample new rules for manipulating the problem environment. Like fuzzy logic controllers, fuzzy classifier systems encapsulate knowledge in the form of production rules. The results presented in this paper demonstrate the ability of fuzzy classifier systems to generate a fuzzy logic-based process control system.

  8. A novel semi-supervised hyperspectral image classification approach based on spatial neighborhood information and classifier combination

    NASA Astrophysics Data System (ADS)

    Tan, Kun; Hu, Jun; Li, Jun; Du, Peijun

    2015-07-01

    In the process of semi-supervised hyperspectral image classification, spatial neighborhood information of training samples is widely applied to solve the small sample size problem. However, the neighborhood information of unlabeled samples is usually ignored. In this paper, we propose a new algorithm for semi-supervised hyperspectral image classification in which the spatial neighborhood information is combined with the classifier to enhance the classification ability in determining the class label of selected unlabeled samples. There are two key points in this algorithm: (1) the correct label should appear in the spatial neighborhood of an unlabeled sample; (2) combining classifiers can obtain better results. Two classifiers, multinomial logistic regression (MLR) and k-nearest neighbor (KNN), are combined in this way to further improve the performance. The performance of the proposed approach was assessed with two real hyperspectral data sets, and the obtained results indicate that the proposed approach is effective for hyperspectral classification.
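
    A deliberately simplified sketch of the two key points (agreement between MLR and KNN, plus the requirement that the candidate label occur in the spatial neighborhood) on a toy image; the band count, neighborhood rule, and labeling scheme are assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

# Toy "image": a 20x20 grid of 5-band pixels belonging to two spectral classes.
rng = np.random.default_rng(0)
H, W, B = 20, 20, 5
truth = (np.add.outer(np.arange(H), np.arange(W)) > 19).astype(int)   # two regions
cube = rng.normal(truth[..., None] * 1.5, 1.0, size=(H, W, B))

X = cube.reshape(-1, B)
y = truth.reshape(-1)
# A handful of labeled pixels from each class; the rest are treated as unlabeled.
labeled = np.concatenate([rng.choice(np.flatnonzero(y == c), size=10, replace=False)
                          for c in (0, 1)])
mlr = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
knn = KNeighborsClassifier(n_neighbors=3).fit(X[labeled], y[labeled])

def neighbour_labels(i, labeled_set, labels):
    """Class labels of labeled pixels in the 8-neighbourhood of pixel i."""
    r, c = divmod(i, W)
    out = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            j = (r + dr) * W + (c + dc)
            if (dr or dc) and 0 <= r + dr < H and 0 <= c + dc < W and j in labeled_set:
                out.append(labels[j])
    return out

# Pseudo-label an unlabeled pixel only when both classifiers agree AND that
# label occurs among its labeled spatial neighbours (the two key points above).
labeled_set = set(labeled.tolist())
pseudo = {}
for i in range(H * W):
    if i in labeled_set:
        continue
    p1, p2 = mlr.predict(X[i:i + 1])[0], knn.predict(X[i:i + 1])[0]
    if p1 == p2 and p1 in neighbour_labels(i, labeled_set, y):
        pseudo[i] = int(p1)
print(f"{len(pseudo)} pixels pseudo-labeled")
```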

  9. Recognition Using Hybrid Classifiers.

    PubMed

    Osadchy, Margarita; Keren, Daniel; Raviv, Dolev

    2016-04-01

    A canonical problem in computer vision is category recognition (e.g., find all instances of human faces, cars etc., in an image). Typically, the input for training a binary classifier is a relatively small sample of positive examples, and a huge sample of negative examples, which can be very diverse, consisting of images from a large number of categories. The difficulty of the problem sharply increases with the dimension and size of the negative example set. We propose to alleviate this problem by applying a "hybrid" classifier, which replaces the negative samples by a prior, and then finds a hyperplane which separates the positive samples from this prior. The method is extended to kernel space and to an ensemble-based approach. The resulting binary classifiers achieve an identical or better classification rate than SVM, while requiring far smaller memory and lower computational complexity to train and apply. PMID:26959677

  10. A novel fuzzy logic correctional algorithm for traction control systems on uneven low-friction road conditions

    NASA Astrophysics Data System (ADS)

    Li, Liang; Ran, Xu; Wu, Kaihui; Song, Jian; Han, Zongqi

    2015-06-01

    The traction control system (TCS) can prevent excessive skid of the driving wheels and thus enhance the driving performance and directional stability of the vehicle. But when driven on an uneven low-friction road, the vehicle body often vibrates severely due to the drastic fluctuations of the driving wheels, and the vehicle comfort can be greatly reduced. These vibrations can hardly be removed with the traditional drive-slip control logic of the TCS. In this paper, a novel fuzzy logic controller is proposed, in which the vibration signals of the driving wheels are adopted as new controlled variables, and the engine torque and the active brake pressure are coordinately re-adjusted in addition to the basic logic of a traditional TCS. In the proposed controller, an adjustable engine torque and pressure compensation loop is adopted to constrain drastic vehicle vibration. Thus, the wheel driving slips and the vibration levels can be adjusted synchronously and effectively. The simulation results and real vehicle tests validate that the proposed algorithm is effective and adaptable for complicated uneven low-friction roads.

  11. The Algorithm Theoretical Basis Document for the Atmospheric Delay Correction to GLAS Laser Altimeter Ranges. Volume 8

    NASA Technical Reports Server (NTRS)

    Herring, Thomas A.; Quinn, Katherine J.

    2012-01-01

    NASA's Ice, Cloud, and Land Elevation Satellite (ICESat) mission will be launched in late 2001. Its primary instrument is the Geoscience Laser Altimeter System (GLAS). The main purpose of this instrument is to measure elevation changes of the Greenland and Antarctic ice sheets. To accurately measure the ranges it is necessary to correct for the atmospheric delay of the laser pulses. The atmospheric delay depends on the integral of the refractive index along the path that the laser pulse travels through the atmosphere. The refractive index of air at optical wavelengths is a function of density and molecular composition. For ray paths near zenith and closed-form equations for the refractivity, the atmospheric delay can be shown to be directly related to surface pressure and total column precipitable water vapor. For ray paths off zenith, a mapping function relates the delay to the zenith delay. The closed-form equations for refractivity recommended by the International Union of Geodesy and Geophysics (IUGG) are optimized for ground-based geodesy techniques, and in the next section we consider whether these equations are suitable for satellite laser altimetry.

  12. A simulation based approach to optimize inventory replenishment with RAND algorithm: An extended study of corrected demand using Holt's method for textile industry

    NASA Astrophysics Data System (ADS)

    Morshed, Mohammad Sarwar; Kamal, Mostafa Mashnoon; Khan, Somaiya Islam

    2016-07-01

    Inventory has been a major concern in supply chain management, and numerous studies have recently been carried out on inventory control, bringing forth a number of methods that efficiently manage inventory and related overheads by reducing the cost of replenishment. This research is aimed at providing a better replenishment policy for the multi-product, single-supplier situation of chemical raw materials in textile industries in Bangladesh. It is assumed that industries currently pursue an individual replenishment system. The purpose is to find the optimum ideal cycle time and the individual replenishment cycle time of each product that yield the lowest annual holding and ordering cost, and also to find the optimum ordering quantity. In this paper an indirect grouping strategy has been used; it is suggested that indirect grouping outperforms direct grouping when the major ordering cost is high. An algorithm by Kaspi and Rosenblatt (1991) called RAND is used for its simplicity and ease of application. RAND provides an ideal cycle time (T) for replenishment and an integer multiplier (ki) for each individual item; thus the replenishment cycle time for each product is found as T×ki. Firstly, based on the data, a comparison between the currently prevailing (individual) process and RAND using actual demands shows a 49% improvement in the total cost of replenishment. Secondly, discrepancies in demand are corrected by using Holt's method. However, demands can only be forecasted one or two months into the future because of the demand pattern of the industry under consideration. Evidently, the application of RAND with corrected demand displays even greater improvement. The results of this study demonstrate that the cost of replenishment can be significantly reduced by applying the RAND algorithm and exponential smoothing models.
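
    Holt's method used for the corrected demand is standard double exponential smoothing with a linear trend; a minimal sketch with assumed smoothing constants, whose output would feed the RAND cycle-time computation:

```python
def holt_forecast(demand, alpha=0.3, beta=0.1, horizon=2):
    """Holt's linear (trend-corrected) exponential smoothing.

    Returns forecasts for 1..horizon periods beyond the last observation.
    alpha and beta are the level and trend smoothing constants (assumed values).
    """
    level, trend = demand[0], demand[1] - demand[0]
    for d in demand[1:]:
        prev_level = level
        level = alpha * d + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(horizon)]

# Monthly demand of one chemical raw material (hypothetical figures)
demand = [120, 132, 128, 140, 151, 149, 160]
print([round(f, 1) for f in holt_forecast(demand)])   # one- and two-month forecasts
```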

  13. An automated approach to the design of decision tree classifiers

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Chin, R.; Beaudet, P.

    1982-01-01

    An automated technique is presented for designing effective decision tree classifiers predicated only on a priori class statistics. The procedure relies on linear feature extractions and Bayes table look-up decision rules. Associated error matrices are computed and utilized to provide an optimal design of the decision tree at each so-called 'node'. A by-product of this procedure is a simple algorithm for computing the global probability of correct classification assuming the statistical independence of the decision rules. Attention is given to a more precise definition of decision tree classification, the mathematical details on the technique for automated decision tree design, and an example of a simple application of the procedure using class statistics acquired from an actual Landsat scene.
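
    The by-product mentioned above, the global probability of correct classification under statistical independence of the node decision rules, reduces to multiplying per-node correct-decision probabilities along each class's root-to-leaf path and weighting by the class priors. A small sketch with hypothetical numbers:

```python
# Probability that each class is routed correctly at the nodes on its path,
# assuming the node decision rules err independently (hypothetical numbers).
paths = {
    "water":    [0.99],              # separated at the root node
    "forest":   [0.99, 0.95],        # root node, then node 2
    "cropland": [0.99, 0.95, 0.90],
    "urban":    [0.99, 0.95, 0.90],
}
priors = {"water": 0.10, "forest": 0.30, "cropland": 0.35, "urban": 0.25}

def global_correct_probability(paths, priors):
    total = 0.0
    for cls, node_probs in paths.items():
        p_path = 1.0
        for p in node_probs:          # independence: probabilities multiply
            p_path *= p
        total += priors[cls] * p_path
    return total

print(round(global_correct_probability(paths, priors), 4))
```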

  14. Classifying Adolescent Perfectionists

    ERIC Educational Resources Information Center

    Rice, Kenneth G.; Ashby, Jeffrey S.; Gilman, Rich

    2011-01-01

    A large school-based sample of 9th-grade adolescents (N = 875) completed the Almost Perfect Scale-Revised (APS-R; Slaney, Mobley, Trippi, Ashby, & Johnson, 1996). Decision rules and cut-scores were developed and replicated that classify adolescents as one of two kinds of perfectionists (adaptive or maladaptive) or as nonperfectionists. A…

  15. Number in Classifier Languages

    ERIC Educational Resources Information Center

    Nomoto, Hiroki

    2013-01-01

    Classifier languages are often described as lacking genuine number morphology and treating all common nouns, including those conceptually count, as an unindividuated mass. This study argues that neither of these popular assumptions is true, and presents new generalizations and analyses gained by abandoning them. I claim that no difference exists…

  16. Classifying Cereal Data

    Cancer.gov

    The DSQ includes questions about cereal intake and allows respondents up to two responses on which cereals they consume. We classified each cereal reported first by hot or cold, and then along four dimensions: density of added sugars, whole grains, fiber, and calcium.

  17. Cascaded classifier for large-scale data applied to automatic segmentation of articular cartilage

    NASA Astrophysics Data System (ADS)

    Prasoon, Adhish; Igel, Christian; Loog, Marco; Lauze, François; Dam, Erik; Nielsen, Mads

    2012-02-01

    Many classification/segmentation tasks in medical imaging are particularly challenging for machine learning algorithms because of the huge amount of training data required to cover biological variability. Learning methods scaling badly in the number of training data points may not be applicable. This may exclude powerful classifiers with good generalization performance such as standard non-linear support vector machines (SVMs). Further, many medical imaging problems have highly imbalanced class populations, because the object to be segmented has only few pixels/voxels compared to the background. This article presents a two-stage classifier for large-scale medical imaging problems. In the first stage, a classifier that is easily trainable on large data sets is employed. The class imbalance is exploited and the classifier is adjusted to correctly detect background with a very high accuracy. Only the comparatively few data points not identified as background are passed to the second stage. Here a powerful classifier with high training time complexity can be employed for making the final decision whether a data point belongs to the object or not. We applied our method to the problem of automatically segmenting tibial articular cartilage from knee MRI scans. We show that by using nearest neighbor (kNN) in the first stage we can reduce the amount of data for training a non-linear SVM in the second stage. The cascaded system achieves better results than the state-of-the-art method relying on a single kNN classifier.
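
    A minimal sketch of the cascade idea: a cheap first-stage classifier tuned to discard background with high confidence, and a non-linear SVM trained and applied only on the points that survive. The data are synthetic and the probability threshold is an assumption.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Imbalanced data: class 0 = background voxels, class 1 = cartilage voxels
X, y = make_classification(n_samples=20000, n_features=10, weights=[0.97, 0.03],
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

# Stage 1: cheap kNN; a voxel is passed on unless it is confidently background
# (the 0.05 probability threshold is an assumption).
stage1 = KNeighborsClassifier(n_neighbors=15).fit(Xtr, ytr)
passed = stage1.predict_proba(Xte)[:, 1] > 0.05   # everything else stays background

# Stage 2: non-linear SVM trained on the (much smaller) foreground-rich subset
mask_tr = stage1.predict_proba(Xtr)[:, 1] > 0.05
stage2 = SVC(kernel="rbf", C=1.0).fit(Xtr[mask_tr], ytr[mask_tr])

pred = np.zeros_like(yte)
pred[passed] = stage2.predict(Xte[passed])
print(f"cascade accuracy: {np.mean(pred == yte):.3f}, "
      f"fraction reaching stage 2: {passed.mean():.3f}")
```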

  18. Crystal and molecular structures of selected organic and organometallic compounds and an algorithm for empirical absorption correction

    SciTech Connect

    Karcher, B.

    1981-10-01

    Cr(CO)₅(SCMe₂) crystallizes in the monoclinic space group P2₁/a with a = 10.468(8), b = 11.879(5), c = 9.575(6) Å, and β = 108.14(9)°, with an octahedral coordination around the chromium atom. PSN₃C₆H₁₂ crystallizes in the monoclinic space group P2₁/n with a = 10.896(1), b = 11.443(1), c = 7.288(1) Å, and β = 104.45(1)°. Each of the five-membered rings in this structure contains a carbon atom which is puckered toward the sulfur and out of the nearly planar arrays of the remaining ring atoms. (RhO₄N₄C₄₈H₅₆)⁺(BC₂₄H₂₀)⁻·1.5NC₂H₃ crystallizes in the triclinic space group P1 with a = 17.355(8), b = 21.135(10), c = 10.757(5) Å, α = 101.29(5)°, β = 98.36(5)°, and γ = 113.92(4)°. Each Rh cation complex is a monomer. MoP₂O₁₀C₁₆H₂₂ crystallizes in the monoclinic space group P2₁/c with a = 12.220(3), b = 9.963(2), c = 20.150(6) Å, and β = 103.01(3)°. The molybdenum atom occupies the axial position of the six-membered ring of each of the two phosphorinane ligands. An empirical absorption correction program was written.

  19. The Digital Correction Unit: A data correction/compaction chip

    SciTech Connect

    MacKenzie, S.; Nielsen, B.; Paffrath, L.; Russell, J.; Sherden, D.

    1986-10-01

    The Digital Correction Unit (DCU) is a semi-custom CMOS integrated circuit which corrects and compacts data for the SLD experiment. It performs a piece-wise linear correction to data, and implements two separate compaction algorithms. This paper describes the basic functionality of the DCU and its correction and compaction algorithms.
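
    The piece-wise linear correction applied by the DCU in hardware can be described compactly in software as interpolation through calibration breakpoints; the breakpoint values in this sketch are hypothetical.

```python
import numpy as np

# Hypothetical calibration breakpoints: raw ADC code -> corrected value
raw_knots       = np.array([0, 200, 800, 1600, 2400, 4095], dtype=float)
corrected_knots = np.array([0, 190, 810, 1630, 2380, 4095], dtype=float)

def piecewise_linear_correct(raw):
    """Piece-wise linear correction: interpolate between calibration knots."""
    return np.interp(raw, raw_knots, corrected_knots)

print(piecewise_linear_correct(np.array([100, 1000, 3000])))
```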

  20. New results in semi-supervised learning using adaptive classifier fusion

    NASA Astrophysics Data System (ADS)

    Lynch, Robert; Willett, Peter

    2014-05-01

    In typical classification problems the data used to train a model for each class are correctly labeled, so that fully supervised learning can be utilized. For example, many illustrative labeled data sets can be found at sources such as the UCI Repository for Machine Learning (http://archive.ics.uci.edu/ml/), or at the Keel Data Set Repository (http://www.keel.es). However, increasingly many real-world classification problems involve data that contain both labeled and unlabeled samples. In the latter case, the data samples are assumed to be missing all class label information, and when used as training data these samples are considered to be of unknown origin (i.e., to the learning system, actual class membership is completely unknown). Typically, when presented with a classification problem containing both labeled and unlabeled training samples, a common technique is to throw out the unlabeled data. In other words, the unlabeled data are not included with the existing labeled data for learning, which can result in a poorly trained classifier that does not reach its full performance potential. In most cases, the primary reason that unlabeled data are not often used for training is that, depending on the classifier, the correct optimal model for semi-supervised classification (i.e., a classifier that learns class membership using both labeled and unlabeled samples) can be far too complicated to develop. In previous work, results were shown based on the fusion of binary classifiers to improve performance in multiclass classification problems. In this case, Bayesian methods were used to fuse binary classifier outputs, while selecting the most relevant classifier pairs to improve the overall classifier decision space. Here, this work is extended by developing new algorithms for improving semi-supervised classification performance. Results are demonstrated with real data from the UCI and Keel Repositories.

  1. A simulation algorithm for ultrasound liver backscattered signals.

    PubMed

    Zatari, D; Botros, N; Dunn, F

    1995-11-01

    In this study, we present a simulation algorithm for the backscattered ultrasound signal from liver tissue. The algorithm simulates backscattered signals from normal liver and three different liver abnormalities. The performance of the algorithm has been tested by statistically comparing the simulated signals with corresponding signals obtained from a previous in vivo study. To verify that the simulated signals can be classified correctly we have applied a classification technique based on an artificial neural network. The acoustic features extracted from the spectrum over a 2.5 MHz bandwidth are the attenuation coefficient and the change of speed of sound with frequency (dispersion). Our results show that the algorithm performs satisfactorily. Further testing of the algorithm is conducted by the use of a data acquisition and analysis system designed by the authors, where several simulated signals are stored in memory chips and classified according to their abnormalities. PMID:8560631

  2. A General Fuzzy Cerebellar Model Neural Network Multidimensional Classifier Using Intuitionistic Fuzzy Sets for Medical Identification.

    PubMed

    Zhao, Jing; Lin, Lo-Yi; Lin, Chih-Min

    2016-01-01

    The diversity of medical factors makes the analysis and judgment of uncertainty one of the challenges of medical diagnosis. A well-designed classification and judgment system for medical uncertainty can increase the rate of correct medical diagnosis. In this paper, a new multidimensional classifier is proposed by using an intelligent algorithm, which is the general fuzzy cerebellar model neural network (GFCMNN). To obtain more information about uncertainty, an intuitionistic fuzzy linguistic term is employed to describe medical features. The solution of classification is obtained by a similarity measurement. The advantages of the novel classifier proposed here are drawn out by comparing the same medical example under the methods of intuitionistic fuzzy sets (IFSs) and intuitionistic fuzzy cross-entropy (IFCE) with different score functions. Cross-verification experiments are also carried out to further test the classification ability of the GFCMNN multidimensional classifier. All of these experimental results show the effectiveness of the proposed GFCMNN multidimensional classifier and indicate that it can assist in supporting correct medical diagnoses associated with multiple categories. PMID:27298619

  3. A General Fuzzy Cerebellar Model Neural Network Multidimensional Classifier Using Intuitionistic Fuzzy Sets for Medical Identification

    PubMed Central

    Zhao, Jing; Lin, Lo-Yi

    2016-01-01

    The diversity of medical factors makes the analysis and judgment of uncertainty one of the challenges of medical diagnosis. A well-designed classification and judgment system for medical uncertainty can increase the rate of correct medical diagnosis. In this paper, a new multidimensional classifier is proposed by using an intelligent algorithm, which is the general fuzzy cerebellar model neural network (GFCMNN). To obtain more information about uncertainty, an intuitionistic fuzzy linguistic term is employed to describe medical features. The solution of classification is obtained by a similarity measurement. The advantages of the novel classifier proposed here are drawn out by comparing the same medical example under the methods of intuitionistic fuzzy sets (IFSs) and intuitionistic fuzzy cross-entropy (IFCE) with different score functions. Cross-verification experiments are also carried out to further test the classification ability of the GFCMNN multidimensional classifier. All of these experimental results show the effectiveness of the proposed GFCMNN multidimensional classifier and indicate that it can assist in supporting correct medical diagnoses associated with multiple categories. PMID:27298619

  4. Generating compact classifier systems using a simple artificial immune system.

    PubMed

    Leung, Kevin; Cheong, France; Cheong, Christopher

    2007-10-01

    Current artificial immune system (AIS) classifiers have two major problems: 1) their populations of B-cells can grow to huge proportions, and 2) optimizing one B-cell (part of the classifier) at a time does not necessarily guarantee that the B-cell pool (the whole classifier) will be optimized. In this paper, the design of a new AIS algorithm and classifier system called simple AIS is described. It is different from traditional AIS classifiers in that it takes only one B-cell, instead of a B-cell pool, to represent the classifier. This approach ensures global optimization of the whole system, and in addition, no population control mechanism is needed. The classifier was tested on seven benchmark data sets using different classification techniques and was found to be very competitive when compared to other classifiers. PMID:17926714

  5. Classifying Facial Actions

    PubMed Central

    Donato, Gianluca; Bartlett, Marian Stewart; Hager, Joseph C.; Ekman, Paul; Sejnowski, Terrence J.

    2010-01-01

    The Facial Action Coding System (FACS) [23] is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These techniques include analysis of facial motion through estimation of optical flow; holistic spatial analysis, such as principal component analysis, independent component analysis, local feature analysis, and linear discriminant analysis; and methods based on the outputs of local filters, such as Gabor wavelet representations and local principal components. Performance of these systems is compared to naive and expert human subjects. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96 percent accuracy for classifying 12 facial actions of the upper and lower face. The results provide converging evidence for the importance of using local filters, high spatial frequencies, and statistical independence for classifying facial actions. PMID:21188284

  6. Evaluation and Analysis of SEASAT-A Scanning Multichannel Microwave Radiometer (SSMR) Antenna Pattern Correction (APC) Algorithm. Sub-task 4: Interim Mode T Sub B Versus Cross and Nominal Mode T Sub B

    NASA Technical Reports Server (NTRS)

    Kitzis, J. L.; Kitzis, S. N.

    1979-01-01

    The brightness temperature data produced by the SMMR Antenna Pattern Correction algorithm are evaluated. The evaluation consists of: (1) a direct comparison of the outputs of the interim, cross, and nominal APC modes; (2) a refinement of the previously determined cos beta estimates; and (3) a comparison of the world brightness temperature (T sub B) map with actual SMMR measurements.

  7. Classification of Horse Gaits Using FCM-Based Neuro-Fuzzy Classifier from the Transformed Data Information of Inertial Sensor.

    PubMed

    Lee, Jae-Neung; Lee, Myung-Won; Byeon, Yeong-Hyeon; Lee, Won-Sik; Kwak, Keun-Chang

    2016-01-01

    In this study, we classify four horse gaits (walk, sitting trot, rising trot, canter) of three breeds of horse (Jeju, Warmblood, and Thoroughbred) using a neuro-fuzzy classifier (NFC) of the Takagi-Sugeno-Kang (TSK) type from data information transformed by a wavelet packet (WP). The design of the NFC is accomplished by using a fuzzy c-means (FCM) clustering algorithm that can solve the problem of dimensionality increase due to the flexible scatter partitioning. For this purpose, we use the rider's hip motion from the sensor information collected by inertial sensors as feature data for the classification of a horse's gaits. Furthermore, we develop a coaching system under both real horse riding and simulator environments and propose a method for analyzing the rider's motion. Using the results of the analysis, the rider can be coached in the correct motion corresponding to the classified gait. To construct a motion database, the data collected from 16 inertial sensors attached to a motion capture suit worn by one of the country's top-level horse riding experts were used. Experiments using the original motion data and the transformed motion data were conducted to evaluate the classification performance using various classifiers. The experimental results revealed that the presented FCM-NFC showed a better accuracy performance (97.5%) than a neural network classifier (NNC), naive Bayesian classifier (NBC), and radial basis function network classifier (RBFNC) for the transformed motion data. PMID:27171098

  8. Classification of Horse Gaits Using FCM-Based Neuro-Fuzzy Classifier from the Transformed Data Information of Inertial Sensor

    PubMed Central

    Lee, Jae-Neung; Lee, Myung-Won; Byeon, Yeong-Hyeon; Lee, Won-Sik; Kwak, Keun-Chang

    2016-01-01

    In this study, we classify four horse gaits (walk, sitting trot, rising trot, canter) of three breeds of horse (Jeju, Warmblood, and Thoroughbred) using a neuro-fuzzy classifier (NFC) of the Takagi-Sugeno-Kang (TSK) type from data information transformed by a wavelet packet (WP). The design of the NFC is accomplished by using a fuzzy c-means (FCM) clustering algorithm that can solve the problem of dimensionality increase due to the flexible scatter partitioning. For this purpose, we use the rider’s hip motion from the sensor information collected by inertial sensors as feature data for the classification of a horse’s gaits. Furthermore, we develop a coaching system under both real horse riding and simulator environments and propose a method for analyzing the rider’s motion. Using the results of the analysis, the rider can be coached in the correct motion corresponding to the classified gait. To construct a motion database, the data collected from 16 inertial sensors attached to a motion capture suit worn by one of the country’s top-level horse riding experts were used. Experiments using the original motion data and the transformed motion data were conducted to evaluate the classification performance using various classifiers. The experimental results revealed that the presented FCM-NFC showed a better accuracy performance (97.5%) than a neural network classifier (NNC), naive Bayesian classifier (NBC), and radial basis function network classifier (RBFNC) for the transformed motion data. PMID:27171098

  9. Transionospheric chirp event classifier

    SciTech Connect

    Argo, P.E.; Fitzgerald, T.J.; Freeman, M.J.

    1995-09-01

    In this paper we will discuss a project designed to provide computer recognition of the transionospheric chirps/pulses measured by the Blackbeard (BB) satellite, and expected to be measured by the upcoming FORTE satellite. The Blackbeard data have been perused by human means, which has been satisfactory for the relatively small amount of data taken by Blackbeard. But with the advent of the FORTE system, which by some accounts might "see" thousands of events per day, it is important to provide a software/hardware method of accurately analyzing the data. In fact, we are providing an onboard DSP system for FORTE, which will test the usefulness of our Event Classifier techniques in situ. At present we are constrained to work with data from the Blackbeard satellite, and will discuss the progress made to date.

  10. Robust Framework to Combine Diverse Classifiers Assigning Distributed Confidence to Individual Classifiers at Class Level

    PubMed Central

    Arshad, Sannia; Rho, Seungmin

    2014-01-01

    We have presented a classification framework that combines multiple heterogeneous classifiers in the presence of class label noise. An extension of m-Mediods based modeling is presented that generates models of the various classes whilst identifying and filtering noisy training data. This noise-free data is further used to learn models for other classifiers such as GMM and SVM. A weight learning method is then introduced to learn weights on each class for different classifiers to construct an ensemble. For this purpose, we applied a genetic algorithm to search for an optimal weight vector with which the classifier ensemble is expected to give the best accuracy. The proposed approach is evaluated on a variety of real-life datasets. It is also compared with existing standard ensemble techniques such as Adaboost, Bagging, and Random Subspace Methods. Experimental results show the superiority of the proposed ensemble method compared to its competitors, especially in the presence of class label noise and imbalanced classes. PMID:25295302

  11. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.

    1986-01-01

    Systolic algorithms are a class of parallel algorithms, with small-grain concurrency, well suited for implementation in VLSI. They are intended to be implemented as high-performance, computation-bound back-end processors and are characterized by a tessellating interconnection of identical processing elements. This dissertation investigates the problem of proving the correctness of systolic algorithms. The following are reported in this dissertation: (1) a methodology for verifying the correctness of systolic algorithms based on solving the representation of an algorithm as recurrence equations. The methodology is demonstrated by proving the correctness of a systolic architecture for optimal parenthesization. (2) The implementation of mechanical proofs of correctness of two systolic algorithms, a convolution algorithm and an optimal parenthesization algorithm, using the Boyer-Moore theorem prover. (3) An induction principle for proving the correctness of systolic arrays which are modular. Two attendant inference rules, weak equivalence and shift transformation, which capture equivalent behavior of systolic arrays, are also presented.

  12. Classifying partner femicide.

    PubMed

    Dixon, Louise; Hamilton-Giachritsis, Catherine; Browne, Kevin

    2008-01-01

    The heterogeneity of domestic violent men has long been established. However, research has failed to examine this phenomenon among men committing the most severe form of domestic violence. This study aims to use a multidimensional approach to empirically construct a classification system of men who are incarcerated for the murder of their female partner based on the Holtzworth-Munroe and Stuart (1994) typology. Ninety men who had been convicted and imprisoned for the murder of their female partner or spouse in England were identified from two prison samples. A content dictionary defining offense and offender characteristics associated with two dimensions of psychopathology and criminality was developed. These variables were extracted from institutional records via content analysis and analyzed for thematic structure using multidimensional scaling procedures. The resultant framework classified 80% (n = 72) of the sample into three subgroups of men characterized by (a) low criminality/low psychopathology (15%), (b) moderate-high criminality/ high psychopathology (36%), and (c) high criminality/low-moderate psychopathology (49%). The latter two groups are akin to Holtzworth-Munroe and Stuart's (1994) generally violent/antisocial and dysphoric/borderline offender, respectively. The implications for intervention, developing consensus in research methodology across the field, and examining typologies of domestic violent men prospectively are discussed. PMID:18087033

  13. Learning algorithms for both real-time detection of solder shorts and for SPC measurement correction using cross-sectional x-ray images of PCBA solder joints

    NASA Astrophysics Data System (ADS)

    Roder, Paul A.

    1994-03-01

    Learning algorithms are introduced for use in the inspection of cross-sectional X-ray images of solder joints. These learning algorithms improve measurement accuracy by accounting for localized shading effects that can occur when inspecting double-sided printed circuit board assemblies. Two specific examples are discussed. The first is an algorithm for the detection of solder short defects. The second algorithm utilizes learning to generate more accurate statistical process control measurements.

  14. Chlorophyll-a concentration estimation with three bio-optical algorithms: correction for the low concentration range for the Yiam Reservoir, Korea

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Bio-optical algorithms have been applied to monitor water quality in surface water systems. Empirical algorithms, such as Ritchie (2008), Gons (2008), and Gilerson (2010), have been applied to estimate the chlorophyll-a (chl-a) concentrations. However, the performance of each algorithm severely degr...

  15. A Comparison of Unsupervised Classifiers on BATSE Catalog Data

    NASA Astrophysics Data System (ADS)

    Hakkila, Jon; Roiger, Richard J.; Haglin, David J.; Giblin, Timothy W.; Paciesas, William S.

    2003-04-01

    We classify BATSE gamma-ray bursts using unsupervised clustering algorithms in order to compare classification with statistical clustering techniques. BATSE bursts detected with homogeneous trigger criteria and measured with a limited attribute set (duration, hardness, and fluence) are classified using four unsupervised algorithms (the concept hierarchy classifier ESX, the EM algorithm, the K-means algorithm, and a Kohonen neural network). The classifiers prefer three-class solutions to two-class and four-class solutions. When forced to find two classes, the classifiers do not find the traditional long and short classes; many short soft events are placed in a class with the short hard bursts. When three classes are found, the classifiers clearly identify the short bursts, but place far more members in an intermediate-duration soft class than have been found using statistical clustering techniques. It appears that the boundary between short faint and long bright bursts is more important to the classifiers than is the boundary between short hard and long soft bursts. We conclude that the boundary between short faint and long hard bursts is the result of data bias and poor attribute selection. We recommend that future gamma-ray burst classification avoid using extrinsic parameters such as fluence, and instead concentrate on intrinsic properties such as spectral, temporal, and (when available) luminosity characteristics. Future classification should also be wary of correlated attributes (such as fluence and duration), as these bias classification results.
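
    A sketch of the statistical-clustering side of such a comparison, k-means on log-scaled duration, hardness, and fluence, with synthetic burst attributes standing in for the BATSE catalog values:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-ins for the BATSE attributes (seconds, hardness ratio, erg/cm^2)
short = np.column_stack([rng.lognormal(-0.5, 0.8, 300),   # durations around 0.6 s
                         rng.lognormal(1.6, 0.3, 300),    # harder spectra
                         rng.lognormal(-14.5, 0.7, 300)])
long_ = np.column_stack([rng.lognormal(3.3, 0.9, 700),    # durations around 30 s
                         rng.lognormal(1.1, 0.3, 700),
                         rng.lognormal(-13.0, 0.8, 700)])
bursts = np.vstack([short, long_])

X = StandardScaler().fit_transform(np.log10(bursts))
for k in (2, 3):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, np.bincount(labels))   # class membership counts per solution
```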

  16. Jitter Correction

    NASA Technical Reports Server (NTRS)

    Waegell, Mordecai J.; Palacios, David M.

    2011-01-01

    Jitter_Correct.m is a MATLAB function that automatically measures and corrects inter-frame jitter in an image sequence to a user-specified precision. In addition, the algorithm dynamically adjusts the image sample size to increase the accuracy of the measurement. The Jitter_Correct.m function takes an image sequence with unknown frame-to-frame jitter and computes the translations of each frame (column and row, in pixels) relative to a chosen reference frame with sub-pixel accuracy. The translations are measured using a cross-correlation Fourier transform method in which the relative phase of the two transformed images is fit to a plane. The measured translations are then used to correct the inter-frame jitter of the image sequence. The function also dynamically expands the image sample size over which the cross-correlation is measured to increase the accuracy of the measurement. This increases the robustness of the measurement to variable magnitudes of inter-frame jitter.
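
    A pared-down sketch of the cross-correlation step (integer-pixel peak only; the function's sub-pixel phase-plane fit and dynamic sample-size adjustment are not reproduced), written in Python rather than MATLAB:

```python
import numpy as np

def measure_shift(reference, frame):
    """Estimate the (row, col) translation of `frame` relative to `reference`
    from the peak of their FFT-based circular cross-correlation."""
    xcorr = np.fft.ifft2(np.fft.fft2(frame) * np.conj(np.fft.fft2(reference)))
    peak = np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape)
    shifts = np.array(peak, dtype=float)
    dims = np.array(xcorr.shape)
    shifts[shifts > dims / 2] -= dims[shifts > dims / 2]   # wrap to signed shifts
    return shifts

rng = np.random.default_rng(0)
ref = rng.normal(size=(128, 128))
jittered = np.roll(ref, shift=(3, -5), axis=(0, 1))        # known jitter
print(measure_shift(ref, jittered))                        # approximately [ 3. -5.]
```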

  17. Characterization of aluminum hydroxide particles from the Bayer process using neural network and Bayesian classifiers.

    PubMed

    Zaknich, A

    1997-01-01

    An automatic process of isolating and characterizing individual aluminum hydroxide particles from the Bayer process in scanning electron microscope gray-scale images of samples is described. It uses image processing algorithms, neural nets and Bayesian classifiers. As the particles are amorphous and differ greatly, the classification involves complex nonlinear decisions and anomalies. The process is in two stages: isolation of particles, and classification of each particle. The isolation process correctly identifies 96.9% of the objects as complete and single particles after a 15.5% rejection of questionable objects. The sample set had a possible 2455 particles taken from 384 256x256-pixel images. Of the 15.5%, 14.2% were correctly rejected. With no rejection the accuracy drops to 91.8%, which represents the accuracy of the isolation process alone. The isolated particles are classified by shape, single crystal protrusions, texture, crystal size, and agglomeration. The particle samples were preclassified by a human expert and the data were used to train the five classifiers to embody the expert knowledge. The system was designed to be used as a research tool to determine and study relationships between particle properties and plant parameters in the production of smelting grade alumina by the Bayer process. PMID:18255695

  18. Mapping algorithms on regular parallel architectures

    SciTech Connect

    Lee, P.

    1989-01-01

    Significantly, many time-intensive scientific algorithms are formulated as nested loops, which are inherently regularly structured. In this dissertation the relations between the mathematical structure of nested loop algorithms and the architectural capabilities required for their parallel execution are studied. The architectural model considered in depth is that of an arbitrary dimensional systolic array. The mathematical structure of the algorithm is characterized by classifying its data-dependence vectors according to the new ZERO-ONE-INFINITE property introduced. Using this classification, the first complete set of necessary and sufficient conditions for correct transformation of a nested loop algorithm onto a given systolic array of an arbitrary dimension by means of linear mappings is derived. Practical methods to derive optimal or suboptimal systolic array implementations are also provided. The techniques developed are used constructively to develop families of implementations satisfying various optimization criteria and to design programmable arrays efficiently executing classes of algorithms. In addition, a Computer-Aided Design system running on SUN workstations has been implemented to help in the design. The methodology, which deals with general algorithms, is illustrated by synthesizing linear and planar systolic array algorithms for matrix multiplication, a reindexed Warshall-Floyd transitive closure algorithm, and the longest common subsequence algorithm.

  19. Reasoning about systolic algorithms

    SciTech Connect

    Purushothaman, S.; Subrahmanyam, P.A.

    1988-12-01

    The authors present a methodology for verifying the correctness of systolic algorithms. The methodology is based on solving a set of Uniform Recurrence Equations obtained from a description of systolic algorithms as a set of recursive equations. They present an approach to mechanically verify the correctness of systolic algorithms using the Boyer-Moore theorem prover. A mechanical correctness proof of an example from the literature is also presented.

  20. Visual Classifier Training for Text Document Retrieval.

    PubMed

    Heimerl, F; Koch, S; Bosch, H; Ertl, T

    2012-12-01

    Performing exhaustive searches over a large number of text documents can be tedious, since it is very hard to formulate search queries or define filter criteria that capture an analyst's information need adequately. Classification through machine learning has the potential to improve search and filter tasks encompassing either complex or very specific information needs, individually. Unfortunately, analysts who are knowledgeable in their field are typically not machine learning specialists. Most classification methods, however, require a certain expertise regarding their parametrization to achieve good results. Supervised machine learning algorithms, in contrast, rely on labeled data, which can be provided by analysts. However, the effort for labeling can be very high, which shifts the problem from composing complex queries or defining accurate filters to another laborious task, in addition to the need for judging the trained classifier's quality. We therefore compare three approaches for interactive classifier training in a user study. All of the approaches are potential candidates for the integration into a larger retrieval system. They incorporate active learning to various degrees in order to reduce the labeling effort as well as to increase effectiveness. Two of them encompass interactive visualization for letting users explore the status of the classifier in context of the labeled documents, as well as for judging the quality of the classifier in iterative feedback loops. We see our work as a step towards introducing user controlled classification methods in addition to text search and filtering for increasing recall in analytics scenarios involving large corpora. PMID:26357193
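    To make the active-learning loop described above concrete, the toy sketch below shows uncertainty sampling with scikit-learn: the classifier is retrained after each "analyst" label, and the next document queried is the one the current classifier is least sure about. This is an illustration of the general technique only, not the systems compared in the user study; corpus, labels, and round counts are invented.

    ```python
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    docs = ["tax filing deadline", "quarterly budget report", "payroll expense summary",
            "malware detected on host", "phishing email reported", "firewall intrusion alert"]
    labels = np.array([0, 0, 0, 1, 1, 1])   # hidden ground truth, revealed only when "labeled"

    X = TfidfVectorizer().fit_transform(docs)
    labeled = [0, 3]                          # start with one labeled example per class
    pool = [i for i in range(len(docs)) if i not in labeled]

    for _ in range(3):                        # three interactive feedback rounds
        clf = LogisticRegression().fit(X[labeled], labels[labeled])
        proba = clf.predict_proba(X[pool])
        uncertainty = 1.0 - proba.max(axis=1)
        query = pool[int(np.argmax(uncertainty))]   # ask the analyst about this document
        labeled.append(query)
        pool.remove(query)
    ```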

  1. A three-parameter model for classifying anurans into four genera based on advertisement calls.

    PubMed

    Gingras, Bruno; Fitch, William Tecumseh

    2013-01-01

    The vocalizations of anurans are innate in structure and may therefore contain indicators of phylogenetic history. Thus, advertisement calls of species which are more closely related phylogenetically are predicted to be more similar than those of distant species. This hypothesis was evaluated by comparing several widely used machine-learning algorithms. Recordings of advertisement calls from 142 species belonging to four genera were analyzed. A logistic regression model, using mean values for dominant frequency, coefficient of variation of root-mean square energy, and spectral flux, correctly classified advertisement calls with regard to genus with an accuracy above 70%. Similar accuracy rates were obtained using these parameters with a support vector machine model, a K-nearest neighbor algorithm, and a multivariate Gaussian distribution classifier, whereas a Gaussian mixture model performed slightly worse. In contrast, models based on mel-frequency cepstral coefficients did not fare as well. Comparable accuracy levels were obtained on out-of-sample recordings from 52 of the 142 original species. The results suggest that a combination of low-level acoustic attributes is sufficient to discriminate efficiently between the vocalizations of these four genera, thus supporting the initial premise and validating the use of high-throughput algorithms on animal vocalizations to evaluate phylogenetic hypotheses. PMID:23297926
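    A model of the kind described, multinomial logistic regression on three acoustic features, can be sketched as follows with scikit-learn. The feature values and genus labels below are made-up placeholders, not the study's measurements.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Columns: dominant frequency (Hz), CV of RMS energy, spectral flux (placeholder values).
    X = np.array([[2500, 0.30, 0.12], [2700, 0.28, 0.10], [900, 0.55, 0.40],
                  [1100, 0.50, 0.35], [4200, 0.20, 0.22], [4000, 0.22, 0.25]])
    y = np.array(["genusA", "genusA", "genusB", "genusB", "genusC", "genusC"])

    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X, y)
    print(model.predict([[3000, 0.25, 0.15]]))   # predicted genus for a new call
    ```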

  2. An Automated Neural Network Cloud Classifier for Use over Land and Ocean Surfaces.

    NASA Astrophysics Data System (ADS)

    Miller, Shawn W.; Emery, William J.

    1997-10-01

    An automated neural network cloud classifier that functions over both land and ocean backgrounds is presented. Motivated by the development of a combined visible, infrared, and microwave rain-rate retrieval algorithm for use with data from the 1997 Tropical Rainfall Measuring Mission (TRMM), an automated cloud classification technique is sought to discern different types of clouds and, hence, different types of precipitating systems from Advanced Very High Resolution Radiometer (AVHRR) type imagery. When this technique is applied to TRMM visible-infrared imagery, it will allow the choice of a passive microwave rain-rate algorithm, which performs well for the observed precipitation type, theoretically increasing accuracy at the instantaneous level when compared with the use of any single microwave algorithm. A neural network classifier, selected because of the strengths of neural networks with respect to within-class variability and nonnormal cluster distributions, is developed, trained, and tested on AVHRR data received from three different polar-orbiting satellites and spanning the continental United States and adjacent waters, as well as portions of the Tropics from the Tropical Ocean and Global Atmosphere Coupled Ocean-Atmosphere Response Experiment (TOGA COARE). The results are analyzed and suggestions are made for future work on this technique. The network selected the correct class for 96% of the training samples and 82% of the test samples, indicating that this type of approach to automated cloud classification holds considerable promise and is worthy of additional research and refinement.

  3. Sensitivity of Satellite-Based Skin Temperature to Different Surface Emissivity and NWP Reanalysis Sources Demonstrated Using a Single-Channel, Viewing-Angle-Corrected Retrieval Algorithm

    NASA Astrophysics Data System (ADS)

    Scarino, B. R.; Minnis, P.; Yost, C. R.; Chee, T.; Palikonda, R.

    2015-12-01

    Single-channel algorithms for satellite thermal-infrared- (TIR-) derived land and sea surface skin temperature (LST and SST) are advantageous in that they can be easily applied to a variety of satellite sensors. They can also accommodate decade-spanning instrument series, particularly for periods when split-window capabilities are not available. However, the benefit of one unified retrieval methodology for all sensors comes at the cost of critical sensitivity to surface emissivity (ɛs) and atmospheric transmittance estimation. It has been demonstrated that as little as 0.01 variance in ɛs can amount to more than a 0.5-K adjustment in retrieved LST values. Atmospheric transmittance requires calculations that employ vertical profiles of temperature and humidity from numerical weather prediction (NWP) models. Selection of a given NWP model can significantly affect LST and SST agreement relative to their respective validation sources. Thus, it is necessary to understand the accuracies of the retrievals for various NWP models to ensure the best LST/SST retrievals. The sensitivities of the single-channel retrievals to surface emittance and NWP profiles are investigated using NASA Langley historic land and ocean clear-sky skin temperature (Ts) values derived from high-resolution 11-μm TIR brightness temperature measured from geostationary satellites (GEOSat) and Advanced Very High Resolution Radiometers (AVHRR). It is shown that mean GEOSat-derived, anisotropy-corrected LST can vary by up to ±0.8 K depending on whether CERES or MODIS ɛs sources are used. Furthermore, the use of either NOAA Global Forecast System (GFS) or NASA Goddard Modern-Era Retrospective Analysis for Research and Applications (MERRA) for the radiative transfer model initial atmospheric state can account for more than 0.5-K variation in mean Ts. The results are compared to measurements from the Surface Radiation Budget Network (SURFRAD), an Atmospheric Radiation Measurement (ARM) Program ground

  4. Emergent behaviors of classifier systems

    SciTech Connect

    Forrest, S.; Miller, J.H.

    1989-01-01

    This paper discusses some examples of emergent behavior in classifier systems, describes some recently developed methods for studying them based on dynamical systems theory, and presents some initial results produced by the methodology. The goal of this work is to find techniques for noticing when interesting emergent behaviors of classifier systems emerge, to study how such behaviors might emerge over time, and to make suggestions for designing classifier systems that exhibit preferred behaviors. 20 refs., 1 fig.

  5. Classifying Objects Of Continuous Feature Variability: When Do We Stop Classifying?

    NASA Astrophysics Data System (ADS)

    Okagaki, Takashi

    1988-10-01

    Pattern recognition by a computer assumes that there is a correct answer in classifying the objects, to which we can refer when judging the correctness of recognition. Classification of a set of objects may have absolutely correct answers when the objects are artifacts (e.g. bolts vs nuts) or highly evolved biological species. However, classification of many other objects is arbitrary (e.g. color, clouds), and is frequently subject to cultural bias. For instance, traffic lights consist of red, yellow and green in the U.S.A.; they are perceived as red, yellow and blue by Japanese. When human bias is involved in classification, a natural solution is to set up a panel of human "experts", and the consensus of the panel is assumed to be the correct classification. For instance, expert interior decorators can define the classification of different colors and hues, and the performance of a machine is tested against the reference set provided by the human experts.

  6. Multiconlitron: a general piecewise linear classifier.

    PubMed

    Yujian, Li; Bo, Liu; Xinwu, Yang; Yaozong, Fu; Houjun, Li

    2011-02-01

    Based on the "convexly separable" concept, we present a solid geometric theory and a new general framework to design piecewise linear classifiers for two arbitrarily complicated nonintersecting classes by using a "multiconlitron," which is a union of multiple conlitrons that comprise a set of hyperplanes or linear functions surrounding a convex region for separating two convexly separable datasets. We propose a new iterative algorithm called the cross distance minimization algorithm (CDMA) to compute hard margin non-kernel support vector machines (SVMs) via the nearest point pair between two convex polytopes. Using CDMA, we derive two new algorithms, i.e., the support conlitron algorithm (SCA) and the support multiconlitron algorithm (SMA) to construct support conlitrons and support multiconlitrons, respectively, which are unique and can separate two classes by a maximum margin as in an SVM. Comparative experiments show that SMA can outperform linear SVM on many of the selected databases and provide similar results to radial basis function SVM on some of them, while SCA performs better than linear SVM on three out of four applicable databases. Other experiments show that SMA and SCA may be further improved to draw more potential in the new research direction of piecewise linear learning. PMID:21138800

  7. The Effects of Observation of Learn Units during Reinforcement and Correction Conditions on the Rate of Learning Math Algorithms by Fifth Grade Students

    ERIC Educational Resources Information Center

    Neu, Jessica Adele

    2013-01-01

    I conducted two studies on the comparative effects of the observation of learn units during (a) reinforcement or (b) correction conditions on the acquisition of math objectives. The dependent variables were the within-session cumulative numbers of correct responses emitted during observational sessions. The independent variables were the…

  8. Learnability of min-max pattern classifiers

    NASA Astrophysics Data System (ADS)

    Yang, Ping-Fai; Maragos, Petros

    1991-11-01

    This paper introduces the class of thresholded min-max functions and studies their learning under the probably approximately correct (PAC) model introduced by Valiant. These functions can be used as pattern classifiers of both real-valued and binary-valued feature vectors. They are a lattice-theoretic generalization of Boolean functions and are also related to three-layer perceptrons and morphological signal operators. Several subclasses of the thresholded min-max functions are shown to be learnable under the PAC model.

  9. Automatic red eye correction and its quality metric

    NASA Astrophysics Data System (ADS)

    Safonov, Ilia V.; Rychagov, Michael N.; Kang, KiMin; Kim, Sang Ho

    2008-01-01

    Red-eye artifacts are a troublesome defect of amateur photos. Correcting red eyes during printing without user intervention, and thereby making photos more pleasant for the observer, are important tasks. A novel, efficient technique for automatic correction of red eyes, aimed at photo printers, is proposed. The algorithm is independent of face orientation and capable of detecting paired red eyes as well as single red eyes. The approach is based on 3D tables of typicalness levels for red-eye and human skin tones, and on directional edge-detection filters for processing of the redness image. Machine learning is applied for feature selection. For classification of red-eye regions, a cascade of classifiers including a Gentle AdaBoost committee of Classification and Regression Trees (CART) is applied. The retouching stage includes desaturation, darkening and blending with the initial image. Several implementations of the approach are possible, trading off detection and correction quality, processing time, and memory volume. A numeric quality criterion for automatic red-eye correction is proposed. This quality metric is constructed by applying the Analytic Hierarchy Process (AHP) to consumer opinions about correction outcomes. The proposed numeric metric helped to choose algorithm parameters via an optimization procedure. Experimental results demonstrate the high accuracy and efficiency of the proposed algorithm in comparison with existing solutions.

  10. Remote Sensing Data Binary Classification Using Boosting with Simple Classifiers

    NASA Astrophysics Data System (ADS)

    Nowakowski, Artur

    2015-10-01

    Boosting is a classification method which has been proven useful in non-satellite image processing while it is still new to satellite remote sensing. It is a meta-algorithm, which builds a strong classifier from many weak ones in an iterative way. We adapt the AdaBoost.M1 boosting algorithm in a new land cover classification scenario based on the utilization of very simple threshold classifiers employing spectral and contextual information. Thresholds for the classifiers are automatically calculated adaptively to data statistics. The proposed method is employed for the exemplary problem of artificial area identification. Classification of IKONOS multispectral data results in short computational time and an overall accuracy of 94.4%, compared to 94.0% obtained by using AdaBoost.M1 with trees and 93.8% achieved using Random Forest. The influence of a manipulation of the final threshold of the strong classifier on classification results is reported.
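    The following sketch shows boosting with very simple threshold classifiers (decision stumps) in the spirit of AdaBoost.M1. It is an approximation, not the authors' remote-sensing pipeline: scikit-learn's AdaBoostClassifier implements the closely related SAMME family, its default weak learner is a depth-1 decision tree, and the synthetic data stands in for per-pixel spectral/contextual features.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import train_test_split

    # Stand-in for per-pixel features (spectral bands plus contextual statistics).
    X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    # The default base learner is a single-threshold decision stump (max_depth=1).
    boost = AdaBoostClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
    print("test accuracy:", boost.score(Xte, yte))
    ```

    Lowering or raising the decision threshold of the final strong classifier, as discussed in the abstract, trades detection rate against false alarms.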

  11. Structure-based algorithms for microvessel classification

    PubMed Central

    Smith, Amy F.; Secomb, Timothy W.; Pries, Axel R.; Smith, Nicolas P.; Shipley, Rebecca J.

    2014-01-01

    Objective: Recent developments in high-resolution imaging techniques have enabled digital reconstruction of three-dimensional sections of microvascular networks down to the capillary scale. To better interpret these large data sets, our goal is to distinguish branching trees of arterioles and venules from capillaries. Methods: Two novel algorithms are presented for classifying vessels in microvascular anatomical data sets without requiring flow information. The algorithms are compared with a classification based on observed flow directions (considered the gold standard), and with an existing resistance-based method that relies only on structural data. Results: The first algorithm, developed for networks with one arteriolar and one venular tree, performs well in identifying arterioles and venules and is robust to parameter changes, but incorrectly labels a significant number of capillaries as arterioles or venules. The second algorithm, developed for networks with multiple inlets and outlets, correctly identifies more arterioles and venules, but is more sensitive to parameter changes. Conclusions: The algorithms presented here can be used to classify microvessels in large microvascular data sets lacking flow information. This provides a basis for analyzing the distinct geometrical properties and modelling the functional behavior of arterioles, capillaries and venules. PMID:25403335

  12. ASE Floodwater Classifier Development for EO-1 Hyperion Imagery

    NASA Technical Reports Server (NTRS)

    Ip, Felipe; Dohm, J. M.; Baker, V. R.; Doggett, T.; Davies, A. G.; Castano, B.; Chien, S.; Cichy, B.; Greeley, R.; Sherwood, R.

    2004-01-01

    The objective of this investigation is to develop a prototype floodwater detection algorithm for Hyperion imagery. It will be run autonomously onboard the EO-1 spacecraft under the Autonomous Sciencecraft Experiment (ASE). This effort resulted in the development of two classifiers for floodwater, one of several classifier types that have been developed and will be uploaded to EO-1 in early 2004 in order to detect change related to transient processes such as volcanism, flooding, and ice formation and retreat.

  13. DECISION TREE CLASSIFIERS FOR STAR/GALAXY SEPARATION

    SciTech Connect

    Vasconcellos, E. C.; Ruiz, R. S. R.; De Carvalho, R. R.; Capelato, H. V.; Gal, R. R.; LaBarbera, F. L.; Frago Campos Velho, H.; Trevisan, M.

    2011-06-15

    We study the star/galaxy classification efficiency of 13 different decision tree algorithms applied to photometric objects in the Sloan Digital Sky Survey Data Release Seven (SDSS-DR7). Each algorithm is defined by a set of parameters which, when varied, produce different final classification trees. We extensively explore the parameter space of each algorithm, using the set of 884,126 SDSS objects with spectroscopic data as the training set. The efficiency of star-galaxy separation is measured using the completeness function. We find that the Functional Tree algorithm (FT) yields the best results as measured by the mean completeness in two magnitude intervals: 14 ≤ r ≤ 21 (85.2%) and r ≥ 19 (82.1%). We compare the performance of the tree generated with the optimal FT configuration to the classifications provided by the SDSS parametric classifier, 2DPHOT, and Ball et al. We find that our FT classifier is comparable to or better in completeness over the full magnitude range 15 ≤ r ≤ 21, with much lower contamination than all but the Ball et al. classifier. At the faintest magnitudes (r > 19), our classifier is the only one that maintains high completeness (>80%) while simultaneously achieving low contamination (~2.5%). We also examine the SDSS parametric classifier (psfMag - modelMag) to see if the dividing line between stars and galaxies can be adjusted to improve the classifier. We find that currently stars in close pairs are often misclassified as galaxies, and suggest a new cut to improve the classifier. Finally, we apply our FT classifier to separate stars from galaxies in the full set of 69,545,326 SDSS photometric objects in the magnitude range 14 ≤ r ≤ 21.

  14. Classifying Multi-year Land Use and Land Cover using Deep Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Seo, B.

    2015-12-01

    Cultivated ecosystems constitute a particularly frequent form of human land use. Long-term management of a cultivated ecosystem requires knowledge of temporal changes in the land use and land cover (LULC) of the target system. Land use and land cover change (LUCC) in agricultural ecosystems is often rapid and unexpected, so longitudinal LULC data are particularly needed to examine trends in the ecosystem functions and ecosystem services of the target system. Multi-temporal classification of LULC in complex heterogeneous landscapes remains a challenge. Agricultural landscapes are often made up of a mosaic of numerous LULC classes, so spatial heterogeneity is large; temporal and spatial variation within a LULC class is also large. Under such circumstances, standard classifiers fail to identify the LULC classes correctly due to the heterogeneity of the target classes: because most standard classifiers search for a specific pattern of features for a class, they fail to detect classes with noisy and/or transformed feature data. Recently, deep learning algorithms have emerged in the machine learning community and shown superior performance on a variety of tasks, including image classification and object recognition. In this paper, we propose to use convolutional neural networks (CNN) to learn from multi-spectral data to classify agricultural LULC types. Based on multi-spectral satellite data, we attempted to classify agricultural LULC classes in the Soyang watershed, South Korea, for the three-year study period (2009-2011). The classification performance of support vector machine (SVM) and CNN classifiers was compared for different years. Preliminary results demonstrate that the proposed method can improve classification performance compared to the SVM classifier. The SVM classifier failed to identify classes when trained on one year to predict another year, whilst the CNN could reconstruct LULC maps of the catchment over the study period.
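    A small CNN of the general kind described might look like the sketch below (assuming TensorFlow/Keras is available). It maps multi-spectral image patches to LULC classes; the patch size, band count, and class count are placeholders, not the values used in the study.

    ```python
    import tensorflow as tf

    n_bands, patch, n_classes = 6, 16, 8   # hypothetical dimensions

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(patch, patch, n_bands)),
        tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_patches, train_labels, validation_data=(val_patches, val_labels), epochs=20)
    ```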

  15. Pattern classifier for health monitoring of helicopter gearboxes

    NASA Technical Reports Server (NTRS)

    Chin, Hsinyung; Danai, Kourosh; Lewicki, David G.

    1993-01-01

    The application of a newly developed diagnostic method to a helicopter gearbox is demonstrated. This method is a pattern classifier which uses a multi-valued influence matrix (MVIM) as its diagnostic model. The method benefits from a fast learning algorithm, based on error feedback, that enables it to estimate gearbox health from a small set of measurement-fault data. The MVIM method can also assess the diagnosability of the system and the variability of the fault signatures as the basis for improving fault signatures. This method was tested on vibration signals reflecting various faults in an OH-58A main rotor transmission gearbox. The vibration signals were then digitized and processed by a vibration signal analyzer to enhance and extract various features of the vibration data. The parameters obtained from this analyzer were utilized to train and test the performance of the MVIM method in both detection and diagnosis. The results indicate that the MVIM method provided excellent detection results when the full range of fault effects on the measurements was included in training, and it had a correct diagnostic rate of 95 percent when the faults were included in training.

  16. Innovative use of DSP technology in space: FORTE event classifier

    SciTech Connect

    Briles, S.; Moore, K. Jones, R.; Klingner, P.; Neagley, D.; Caffrey, M.; Henneke, K.; Spurgen, W.; Blain, P.

    1994-08-01

    The Fast On-Orbit Recording of Transient Events (FORTE) small satellite will field a digital signal processor (DSP) experiment for the purpose of classifying radio-frequency (rf) transient signals propagating through the earth's ionosphere. Designated the Event Classifier experiment, this DSP experiment uses a single Texas Instruments SMJ320C30 DSP to execute preprocessing, feature extraction, and classification algorithms on down-converted, digitized, and buffered rf transient signals in the frequency range of 30 to 300 MHz. A radiation-hardened microcontroller monitors DSP abnormalities and supervises spacecraft command communications. On-orbit evaluation of multiple algorithms is supported by the Event Classifier architecture. Ground-based commands determine the subset and sequence of algorithms executed to classify a captured time series. Conventional neural network classification algorithms will be some of the classification techniques implemented on-board FORTE while in a low-earth orbit. Results of all experiments, after being stored in DSP flash memory, will be transmitted through the spacecraft to ground stations. The Event Classifier is a versatile and fault-tolerant experiment that is an important new space-based application of DSP technology.

  17. An ensemble of SVM classifiers based on gene pairs.

    PubMed

    Tong, Muchenxuan; Liu, Kun-Hong; Xu, Chungui; Ju, Wenbin

    2013-07-01

    In this paper, a genetic algorithm (GA) based ensemble support vector machine (SVM) classifier built on gene pairs (GA-ESP) is proposed. The SVMs (base classifiers of the ensemble system) are trained on different informative gene pairs. These gene pairs are selected by the top scoring pair (TSP) criterion. Each of these pairs projects the original microarray expression onto a 2-D space. Extensive permutation of gene pairs may reveal more useful information and potentially lead to an ensemble classifier with satisfactory accuracy and interpretability. GA is further applied to select an optimized combination of base classifiers. The effectiveness of the GA-ESP classifier is evaluated on both binary-class and multi-class datasets. PMID:23668348
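    The ensemble idea, one SVM per informative gene pair combined by voting, can be sketched as follows. This is a simplified illustration only: the top-scoring-pair selection and the genetic algorithm that prunes the ensemble are omitted, the gene pairs are chosen arbitrarily, and the expression matrix is a random placeholder.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def fit_pair_ensemble(X, y, gene_pairs):
        """Train one SVM per gene pair; X is samples x genes."""
        return [(i, j, SVC(kernel="linear").fit(X[:, [i, j]], y)) for i, j in gene_pairs]

    def predict_pair_ensemble(ensemble, X):
        votes = np.array([clf.predict(X[:, [i, j]]) for i, j, clf in ensemble])
        # Majority vote over base classifiers (assumes integer class labels).
        return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(60, 100)), rng.integers(0, 2, size=60)   # toy expression data
    ensemble = fit_pair_ensemble(X, y, gene_pairs=[(0, 1), (5, 9), (20, 33)])
    print(predict_pair_ensemble(ensemble, X[:5]))
    ```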

  18. An ensemble of dissimilarity based classifiers for Mackerel gender determination

    NASA Astrophysics Data System (ADS)

    Blanco, A.; Rodriguez, R.; Martinez-Maranon, I.

    2014-03-01

    Mackerel is an undervalued fish captured by European fishing vessels. One way to add value to this species is to classify individuals by sex. Colour measurements were performed on gonads extracted from fresh and defrosted Mackerel females and males to find differences between the sexes. Several linear and non-linear classifiers, such as Support Vector Machines (SVM), k-Nearest Neighbors (k-NN) or Diagonal Linear Discriminant Analysis (DLDA), can be applied to this problem. However, they are usually based on Euclidean distances that fail to reflect the sample proximities accurately. Classifiers based on non-Euclidean dissimilarities misclassify a different set of patterns. We combine different kinds of dissimilarity-based classifiers. The diversity is induced by considering a set of complementary dissimilarities for each model. The experimental results suggest that our algorithm helps to improve classifiers based on a single dissimilarity.

  19. Classifying Chondrules Based on Cathodoluminescence

    NASA Astrophysics Data System (ADS)

    Cristarela, T. C.; Sears, D. W.

    2011-03-01

    Sears et al. (1991) proposed a scheme to classify chondrules based on cathodoluminescence color and electron microprobe analysis. This research evaluates that scheme and the criticisms received from Grossman and Brearley (2005).

  20. RECIPES FOR WRITING ALGORITHMS FOR ATMOSPHERIC CORRECTIONS AND TEMPERATURE/EMISSIVITY SEPARATIONS IN THE THERMAL REGIME FOR A MULTI-SPECTRAL SENSOR

    SciTech Connect

    C. BOREL; W. CLODIUS

    2001-04-01

    This paper discusses the algorithms created for the Multi-spectral Thermal Imager (MTI) to retrieve temperatures and emissivities. Recipes for creating the physics-based water temperature retrieval and the emissivity of water surfaces are described. A simple radiative transfer model for multi-spectral sensors is developed. A method for creating look-up tables and the criterion for finding the optimum water temperature are covered. Practical aspects such as conversion from band-averaged radiances to brightness temperatures and the effects of variations in the spectral response on the atmospheric transmission are discussed. A recipe for a temperature/emissivity separation algorithm when water surfaces are present is given. Results of retrievals of skin water temperatures, compared with in-situ measurements of the bulk water temperature at two locations, are shown.

  1. Application of successive test feature classifier to dynamic recognition problems

    NASA Astrophysics Data System (ADS)

    Sakata, Yukinobu; Kaneko, Shun'ichi; Tanaka, Takayuki

    2005-12-01

    A novel successive learning algorithm is proposed for efficiently handling sequentially provided training data, based on the Test Feature Classifier (TFC), which is non-parametric and effective even for small data sets. We previously proposed the TFC, which utilizes prime test features (PTFs), combinatorial feature subsets, to obtain excellent performance. The TFC has the following characteristics: non-parametric learning and no misclassification of training data. Its effectiveness has been confirmed in several real-world applications. However, the TFC must be reconstructed whenever any subset of the data changes. In successive learning, after recognition of a set of unknown objects, they are fed into the classifier in order to obtain a modified classifier. We propose an efficient algorithm for reconstructing the PTFs, formalized for the cases of addition and deletion of training data. In a verification experiment, the successive learning algorithm saved about 70% of the total computational cost in comparison with batch learning. We applied the proposed successive TFC to dynamic recognition problems in which the characteristics of the training data change over time, and examined its behavior in fundamental experiments. The Support Vector Machine (SVM), which is well established both algorithmically and in practical applications, was compared with the proposed successive TFC, and the successive TFC showed high performance compared with the SVM.

  2. Scoring and Classifying Examinees Using Measurement Decision Theory

    ERIC Educational Resources Information Center

    Rudner, Lawrence M.

    2009-01-01

    This paper describes and evaluates the use of measurement decision theory (MDT) to classify examinees based on their item response patterns. The model has a simple framework that starts with the conditional probabilities of examinees in each category or mastery state responding correctly to each item. The presented evaluation investigates: (1) the…

  3. Robust Algorithm for Systematic Classification of Malaria Late Treatment Failures as Recrudescence or Reinfection Using Microsatellite Genotyping.

    PubMed

    Plucinski, Mateusz M; Morton, Lindsay; Bushman, Mary; Dimbu, Pedro Rafael; Udhayakumar, Venkatachalam

    2015-10-01

    Routine therapeutic efficacy monitoring to measure the response to antimalarial treatment is a cornerstone of malaria control. To correctly measure drug efficacy, therapeutic efficacy studies require genotyping parasites from late treatment failures to differentiate between recrudescent infections and reinfections. However, there is a lack of statistical methods to systematically classify late treatment failures from genotyping data. A Bayesian algorithm was developed to estimate the posterior probability of late treatment failure being the result of a recrudescent infection from microsatellite genotyping data. The algorithm was implemented using a Monte Carlo Markov chain approach and was used to classify late treatment failures using published microsatellite data from therapeutic efficacy studies in Ethiopia and Angola. The algorithm classified 85% of the Ethiopian and 95% of the Angolan late treatment failures as either likely reinfection or likely recrudescence, defined as a posterior probability of recrudescence of <0.1 or >0.9, respectively. The adjusted efficacies calculated using the new algorithm differed from efficacies estimated using commonly used methods for differentiating recrudescence from reinfection. In a high-transmission setting such as Angola, as few as 15 samples needed to be genotyped in order to have enough power to correctly classify treatment failures. Analysis of microsatellite genotyping data for differentiating between recrudescence and reinfection benefits from an approach that both systematically classifies late treatment failures and estimates the uncertainty of these classifications. Researchers analyzing genotyping data from antimalarial therapeutic efficacy monitoring are urged to publish their raw genetic data and to estimate the uncertainty around their classification. PMID:26195521
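    As a toy illustration of the Bayesian idea only (the published method is a richer Markov chain Monte Carlo model over microsatellite allele frequencies), the sketch below computes a posterior probability of recrudescence from per-locus allele matches under an independent-locus model. All probabilities and thresholds here are invented for illustration, except the 0.1/0.9 classification cutoffs mentioned in the abstract.

    ```python
    import numpy as np

    def posterior_recrudescence(matches, p_match_recrud=0.95, p_match_reinf=0.2, prior=0.5):
        """matches: one boolean per locus (True = same allele on day 0 and day of failure)."""
        matches = np.asarray(matches, dtype=bool)
        ll_rec = np.prod(np.where(matches, p_match_recrud, 1 - p_match_recrud))
        ll_new = np.prod(np.where(matches, p_match_reinf, 1 - p_match_reinf))
        return ll_rec * prior / (ll_rec * prior + ll_new * (1 - prior))

    p = posterior_recrudescence([True, True, True, False, True, True, True])
    # Classify only when the posterior is decisive (>0.9 recrudescence, <0.1 reinfection).
    print(round(p, 3))
    ```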

  4. Evaluating Multimembership Classifiers: A Methodology and Application to the MEDAS Diagnostic System.

    PubMed

    Ben-Bassat, M; Campell, D B; Macneil, A R; Weil, M H

    1983-02-01

    Performance evaluation measures for multimembership classifiers are presented and applied in a retrospective study on the diagnostic performance of the MEDAS (Medical Emergency Decision Assistance System) system. Admission and discharge diagnoses for 122 patients with one or more of 26 distinct disorders in five major disorder categories were gathered. The average number of disorders per patient was 2, with 36 patients (29.5 percent) having 3 or more disorders simultaneously. The features (symptoms, signs, and laboratory data) available at admission were entered into a multimembership Bayesian pattern recognition algorithm which permits diagnosis of multiple disorders. When the top five computer-ranked diagnoses were considered, all of the correct diagnoses for 86.1 percent of the patients were displayed by the fifth position. In 71.6 percent of these cases, no false diagnosis preceded any correct diagnosis. In ten cases a discharge diagnosis which was suggested by the available findings was omitted by the admitting physician. In six of these ten cases, the overlooked diagnoses appeared on the computer-ranked list above all false diagnoses. Considering the urgency of diagnosis in the Emergency Department, the high uncertainty involved due to the limited availability of data, and the high frequency with which multiple disorders coexist, this limited study encourages our confidence in the MEDAS knowledge base and algorithm as a useful diagnostic support tool. PMID:21869106

  5. Method for classifying ceramic powder

    NASA Technical Reports Server (NTRS)

    Takabe, K.

    1983-01-01

    Under the invented method, powder A of particles of less than 10 microns, and carrier powder B, whose average particle diameter is more than five times that of powder A, are premixed so that the powder is less than 40 wt.% of the total mixture, before classifying.

  6. The Classified Catalogue: LU Style

    ERIC Educational Resources Information Center

    Wong, C.-C.; Mount, Joan

    1971-01-01

    The Laurentian University Library has evolved a bilingual classified catalogue consisting of a public shelflist supplemented by a French/English subject index. This produces an effective tool for locating all materials pertaining to a given topic in either or both of two languages. (Author/NH)

  7. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  8. Unresolved Galaxy Classifier for ESA's Gaia Mission

    NASA Astrophysics Data System (ADS)

    Bellas-Velidis, I.; Kontizas, M.; Livanou, E.; Tsalmantza, P.

    2010-07-01

    Unresolved Galaxy Classifier (UGC), a software package for the ground-based pipeline of ESA’s Gaia mission is presented. It aims at analyzing Gaia BP/RP spectra of unresolved galaxies, to provide taxonomic classification and specific parameters estimation. The UGC algorithm is based on Support Vector Machines, a supervised learning technique. The software is implemented in JAVA. An offline UGC-learning module provides functions for SVM-model training. Once trained, the set of models can be repeatedly applied to unknown galaxy spectra by the pipeline’s UGC-application module. Tests with a library of BP/RP simulated galaxy spectra show a very good performance of UGC.

  9. A Systematic Comparison of Supervised Classifiers

    PubMed Central

    Amancio, Diego Raphael; Comin, Cesar Henrique; Casanova, Dalcimar; Travieso, Gonzalo; Bruno, Odemir Martinez; Rodrigues, Francisco Aparecido; da Fontoura Costa, Luciano

    2014-01-01

    Pattern recognition has been employed in a myriad of industrial, commercial and academic applications. Many techniques have been devised to tackle such a diversity of applications. Despite the long tradition of pattern recognition research, there is no technique that yields the best classification in all scenarios. Therefore, as many techniques as possible should be considered in high accuracy applications. Typical related works either focus on the performance of a given algorithm or compare various classification methods. On many occasions, however, researchers who are not experts in the field of machine learning have to deal with practical classification tasks without an in-depth knowledge about the underlying parameters. Actually, the adequate choice of classifiers and parameters in such practical circumstances constitutes a long-standing problem and is one of the subjects of the current paper. We carried out a performance study of nine well-known classifiers implemented in the Weka framework and compared the influence of the parameter configurations on the accuracy. The default configuration of parameters in Weka was found to provide near optimal performance for most cases, not including methods such as the support vector machine (SVM). In addition, the k-nearest neighbor method frequently allowed the best accuracy. In certain conditions, it was possible to improve the quality of SVM by more than 20% with respect to their default parameter configuration. PMID:24763312

  10. Objectively classifying Southern Hemisphere extratropical cyclones

    NASA Astrophysics Data System (ADS)

    Catto, Jennifer

    2016-04-01

    There has been a long tradition in attempting to separate extratropical cyclones into different classes depending on their cloud signatures, airflows, synoptic precursors, or upper-level flow features. Depending on these features, the cyclones may have different impacts, for example in their precipitation intensity. It is important, therefore, to understand how the distribution of different cyclone classes may change in the future. Many of the previous classifications have been performed manually. In order to be able to evaluate climate models and understand how extratropical cyclones might change in the future, we need to be able to use an automated method to classify cyclones. Extratropical cyclones have been identified in the Southern Hemisphere from the ERA-Interim reanalysis dataset with a commonly used identification and tracking algorithm that employs 850 hPa relative vorticity. A clustering method applied to large-scale fields from ERA-Interim at the time of cyclone genesis (when the cyclone is first detected), has been used to objectively classify identified cyclones. The results are compared to the manual classification of Sinclair and Revell (2000) and the four objectively identified classes shown in this presentation are found to match well. The relative importance of diabatic heating in the clusters is investigated, as well as the differing precipitation characteristics. The success of the objective classification shows its utility in climate model evaluation and climate change studies.

  11. Integrated One-Against-One Classifiers as Tools for Virtual Screening of Compound Databases: A Case Study with CNS Inhibitors.

    PubMed

    Jalali-Heravi, Mehdi; Mani-Varnosfaderani, Ahmad; Valadkhani, Abolfazl

    2013-08-01

    A total of 21 833 inhibitors of the central nervous system (CNS) were collected from Binding-database and analyzed using discriminant analysis (DA) techniques. A combination of genetic algorithm and quadratic discriminant analysis (GA-QDA) was proposed as a tool for the classification of molecules based on their therapeutic targets and activities. The results indicated that the one-against-one (OAO) QDA classifiers correctly separate the molecules based on their therapeutic targets and are comparable with support vector machines. These classifiers help in charting the chemical space of the CNS inhibitors and finding specific subspaces occupied by particular classes of molecules. As a next step, the classification models were used as virtual filters for screening of random subsets of the PUBCHEM and ZINC databases. The calculated enrichment factors, together with the area under curve values of receiver operating characteristic curves, showed that these classifiers are good candidates to speed up the early stages of drug discovery projects. The "relative distances" of the centers of active classes of biosimilar molecules calculated by OAO classifiers were used as indices for sorting the compound databases. The results revealed that the multiclass classification models in this work circumvent the definition of inactive sets for virtual screening and are useful for compound retrieval analysis in chemoinformatics. PMID:27480066

  12. 48 CFR 1803.907 - Classified information.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 6 2014-10-01 2014-10-01 false Classified information... Whistleblower Protections 1803.907 Classified information. Nothing in this subpart provides any rights to disclose classified information not otherwise provided by law....

  13. 75 FR 705 - Classified National Security Information

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-05

    ... Executive Order 13526--Classified National Security Information Memorandum of December 29, 2009--Implementation of the Executive Order "Classified National Security Information" Order of December 29, 2009... Executive Order 13526 of December 29, 2009 Classified National Security Information This order prescribes...

  14. 76 FR 34761 - Classified National Security Information

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-14

    ... Classified National Security Information AGENCY: Marine Mammal Commission. ACTION: Notice. SUMMARY: This... information, as directed by Information Security Oversight Office regulations. FOR FURTHER INFORMATION CONTACT..., "Classified National Security Information," and 32 CFR part 2001, "Classified National Security...

  15. Fault tolerance of SVM algorithm for hyperspectral image

    NASA Astrophysics Data System (ADS)

    Cui, Yabo; Yuan, Zhengwu; Wu, Yuanfeng; Gao, Lianru; Zhang, Hao

    2015-10-01

    One of the most important tasks in analyzing hyperspectral image data is the classification process [1]. In general, in order to enhance classification accuracy, a data preprocessing step is usually adopted to remove noise from the data before classification. But for time-sensitive applications such as risk prevention and response, we hope that even when the data contain noise, the classifier can still appear to execute correctly from the user's perspective. As the most popular classifier, the Support Vector Machine (SVM) has been widely used for hyperspectral image classification and has proved to be a very promising technique in supervised classification [2]. In this paper, two experiments are performed to demonstrate that, for hyperspectral data with noise, if the noise is within a certain range, the SVM algorithm is still able to execute correctly from the user's perspective.

  16. Classifying seismic waveforms from scratch: a case study in the alpine environment

    NASA Astrophysics Data System (ADS)

    Hammer, C.; Ohrnberger, M.; Fäh, D.

    2013-01-01

    Nowadays, an increasing amount of seismic data is collected by daily observatory routines. The basic step for successfully analyzing those data is the correct detection of various event types. However, visual scanning is a time-consuming task. Applying standard detection techniques such as the STA/LTA trigger still requires manual control for classification. Here, we present a useful alternative. The incoming data stream is scanned automatically for events of interest. A stochastic classifier, called a hidden Markov model, is learned for each class of interest, enabling the recognition of highly variable waveforms. In contrast to other automatic techniques such as neural networks or support vector machines, the algorithm allows the classification to start from scratch as soon as interesting events are identified. Neither the tedious process of collecting training samples nor a time-consuming configuration of the classifier is required. An approach originally introduced for the volcanic task force action allows classifier properties to be learned from a single waveform example and some hours of background recording. Besides reducing the required workload, this also enables the detection of very rare events. The latter feature in particular provides a milestone for the use of seismic devices in alpine warning systems. Furthermore, the system offers the opportunity to flag new signal classes that have not been defined before. We demonstrate the application of the classification system using a data set from the Swiss Seismological Survey, achieving very high recognition rates. In detail, we document all refinements of the classifier, providing a step-by-step guide for the fast set-up of a well-working classification system.
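    A minimal sketch of HMM-based event classification is shown below. It assumes the third-party `hmmlearn` package is available; the waveform feature extraction is not shown, and the class names, state counts, and feature sequences are placeholders. One model is trained per event class, and an unknown record is assigned to the class whose model gives the highest log-likelihood.

    ```python
    import numpy as np
    from hmmlearn import hmm

    rng = np.random.default_rng(0)

    def train_class_model(feature_sequences, n_states=4):
        X = np.vstack(feature_sequences)                 # concatenate all sequences
        lengths = [len(s) for s in feature_sequences]
        model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        return model.fit(X, lengths)

    # Placeholder training data: a few feature sequences (frames x features) per class.
    classes = {
        "earthquake": [rng.normal(0.0, 1.0, size=(200, 6)) for _ in range(5)],
        "rockfall":   [rng.normal(0.5, 1.5, size=(150, 6)) for _ in range(5)],
    }
    models = {name: train_class_model(seqs) for name, seqs in classes.items()}

    unknown = rng.normal(0.4, 1.4, size=(180, 6))        # features of an unknown event
    scores = {name: m.score(unknown) for name, m in models.items()}
    print(max(scores, key=scores.get), scores)           # pick the most likely class
    ```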

  17. RFMirTarget: Predicting Human MicroRNA Target Genes with a Random Forest Classifier

    PubMed Central

    Mendoza, Mariana R.; da Fonseca, Guilherme C.; Loss-Morais, Guilherme; Alves, Ronnie; Margis, Rogerio; Bazzan, Ana L. C.

    2013-01-01

    MicroRNAs are key regulators of eukaryotic gene expression whose fundamental role has already been identified in many cell pathways. The correct identification of miRNA targets is still a major challenge in bioinformatics and has motivated the development of several computational methods to overcome inherent limitations of experimental analysis. Indeed, the best results reported so far in terms of specificity and sensitivity are associated with machine learning-based methods for microRNA-target prediction. Following this trend, in the current paper we discuss and explore a microRNA-target prediction method based on a random forest classifier, namely RFMirTarget. Despite their well-known robustness on general classification tasks, to the best of our knowledge, random forests have not been deeply explored in the specific context of predicting microRNA targets. Our framework first analyzes alignments between candidate microRNA-target pairs and extracts a set of structural, thermodynamics, alignment, seed and position-based features, upon which classification is performed. Experiments have shown that RFMirTarget outperforms several well-known classifiers with statistical significance, and that its performance is not impaired by the class imbalance problem or feature correlation. Moreover, comparing it against other algorithms for microRNA target prediction using independent test data sets from TarBase and starBase, we observe a very promising performance, with higher sensitivity in relation to other methods. Finally, tests performed with RFMirTarget show the benefits of feature selection even for a classifier with embedded feature importance analysis, and the consistency between the relevant features identified and important biological properties for effective microRNA-target gene alignment. PMID:23922946
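    A random forest of the kind described, together with its embedded feature-importance ranking, can be sketched as follows with scikit-learn. The feature extraction from candidate miRNA-target alignments is not reproduced here; the feature names are illustrative and the matrix is a random placeholder of the same general shape.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    feature_names = ["seed_matches", "free_energy", "alignment_score", "gc_content", "site_position"]
    X = rng.normal(size=(500, len(feature_names)))   # placeholder alignment-derived features
    y = rng.integers(0, 2, size=500)                 # toy labels: 1 = functional target

    rf = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=0)
    print("CV accuracy:", cross_val_score(rf, X, y, cv=5).mean())

    rf.fit(X, y)
    for name, imp in sorted(zip(feature_names, rf.feature_importances_), key=lambda t: -t[1]):
        print(f"{name}: {imp:.3f}")                  # embedded feature-importance ranking
    ```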

  18. Clustering signatures classify directed networks

    NASA Astrophysics Data System (ADS)

    Ahnert, S. E.; Fink, T. M. A.

    2008-09-01

    We use a clustering signature, based on a recently introduced generalization of the clustering coefficient to directed networks, to analyze 16 directed real-world networks of five different types: social networks, genetic transcription networks, word adjacency networks, food webs, and electric circuits. We show that these five classes of networks are cleanly separated in the space of clustering signatures due to the statistical properties of their local neighborhoods, demonstrating the usefulness of clustering signatures as a classifier of directed networks.

  19. Classifying auroras using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Rydesater, Peter; Brandstrom, Urban; Steen, Ake; Gustavsson, Bjorn

    1999-03-01

    In the Auroral Large Imaging System (ALIS) there is a need for stable methods for the analysis and classification of auroral images and of images containing, for example, mother-of-pearl clouds. This part of ALIS is called Selective Imaging Techniques (SIT) and is intended to sort out images of scientific interest. It is also used to determine what different auroral phenomena appear in the images and where. We discuss the SIT unit's main functionality, but this work concentrates mainly on how to find auroral arcs and how they are placed in images. Special care has been taken to make the algorithm robust, since it is going to be implemented in a SIT unit which will work automatically, often unsupervised, and to some extent control the data taking of ALIS. The method for finding auroral arcs is based on a local operator that detects intensity differences. This gives arc orientation values as a preprocessing step, which is fed to a neural network classifier. We show some preliminary results and discuss possibilities to use and improve this algorithm in the future SIT unit.

  20. A random forest classifier for lymph diseases.

    PubMed

    Azar, Ahmad Taher; Elshazly, Hanaa Ismail; Hassanien, Aboul Ella; Elkorany, Abeer Mohamed

    2014-02-01

    Machine learning-based classification techniques provide support for the decision-making process in many areas of health care, including diagnosis, prognosis, screening, etc. Feature selection (FS) is expected to improve classification performance, particularly in situations characterized by the high data dimensionality problem caused by relatively few training examples compared to a large number of measured features. In this paper, a random forest classifier (RFC) approach is proposed to diagnose lymph diseases. Focusing on feature selection, the first stage of the proposed system aims at constructing diverse feature selection algorithms such as genetic algorithm (GA), Principal Component Analysis (PCA), Relief-F, Fisher, Sequential Forward Floating Search (SFFS) and Sequential Backward Floating Search (SBFS) for reducing the dimension of the lymph diseases dataset. Switching from feature selection to model construction, in the second stage the obtained feature subsets are fed into the RFC for efficient classification. It was observed that GA-RFC achieved the highest classification accuracy of 92.2%. The dimension of the input feature space is reduced from eighteen to six features by using GA. PMID:24290902

  1. A pruned ensemble classifier for effective breast thermogram analysis.

    PubMed

    Krawczyk, Bartosz; Schaefer, Gerald

    2013-01-01

    Thermal infrared imaging has been shown to be useful for diagnosing breast cancer, since it is able to detect small tumors and hence can lead to earlier diagnosis. In this paper, we present a computer-aided diagnosis approach for analysing breast thermograms. We extract image features that describe bilateral differences of the breast regions in the thermogram, and then feed these features to an ensemble classifier. For the classification, we present an extension to the Under-Sampling Balanced Ensemble (USBE) algorithm. USBE addresses the problem of imbalanced class distribution that is common in medical decision making by training different classifiers on different subspaces, where each subspace is created so as to resemble a balanced classification problem. To combine the individual classifiers, we use a neural fuser based on discriminants and apply a classifier selection procedure based on a pairwise double-fault diversity measure to discard irrelevant and similar classifiers. We demonstrate that our approach works well, and that it statistically outperforms various other ensemble approaches including the original USBE algorithm. PMID:24111386

  2. Classifier-Guided Sampling for Complex Energy System Optimization

    SciTech Connect

    Backlund, Peter B.; Eddy, John P.

    2015-09-01

    This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS were developed and tested on a set of benchmark problems. As a domain-specific case study, CGS is used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
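    The filtering idea can be sketched as below. This is a rough illustration under stated assumptions, not the CGS implementation: a Gaussian naive Bayes classifier stands in for the report's Bayesian network classifier, the objective is a trivial placeholder for a costly simulation, and the batch sizes, labeling rule, and promising/non-promising threshold are invented.

    ```python
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)

    def expensive_objective(design):
        # Placeholder for a costly simulation of the candidate energy system design.
        return float(np.sum(design))

    evaluated_X, evaluated_y, best = [], [], -np.inf
    for generation in range(10):
        candidates = rng.integers(0, 4, size=(200, 10))          # discrete design variables
        if len(set(evaluated_y)) == 2:                           # classifier needs both labels seen
            clf = GaussianNB().fit(np.array(evaluated_X), np.array(evaluated_y))
            p_promising = clf.predict_proba(candidates)[:, 1]
            candidates = candidates[np.argsort(p_promising)[-20:]]  # evaluate only the most promising
        else:
            candidates = candidates[:20]
        for design in candidates:
            value = expensive_objective(design)
            best = max(best, value)
            evaluated_X.append(design)
            evaluated_y.append(int(value > 0.9 * best))          # crude "promising" label vs best-so-far
    print("best objective found:", best)
    ```

    The key design choice is that the classifier is only a pre-filter: the expensive objective is still the final arbiter, so classifier mistakes cost efficiency rather than correctness.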

  3. SAR terrain classifier and mapper of biophysical attributes

    NASA Technical Reports Server (NTRS)

    Ulaby, Fawwaz T.; Dobson, M. Craig; Pierce, Leland; Sarabandi, Kamal

    1993-01-01

    In preparation for the launch of SIR-C/X-SAR and design studies for future orbital SAR, a program has made considerable progress in the development of an SAR terrain classifier and algorithms for quantification of biophysical attributes. The goal of this program is to produce a generalized software package for terrain classification and estimation of biophysical attributes and to make this package available to the larger scientific community. The basic elements of the SAR (Synthetic Aperture Radar) terrain classifier are outlined. An SAR image is calibrated with respect to known system and processor gains and external targets (if available). A Level 1 classifier operates on the data to differentiate urban features, surfaces, and tall and short vegetation. Level 2 classifiers further subdivide these classes on the basis of structure. Finally, biophysical and geophysical inversions are applied to each class to estimate attributes of interest. The process used to develop the classifiers and inversions is shown. Radar scattering models developed from theory and from empirical data obtained by truck-mounted polarimeters and the JPL AirSAR are validated. The validated models are used in sensitivity studies to understand the roles of various scattering sources (i.e., surface, trunk, branches, etc.) in determining net backscatter. Model simulations of sigma (sup o) as functions of the wave parameters (lambda, polarization and angle of incidence) and the geophysical and biophysical attributes are used to develop robust classifiers. The classifiers are validated using available AirSAR data sets. Specific estimators are developed for each class on the basis of the scattering models and empirical data sets. The candidate algorithms are tested with the AirSAR data sets. The attributes of interest include: total above ground biomass, woody biomass, soil moisture and soil roughness.
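
    A hedged structural sketch of the two-level classification scheme described above follows. The features and class labels are synthetic stand-ins for calibrated SAR channels and terrain classes; the point is only the architecture, in which a Level 1 classifier assigns broad classes and per-class Level 2 classifiers refine them.

```python
# Two-level (coarse-to-fine) terrain classification architecture (illustrative sketch).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
n = 1000
X = rng.standard_normal((n, 6))                     # stand-in for multi-channel backscatter
level1 = rng.integers(0, 4, size=n)                 # e.g. urban, surface, tall veg, short veg
level2 = level1 * 2 + rng.integers(0, 2, size=n)    # structural subclasses within each class

coarse = RandomForestClassifier(random_state=0).fit(X, level1)
fine = {c: RandomForestClassifier(random_state=0).fit(X[level1 == c], level2[level1 == c])
        for c in np.unique(level1)}

def classify(pixels):
    """Run the Level 1 classifier, then refine each pixel with its class-specific Level 2 model."""
    broad = coarse.predict(pixels)
    refined = np.empty_like(broad)
    for c in np.unique(broad):
        mask = broad == c
        refined[mask] = fine[c].predict(pixels[mask])
    return broad, refined

print(classify(X[:5]))
```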

  4. Peteye detection and correction

    NASA Astrophysics Data System (ADS)

    Yen, Jonathan; Luo, Huitao; Tretter, Daniel

    2007-01-01

    Redeyes are caused by the camera flash light reflecting off the retina. Peteyes refer to similar artifacts in the eyes of other mammals caused by camera flash. In this paper we present a peteye removal algorithm for detecting and correcting peteye artifacts in digital images. Peteye removal for animals is significantly more difficult than redeye removal for humans, because peteyes can be any of a variety of colors, and human face detection cannot be used to localize the animal eyes. In many animals, including dogs and cats, the retina has a special reflective layer that can cause a variety of peteye colors, depending on the animal's breed, age, or fur color, etc. This makes the peteye correction more challenging. We have developed a semi-automatic algorithm for peteye removal that can detect peteyes based on the cursor position provided by the user and correct them by neutralizing the colors with glare reduction and glint retention.
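
    The sketch below illustrates only the correction step described above (detection is omitted): pixels inside a circle around a user-supplied cursor position are blended toward a darkened grey (glare reduction), while very bright pixels are left untouched to retain the glint. The radius, glint threshold, and blending strength are arbitrary placeholders, not the authors' values.

```python
# Peteye-style color neutralization with glint retention (illustrative sketch).
import numpy as np

def correct_peteye(image, cx, cy, radius=12, glint_level=0.9, strength=0.8):
    """image: float RGB array in [0, 1]; (cx, cy) is the user-supplied cursor position."""
    h, w, _ = image.shape
    yy, xx = np.ogrid[:h, :w]
    inside = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
    luminance = image.mean(axis=2, keepdims=True)
    glint = luminance[..., 0] > glint_level                      # keep specular glint pixels
    target = np.repeat(luminance, 3, axis=2) * (1 - strength * 0.5)  # darkened grey target
    mask = inside & ~glint
    out = image.copy()
    out[mask] = (1 - strength) * image[mask] + strength * target[mask]
    return out

demo = np.random.default_rng(8).random((64, 64, 3))
print(correct_peteye(demo, cx=32, cy=32).shape)
```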

  5. 32 CFR 1602.8 - Classifying authority.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 6 2014-07-01 2014-07-01 false Classifying authority. 1602.8 Section 1602.8....8 Classifying authority. The term classifying authority refers to any official or board who is authorized in § 1633.1 to classify a registrant....

  6. 10 CFR 25.35 - Classified visits.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 1 2011-01-01 2011-01-01 false Classified visits. 25.35 Section 25.35 Energy NUCLEAR REGULATORY COMMISSION ACCESS AUTHORIZATION Classified Visits § 25.35 Classified visits. (a) The number of classified visits must be held to a minimum. The licensee, certificate holder, applicant for a...

  7. 28 CFR 701.14 - Classified information.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Classified information. 701.14 Section... UNDER THE FREEDOM OF INFORMATION ACT § 701.14 Classified information. In processing a request for information that is classified or classifiable under Executive Order 12356 or any other Executive...

  8. 28 CFR 701.14 - Classified information.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Classified information. 701.14 Section... UNDER THE FREEDOM OF INFORMATION ACT § 701.14 Classified information. In processing a request for information that is classified or classifiable under Executive Order 12356 or any other Executive...

  9. 28 CFR 700.14 - Classified information.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Classified information. 700.14 Section... the Privacy Act of 1974 § 700.14 Classified information. In processing a request for access to a record containing information that is classified or classifiable under Executive Order 12356 or any...

  10. 32 CFR 1633.1 - Classifying authority.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Classifying authority. 1633.1 Section 1633.1... CLASSIFICATION § 1633.1 Classifying authority. The following officials are authorized to classify registrants... Service may in accord with the provisions of this chapter classify a registrant into any class for...

  11. 32 CFR 1633.1 - Classifying authority.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 6 2014-07-01 2014-07-01 false Classifying authority. 1633.1 Section 1633.1... CLASSIFICATION § 1633.1 Classifying authority. The following officials are authorized to classify registrants... Service may in accord with the provisions of this chapter classify a registrant into any class for...

  12. 32 CFR 1602.8 - Classifying authority.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Classifying authority. 1602.8 Section 1602.8....8 Classifying authority. The term classifying authority refers to any official or board who is authorized in § 1633.1 to classify a registrant....

  13. 32 CFR 1602.8 - Classifying authority.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 6 2012-07-01 2012-07-01 false Classifying authority. 1602.8 Section 1602.8....8 Classifying authority. The term classifying authority refers to any official or board who is authorized in § 1633.1 to classify a registrant....

  14. 14 CFR 1216.317 - Classified information.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 5 2012-01-01 2012-01-01 false Classified information. 1216.317 Section... Classified information. Environmental assessments and impact statements which contain classified information... organized so that the classified portions are appendices to the environmental document itself....

  15. 32 CFR 1633.1 - Classifying authority.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 6 2011-07-01 2011-07-01 false Classifying authority. 1633.1 Section 1633.1... CLASSIFICATION § 1633.1 Classifying authority. The following officials are authorized to classify registrants... Service may in accord with the provisions of this chapter classify a registrant into any class for...

  16. 32 CFR 1602.8 - Classifying authority.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 6 2011-07-01 2011-07-01 false Classifying authority. 1602.8 Section 1602.8....8 Classifying authority. The term classifying authority refers to any official or board who is authorized in § 1633.1 to classify a registrant....

  17. 28 CFR 700.14 - Classified information.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Classified information. 700.14 Section... the Privacy Act of 1974 § 700.14 Classified information. In processing a request for access to a record containing information that is classified or classifiable under Executive Order 12356 or any...

  18. 28 CFR 701.14 - Classified information.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Classified information. 701.14 Section... UNDER THE FREEDOM OF INFORMATION ACT § 701.14 Classified information. In processing a request for information that is classified or classifiable under Executive Order 12356 or any other Executive...

  19. 28 CFR 701.14 - Classified information.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Classified information. 701.14 Section... UNDER THE FREEDOM OF INFORMATION ACT § 701.14 Classified information. In processing a request for information that is classified or classifiable under Executive Order 12356 or any other Executive...

  20. 10 CFR 25.35 - Classified visits.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Classified visits. 25.35 Section 25.35 Energy NUCLEAR REGULATORY COMMISSION ACCESS AUTHORIZATION Classified Visits § 25.35 Classified visits. (a) The number of classified visits must be held to a minimum. The licensee, certificate holder, applicant for a...

  1. 28 CFR 700.14 - Classified information.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Classified information. 700.14 Section... the Privacy Act of 1974 § 700.14 Classified information. In processing a request for access to a record containing information that is classified or classifiable under Executive Order 12356 or any...

  2. 14 CFR 1216.317 - Classified information.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 5 2011-01-01 2010-01-01 true Classified information. 1216.317 Section... Classified information. Environmental assessments and impact statements which contain classified information... organized so that the classified portions are appendices to the environmental document itself....

  3. 28 CFR 701.14 - Classified information.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Classified information. 701.14 Section... UNDER THE FREEDOM OF INFORMATION ACT § 701.14 Classified information. In processing a request for information that is classified or classifiable under Executive Order 12356 or any other Executive...

  4. 32 CFR 651.13 - Classified actions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 4 2014-07-01 2013-07-01 true Classified actions. 651.13 Section 651.13... § 651.13 Classified actions. (a) For proposed actions and NEPA analyses involving classified information... proposed action. (c) When classified information can be reasonably separated from other information and...

  5. 32 CFR 1602.8 - Classifying authority.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 6 2013-07-01 2013-07-01 false Classifying authority. 1602.8 Section 1602.8....8 Classifying authority. The term classifying authority refers to any official or board who is authorized in § 1633.1 to classify a registrant....

  6. 28 CFR 700.14 - Classified information.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Classified information. 700.14 Section... the Privacy Act of 1974 § 700.14 Classified information. In processing a request for access to a record containing information that is classified or classifiable under Executive Order 12356 or any...

  7. 10 CFR 25.35 - Classified visits.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 1 2013-01-01 2013-01-01 false Classified visits. 25.35 Section 25.35 Energy NUCLEAR REGULATORY COMMISSION ACCESS AUTHORIZATION Classified Visits § 25.35 Classified visits. (a) The number of classified visits must be held to a minimum. The licensee, certificate holder, applicant for a...

  8. 28 CFR 700.14 - Classified information.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Classified information. 700.14 Section... the Privacy Act of 1974 § 700.14 Classified information. In processing a request for access to a record containing information that is classified or classifiable under Executive Order 12356 or any...

  9. 10 CFR 25.35 - Classified visits.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 1 2012-01-01 2012-01-01 false Classified visits. 25.35 Section 25.35 Energy NUCLEAR REGULATORY COMMISSION ACCESS AUTHORIZATION Classified Visits § 25.35 Classified visits. (a) The number of classified visits must be held to a minimum. The licensee, certificate holder, applicant for a...

  10. 32 CFR 1633.1 - Classifying authority.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 6 2013-07-01 2013-07-01 false Classifying authority. 1633.1 Section 1633.1... CLASSIFICATION § 1633.1 Classifying authority. The following officials are authorized to classify registrants... Service may in accord with the provisions of this chapter classify a registrant into any class for...

  11. 10 CFR 25.35 - Classified visits.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 1 2014-01-01 2014-01-01 false Classified visits. 25.35 Section 25.35 Energy NUCLEAR REGULATORY COMMISSION ACCESS AUTHORIZATION Classified Visits § 25.35 Classified visits. (a) The number of classified visits must be held to a minimum. The licensee, certificate holder, applicant for a...

  12. 14 CFR 1216.317 - Classified information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 5 2010-01-01 2010-01-01 false Classified information. 1216.317 Section... Classified information. Environmental assessments and impact statements which contain classified information... organized so that the classified portions are appendices to the environmental document itself....

  13. 32 CFR 1633.1 - Classifying authority.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 6 2012-07-01 2012-07-01 false Classifying authority. 1633.1 Section 1633.1... CLASSIFICATION § 1633.1 Classifying authority. The following officials are authorized to classify registrants... Service may in accord with the provisions of this chapter classify a registrant into any class for...

  14. Optimization of Raman-spectrum baseline correction in biological application.

    PubMed

    Guo, Shuxia; Bocklitz, Thomas; Popp, Jürgen

    2016-04-21

    In the last decade Raman spectroscopy has become an invaluable tool for biomedical diagnostics. However, a manual rating of the subtle spectral differences between normal and abnormal disease states is not possible or practical. Thus it is necessary to combine Raman spectroscopy with chemometrics in order to build statistical models predicting the disease states directly without manual intervention. Within chemometrical analysis a number of corrections have to be applied to obtain robust models. Baseline correction is an important pre-processing step, which should remove spectral contributions of fluorescence effects and improve the performance and robustness of statistical models. However, selecting an optimal baseline correction method and its parameters each time a new dataset is analysed is demanding, time-consuming, and dependent on expert knowledge. To circumvent this issue we proposed a genetic algorithm based method to automatically optimize the baseline correction. The investigation was carried out in three main steps. Firstly, a numerical quantitative marker was defined to evaluate the baseline estimation quality. Secondly, a genetic algorithm based methodology was established to search for the optimal baseline estimation, with the defined quantitative marker as the evaluation function. Finally, classification models were utilized to benchmark the performance of the optimized baseline. For comparison, model-based baseline optimization was carried out applying the same classifiers. It was proven that our method could provide a semi-optimal and stable baseline estimation without any chemical knowledge required or any additional spectral information used. PMID:26907832
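
    The sketch below shows the overall pattern described in the abstract on a synthetic Raman-like spectrum, but it is illustrative only: it uses iterative polynomial fitting as the baseline method and a crude stand-in quality marker, and it searches a single order parameter by brute force rather than with the paper's genetic algorithm and marker.

```python
# Baseline estimation with a quality marker used to pick the method parameter (sketch).
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 500)
peaks = np.exp(-((x - 0.3) ** 2) / 1e-3) + 0.6 * np.exp(-((x - 0.7) ** 2) / 5e-4)
baseline_true = 2.0 + 1.5 * x + 1.0 * x ** 2              # fluorescence-like background
spectrum = peaks + baseline_true + 0.01 * rng.standard_normal(x.size)

def estimate_baseline(y, order, n_iter=50):
    """Iterative polynomial baseline: repeatedly clip points above the current fit."""
    work = y.copy()
    for _ in range(n_iter):
        fit = np.polyval(np.polyfit(x, work, order), x)
        work = np.minimum(work, fit)
    return fit

def quality_marker(corrected):
    # crude stand-in for the paper's marker: leftover background area plus a
    # penalty on negative dips created by over-subtraction
    negativity = np.sum(np.clip(-corrected, 0.0, None))
    area = np.sum(np.clip(corrected, 0.0, None)) * (x[1] - x[0])
    return area + 10.0 * negativity

best_order = min(range(1, 7),
                 key=lambda o: quality_marker(spectrum - estimate_baseline(spectrum, o)))
print("selected polynomial order:", best_order)
```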

  15. Markov Network-Based Unified Classifier for Face Recognition.

    PubMed

    Hwang, Wonjun; Kim, Junmo

    2015-11-01

    In this paper, we propose a novel unifying framework using a Markov network to learn the relationships among multiple classifiers. In face recognition, we assume that we have several complementary classifiers available, and assign observation nodes to the features of a query image and hidden nodes to those of gallery images. Under the Markov assumption, we connect each hidden node to its corresponding observation node and the hidden nodes of neighboring classifiers. For each observation-hidden node pair, we collect the set of gallery candidates most similar to the observation instance, and capture the relationship between the hidden nodes in terms of a similarity matrix among the retrieved gallery images. Posterior probabilities in the hidden nodes are computed using the belief propagation algorithm, and we use marginal probability as the new similarity value of the classifier. The novelty of our proposed framework lies in the method that considers classifier dependence using the results of each neighboring classifier. We present the extensive evaluation results for two different protocols, known and unknown image variation tests, using four publicly available databases: 1) the Face Recognition Grand Challenge ver. 2.0; 2) XM2VTS; 3) BANCA; and 4) Multi-PIE. The result shows that our framework consistently yields improved recognition rates in various situations. PMID:26219095

  16. CORDIC algorithms for SVM FPGA implementation

    NASA Astrophysics Data System (ADS)

    Gimeno Sarciada, Jesús; Lamel Rivera, Horacio; Jiménez, Matías

    2010-04-01

    Support Vector Machines are currently among the best classification algorithms and are used in a wide range of applications. Their ability to extract a classification function from a limited number of learning examples while keeping the structural risk low has proven to be a clear alternative to other neural networks. However, the calculations involved in computing the kernel, and the repetition of this process for all support vectors in the classification problem, are intensive and demand considerable time or power. This can be a drawback in applications with limited resources or time, so simple algorithms circumventing the problem are needed. In this paper we analyze an FPGA implementation of an SVM that uses a CORDIC algorithm to simplify the calculation of a specific kernel, greatly reducing the time and hardware requirements needed for classification and allowing for powerful in-field portable applications. The algorithm and its calculation capabilities are described. The full SVM classifier using this algorithm is implemented in an FPGA and its in-field use is assessed for high-speed, low-power classification.
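
    For reference, the rotation-mode CORDIC iteration the abstract refers to can be sketched in a few lines of Python; an FPGA version would replace the floating-point arithmetic with fixed-point shifts and adds, but the update equations are the same.

```python
# Rotation-mode CORDIC computing (cos, sin) by shift-and-add style iterations.
import math

def cordic_cos_sin(theta, n_iter=32):
    """Compute (cos(theta), sin(theta)) for |theta| <= pi/2 via CORDIC micro-rotations."""
    # scaling constant K = prod 1/sqrt(1 + 2^-2i)
    K = 1.0
    for i in range(n_iter):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(n_iter):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
    return x * K, y * K

print(cordic_cos_sin(0.5))            # ~ (0.87758, 0.47943)
print(math.cos(0.5), math.sin(0.5))   # reference values
```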

  17. Application of bias correction methods to improve the accuracy of quantitative radar rainfall in Korea

    NASA Astrophysics Data System (ADS)

    Lee, J.-K.; Kim, J.-H.; Suk, M.-K.

    2015-11-01

    There are many potential sources of bias in the radar rainfall estimation process. This study classified the biases arising in that process into the reflectivity measurement bias and the rainfall estimation bias of the Quantitative Precipitation Estimation (QPE) model, and applied bias correction methods to improve the accuracy of the Radar-AWS Rainrate (RAR) calculation system operated by the Korea Meteorological Administration (KMA). To correct the reflectivity (Z) biases that occur when measuring rainfall, this study utilized a bias correction algorithm whose concept is that the reflectivity of the target single-polarization radars is corrected against a reference dual-polarization radar whose hardware and software biases have already been corrected. The study then dealt with two post-processing methods, the Mean Field Bias Correction (MFBC) method and the Local Gauge Correction (LGC) method, to correct the rainfall estimation bias of the QPE model. The Z bias and rainfall estimation bias correction methods were applied to the RAR system, and the accuracy of the RAR system improved after correcting the Z bias. Regarding rainfall types, although the accuracy for the Changma front and the local torrential cases was slightly improved, without the Z bias correction the accuracy for the typhoon cases in particular got worse than the existing results. As a result of the rainfall estimation bias correction, the Z bias_LGC was especially superior to the MFBC method because different rainfall biases are applied to each grid rainfall amount in the LGC method. Across rainfall types, the Z bias_LGC results showed that the rainfall estimates for all types were more accurate than with the Z bias correction alone, and the outcomes in the typhoon cases in particular were vastly superior to the others.
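
    As a minimal illustration of the mean field bias (MFB) idea mentioned above, the sketch below computes a single multiplicative factor, the ratio of gauge to radar rainfall accumulated over collocated pairs, and applies it to a radar field. The local gauge correction (LGC) variant would instead spread gauge/radar ratios spatially to each grid cell; the numbers here are made up.

```python
# Mean field bias correction of a radar rainfall field (illustrative numbers).
import numpy as np

gauge = np.array([4.2, 10.1, 2.5, 7.8])        # rain gauge accumulations (mm)
radar = np.array([3.1, 8.0, 2.0, 6.2])         # collocated radar estimates (mm)

mfb = gauge.sum() / radar.sum()                # single multiplicative bias factor
radar_field = np.array([[2.0, 3.5],
                        [5.1, 0.4]])           # radar rainfall grid (mm)
corrected_field = mfb * radar_field            # apply the same factor everywhere

print("bias factor:", round(mfb, 3))
print(corrected_field)
```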

  18. An algorithm for temperature correcting substrate moisture measurements: aligning substrate moisture responses with environmental drivers in polytunnel-grown strawberry plants

    NASA Astrophysics Data System (ADS)

    Goodchild, Martin; Janes, Stuart; Jenkins, Malcolm; Nicholl, Chris; Kühn, Karl

    2015-04-01

    The aim of this work is to assess the use of temperature-corrected substrate moisture data to improve the relationship between environmental drivers and the measurement of substrate moisture content in high-porosity, soil-free growing environments such as coir. Substrate moisture sensor data collected from strawberry plants grown in coir bags installed in a table-top system under a polytunnel illustrate the impact of temperature on capacitance-based moisture measurements. Substrate moisture measurements made in our coir arrangement exhibit the negative temperature coefficient of the permittivity of water, with diurnal changes in apparent moisture content opposing those of substrate temperature. The diurnal substrate temperature variation was seen to range from 7 °C to 25 °C, resulting in a clearly observable temperature effect in substrate moisture content measurements during the 23-day test period. In the laboratory we measured the ML3 soil moisture sensor (ThetaProbe) response to temperature in air, dry glass beads and water-saturated glass beads, and used a three-phase alpha (α) mixing model, also known as the Complex Refractive Index Model (CRIM), to derive the permittivity temperature coefficients for glass and water. We derived the α value and estimated the temperature coefficient for water for sensors operating at 100 MHz. Both results are in good agreement with published data. By applying the CRIM equation with the temperature coefficients of glass and water the moisture temperature coefficient of saturated glass beads has been reduced by more than an order of magnitude to a moisture temperature coefficient of
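
    The three-phase alpha mixing model used above can be written as eps_bulk^alpha = theta_w*eps_water^alpha + theta_air*eps_air^alpha + theta_solid*eps_solid^alpha. The sketch below implements this forward model and its inversion for volumetric water content at a known substrate temperature; the alpha value, the phase permittivities, and the linear temperature coefficient of water are illustrative placeholders, not the values derived in the study.

```python
# CRIM (alpha) mixing model: forward bulk permittivity and inversion for water content.
ALPHA = 0.5                       # common CRIM exponent (assumed here)
EPS_AIR = 1.0
EPS_SOLID = 5.0                   # solid-phase permittivity (placeholder value)

def eps_water(temp_c, eps20=80.1, k_per_degC=-0.37):
    """Water permittivity with a simple linear temperature dependence (assumed)."""
    return eps20 + k_per_degC * (temp_c - 20.0)

def bulk_permittivity(theta_w, porosity, temp_c):
    """Forward model: eps_bulk^alpha is the volume-weighted sum of eps_i^alpha."""
    theta_air = porosity - theta_w
    theta_solid = 1.0 - porosity
    mix = (theta_w * eps_water(temp_c) ** ALPHA
           + theta_air * EPS_AIR ** ALPHA
           + theta_solid * EPS_SOLID ** ALPHA)
    return mix ** (1.0 / ALPHA)

def water_content(eps_bulk, porosity, temp_c):
    """Invert CRIM for volumetric water content at a known substrate temperature."""
    theta_solid = 1.0 - porosity
    num = eps_bulk ** ALPHA - porosity * EPS_AIR ** ALPHA - theta_solid * EPS_SOLID ** ALPHA
    den = eps_water(temp_c) ** ALPHA - EPS_AIR ** ALPHA
    return num / den

eps = bulk_permittivity(0.35, porosity=0.85, temp_c=25.0)
print(water_content(eps, porosity=0.85, temp_c=25.0))   # recovers ~0.35
```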

  19. A probabilistic multi-class classifier for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Mechbal, Nazih; Uribe, Juan Sebastian; Rébillat, Marc

    2015-08-01

    In this paper, a probabilistic multi-class pattern recognition algorithm is developed for damage monitoring of smart structures. As these structures can face damages of different severities located in various positions, multi-class classifiers are needed. We propose an original support vector machine (SVM) multi-class clustering algorithm that is based on a probabilistic decision tree (PDT) that produces a posteriori probabilities associated with damage existence, location and severity. The PDT is built by iteratively subdividing the surface and thus takes into account the structure geometry. The proposed algorithm is very appealing as it combines both the computational efficiency of tree architectures and the SVMs classification accuracy. Damage sensitive features are computed using an active approach based on the permanent emission of non-resonant Lamb waves into the structure and on the recognition of amplitude disturbed diffraction patterns. The effectiveness of this algorithm is illustrated experimentally on a composite plate instrumented with piezoelectric elements.

  20. Classifier dependent feature preprocessing methods

    NASA Astrophysics Data System (ADS)

    Rodriguez, Benjamin M., II; Peterson, Gilbert L.

    2008-04-01

    In mobile applications, computational complexity is an issue that limits sophisticated algorithms from being implemented on these devices. This paper provides an initial solution to applying pattern recognition systems on mobile devices by combining existing preprocessing algorithms for recognition. In pattern recognition systems, it is essential to properly apply feature preprocessing tools prior to training classification models in an attempt to reduce computational complexity and improve the overall classification accuracy. The feature preprocessing tools extended for the mobile environment are feature ranking, feature extraction, data preparation and outlier removal. Most desktop systems today are capable of processing a majority of the available classification algorithms without concern for processing time, while the same is not true on mobile platforms. As an application of pattern recognition for mobile devices, the recognition system targets the problem of steganalysis, determining if an image contains hidden information. The measure of performance shows that feature preprocessing increases the overall steganalysis classification accuracy by an average of 22%. The methods in this paper are tested on a workstation and a Nokia 6620 (Symbian operating system) camera phone with similar results.

  1. Support vector machines classifiers of physical activities in preschoolers

    PubMed Central

    Zhao, Wei; Adolph, Anne L; Puyau, Maurice R; Vohra, Firoz A; Butte, Nancy F; Zakeri, Issa F

    2013-01-01

    The goal of this study is to develop, test, and compare multinomial logistic regression (MLR) and support vector machines (SVM) in classifying preschool-aged children's physical activity data acquired from an accelerometer. In this study, 69 children aged 3–5 years were asked to participate in a supervised protocol of physical activities while wearing a triaxial accelerometer. Accelerometer counts, steps, and position were obtained from the device. We applied K-means clustering to determine the number of natural groupings presented by the data. We used MLR and SVM to classify the six activity types. Using direct observation as the criterion method, the 10-fold cross-validation (CV) error rate was used to compare MLR and SVM classifiers, with and without sleep. Altogether, 58 classification models based on combinations of the accelerometer output variables were developed. In general, the SVM classifiers have a smaller 10-fold CV error rate than their MLR counterparts. Including sleep, an SVM classifier provided the best performance with a 10-fold CV error rate of 24.70%. Without sleep, an SVM classifier based on triaxial accelerometer counts, vector magnitude, steps, position, and 1- and 2-min lag and lead values achieved a 10-fold CV error rate of 20.16% and an overall classification error rate of 15.56%. SVM supersedes the classical MLR classifier in categorizing physical activities in preschool-aged children. Using accelerometer data, SVM can be used to correctly classify physical activities typical of preschool-aged children with an acceptable classification error rate. PMID:24303099
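
    The kind of 10-fold cross-validated comparison described above can be sketched with scikit-learn as follows; the synthetic six-class data stand in for the accelerometer feature combinations, so the numbers it prints are not comparable to the study's error rates.

```python
# 10-fold cross-validated comparison of multinomial logistic regression and an SVM (sketch).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# synthetic stand-in for six activity classes described by accelerometer-derived features
X, y = make_classification(n_samples=500, n_classes=6, n_informative=8,
                           n_features=12, random_state=0)

models = {
    "MLR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=2000)),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=10).mean()
    print(f"{name}: 10-fold CV error rate = {1 - acc:.3f}")
```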

  2. Parallel processing implementations of a contextual classifier for multispectral remote sensing data

    NASA Technical Reports Server (NTRS)

    Siegel, H. J.; Swain, P. H.; Smith, B. W.

    1980-01-01

    Contextual classifiers are being developed as a method to exploit the spatial/spectral context of a pixel to achieve accurate classification. Classification algorithms such as the contextual classifier typically require large amounts of computation time. One way to reduce the execution time of these tasks is through the use of parallelism. The applicability of the CDC flexible processor system and of a proposed multimicroprocessor system (PASM) for implementing contextual classifiers is examined.

  3. Comparing Different Classifiers in Sensory Motor Brain Computer Interfaces

    PubMed Central

    Bashashati, Hossein; Ward, Rabab K.; Birch, Gary E.; Bashashati, Ali

    2015-01-01

    A problem that impedes the progress in Brain-Computer Interface (BCI) research is the difficulty in reproducing the results of different papers. Comparing different algorithms at present is very difficult. Some improvements have been made by the use of standard datasets to evaluate different algorithms. However, the lack of a comparison framework still exists. In this paper, we construct a new general comparison framework to compare different algorithms on several standard datasets. All these datasets correspond to sensory motor BCIs, and are obtained from 21 subjects during their operation of synchronous BCIs and 8 subjects using self-paced BCIs. Other researchers can use our framework to compare their own algorithms on their own datasets. We have compared the performance of different popular classification algorithms over these 29 subjects and performed statistical tests to validate our results. Our findings suggest that, for a given subject, the choice of the classifier for a BCI system depends on the feature extraction method used in that BCI system. This is contrary to most publications in the field, which have used Linear Discriminant Analysis (LDA) as the classifier of choice for BCI systems. PMID:26090799

  4. Fibonacci Numbers and Computer Algorithms.

    ERIC Educational Resources Information Center

    Atkins, John; Geist, Robert

    1987-01-01

    The Fibonacci Sequence describes a vast array of phenomena from nature. Computer scientists have discovered and used many algorithms which can be classified as applications of Fibonacci's sequence. In this article, several of these applications are considered. (PK)

  5. Local feature saliency classifier for real-time intrusion monitoring

    NASA Astrophysics Data System (ADS)

    Buch, Norbert; Velastin, Sergio A.

    2014-07-01

    We propose a texture saliency classifier to detect people in a video frame by identifying salient texture regions. The image is classified into foreground and background in real time. No temporal image information is used during the classification. The system is used for the task of detecting people entering a sterile zone, which is a common scenario for visual surveillance. Testing is performed on the Imagery Library for Intelligent Detection Systems sterile zone benchmark dataset of the United Kingdom's Home Office. The basic classifier is extended by fusing its output with simple motion information, which significantly outperforms standard motion tracking. A lower detection time can be achieved by combining texture classification with Kalman filtering. The fusion approach running at 10 fps gives the highest result of F1=0.92 for the 24-h test dataset. The paper concludes with a detailed analysis of the computation time required for the different parts of the algorithm.

  6. Autonomous, rapid classifiers for hyperspectral imagers

    NASA Astrophysics Data System (ADS)

    Gilmore, M. S.; Bornstein, B.; Castano, R.; Greenwood, J.

    2006-05-01

    Hyperspectral systems collect huge volumes of multidimensional data that require time consuming, expert analysis. The data analysis costs of global datasets restrict rapid classification to only a subset of an entire mission dataset, reducing mission science return. Data downlink restrictions from planetary missions also highlight the need for robust mineral detection algorithms. For example, both OMEGA and CRISM will map only approximately 5% of the Mars surface at full spatial and spectral resolution. While some targets are preselected for full resolution study, other high priority targets on Mars will be selected in response to observations made by the instruments in a multispectral survey mode. The challenge is to create mineral detection algorithms that can be utilized to analyze any and all image cubes (x, y, λ) for a selected system to help ensure that priority targets are not overlooked in these datasets. This goal is critical both for onboard, real time processing to direct target acquisition and for the mining of returned data. While an ultimate goal would be to accurately classify the composition of every pixel on a planet's surface, this is made difficult by the fact that most pixels are complex mixtures of n materials, which may or may not be represented in library (training) data. We instead focus on the identification of specific important mineral compositions within pixels in the data. For Mars, high priority targets include minerals associated with the presence of water. We have developed highly accurate artificial neural network (ANN) and Support Vector Machine (SVM) based detectors capable of identifying calcite (CaCO3) and jarosite (KFe3(SO4)2(OH)6) in the visible/NIR (350 to 2500 nm) spectra of both laboratory specimens and rocks in Mars analogue field environments. The detectors are trained using a generative model to create 1000s of linear mixtures of library end-member spectra in geologically realistic percentages. Here we will discuss
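
    The training-set generation strategy described above, random linear mixtures of library end-member spectra labelled by the abundance of the mineral of interest, can be sketched as follows. The "spectra" are random placeholders rather than real library data, and the abundance threshold and SVM settings are arbitrary.

```python
# Training a mineral detector on synthetic linear mixtures of end-member spectra (sketch).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n_bands = 200
library = rng.random((5, n_bands))        # 5 hypothetical end-member spectra
target_idx = 0                            # index of the mineral we want to detect

def make_mixtures(n):
    abundances = rng.dirichlet(np.ones(library.shape[0]), size=n)    # fractions sum to 1
    spectra = abundances @ library + 0.01 * rng.standard_normal((n, n_bands))
    labels = (abundances[:, target_idx] > 0.2).astype(int)           # detection threshold
    return spectra, labels

X, y = make_mixtures(2000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
detector = SVC(kernel="rbf").fit(X_tr, y_tr)
print("held-out detection accuracy:", detector.score(X_te, y_te))
```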

  7. Classifying sex biased congenital anomalies

    SciTech Connect

    Lubinsky, M.S.

    1997-03-31

    The reasons for sex biases in congenital anomalies that arise before structural or hormonal dimorphisms are established have long been unclear. A review of such disorders shows that patterning and tissue anomalies are female biased, and structural findings are more common in males. This suggests different gender dependent susceptibilities to developmental disturbances, with female vulnerabilities focused on early blastogenesis/determination, while males are more likely to involve later organogenesis/morphogenesis. A dual origin for some anomalies explains paradoxical reductions of sex biases with greater severity (i.e., multiple rather than single malformations), presumably as more severe events increase the involvement of an otherwise minor process with opposite biases to those of the primary mechanism. The cause for these sex differences is unknown, but early dimorphisms, such as differences in growth or presence of H-Y antigen, may be responsible. This model provides a useful rationale for understanding and classifying sex-biased congenital anomalies. 42 refs., 7 tabs.

  8. A Hybrid Generative/Discriminative Classifier Design for Semi-supervised Learning.

    NASA Astrophysics Data System (ADS)

    Fujino, Akinori; Ueda, Naonori; Saito, Kazumi

    Semi-supervised classifier design that simultaneously utilizes both a small number of labeled samples and a large number of unlabeled samples is a major research issue in machine learning. Existing semi-supervised learning methods for probabilistic classifiers belong to either generative or discriminative approaches. This paper focuses on semi-supervised probabilistic classifier design for multiclass, single-labeled classification problems and first presents a hybrid approach to take advantage of both the generative and discriminative approaches. Our formulation considers a generative model trained on labeled samples and a newly introduced bias correction model, which belongs to the same model family as the generative model but whose parameters differ from those of the generative model. A hybrid classifier is constructed by combining both the generative and bias correction models based on the maximum entropy principle, where the combination weights of these models are determined so that the class labels of labeled samples are predicted as correctly as possible. We apply the hybrid approach to text classification problems by employing naive Bayes as the generative and bias correction models. In our experimental results on three English and one Japanese text data sets, we confirmed that the hybrid classifier significantly outperformed conventional probabilistic generative and discriminative classifiers when the classification performance of the generative classifier was comparable to that of the discriminative classifier.
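
    A heavily simplified sketch of the combination step follows: a generative model and a second "bias correction" model of the same family are combined by a weighted log-linear rule, with the weight chosen so that the labeled samples are classified as correctly as possible. It is not the paper's maximum-entropy formulation and it ignores the role of unlabeled data; the data and the split used to fit the two models are placeholders.

```python
# Weighted combination of a generative model and a bias-correction model (simplified sketch).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=400, n_classes=3, n_informative=6, random_state=0)
X_gen, X_bias, y_gen, y_bias = train_test_split(X, y, test_size=0.5, random_state=0)

generative = GaussianNB().fit(X_gen, y_gen)       # generative model
bias_model = GaussianNB().fit(X_bias, y_bias)     # bias-correction model, same family

def combined_predict(Xq, lam):
    """Log-linear combination of the two models' class posteriors."""
    logp = (lam * generative.predict_log_proba(Xq)
            + (1 - lam) * bias_model.predict_log_proba(Xq))
    return logp.argmax(axis=1)

# choose the weight that best reproduces the labels of the labeled samples
lams = np.linspace(0, 1, 21)
best_lam = max(lams, key=lambda l: np.mean(combined_predict(X, l) == y))
print("chosen weight:", best_lam,
      "labeled accuracy:", np.mean(combined_predict(X, best_lam) == y))
```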

  9. Multiple Classifier System for Remote Sensing Image Classification: A Review

    PubMed Central

    Du, Peijun; Xia, Junshi; Zhang, Wei; Tan, Kun; Liu, Yi; Liu, Sicong

    2012-01-01

    Over the last two decades, the multiple classifier system (MCS), or classifier ensemble, has shown great potential to improve the accuracy and reliability of remote sensing image classification. Although there is a substantial literature covering MCS approaches, a comprehensive review presenting the overall architecture of the basic principles and trends behind the design of remote sensing classifier ensembles has been lacking. Therefore, in order to give a reference point for MCS approaches, this paper attempts to explicitly review the remote sensing implementations of MCS and proposes some modified approaches. The effectiveness of the existing and improved algorithms is analyzed and evaluated with multi-source remotely sensed images, including a high spatial resolution image (QuickBird), a hyperspectral image (OMISII) and a multi-spectral image (Landsat ETM+). Experimental results demonstrate that MCS can effectively improve the accuracy and stability of remote sensing image classification, and that diversity measures play an active role in the combination of multiple classifiers. Furthermore, this survey provides a roadmap to guide future research and algorithm enhancement, and to facilitate knowledge accumulation on MCS in the remote sensing community. PMID:22666057

  10. Classifying adolescent attention-deficit/hyperactivity disorder (ADHD) based on functional and structural imaging.

    PubMed

    Iannaccone, Reto; Hauser, Tobias U; Ball, Juliane; Brandeis, Daniel; Walitza, Susanne; Brem, Silvia

    2015-10-01

    Attention-deficit/hyperactivity disorder (ADHD) is a common disabling psychiatric disorder associated with consistent deficits in error processing, inhibition and regionally decreased grey matter volumes. The diagnosis is based on clinical presentation, interviews and questionnaires, which are to some degree subjective and would benefit from verification through biomarkers. Here, pattern recognition of multiple discriminative functional and structural brain patterns was applied to classify adolescents with ADHD and controls. Functional activation features in a Flanker/NoGo task probing error processing and inhibition along with structural magnetic resonance imaging data served to predict group membership using support vector machines (SVMs). The SVM pattern recognition algorithm correctly classified 77.78% of the subjects with a sensitivity and specificity of 77.78% based on error processing. Predictive regions for controls were mainly detected in core areas for error processing and attention such as the medial and dorsolateral frontal areas reflecting deficient processing in ADHD (Hart et al., in Hum Brain Mapp 35:3083-3094, 2014), and overlapped with decreased activations in patients in conventional group comparisons. Regions more predictive for ADHD patients were identified in the posterior cingulate, temporal and occipital cortex. Interestingly despite pronounced univariate group differences in inhibition-related activation and grey matter volumes the corresponding classifiers failed or only yielded a poor discrimination. The present study corroborates the potential of task-related brain activation for classification shown in previous studies. It remains to be clarified whether error processing, which performed best here, also contributes to the discrimination of useful dimensions and subtypes, different psychiatric disorders, and prediction of treatment success across studies and sites. PMID:25613588

  11. An algorithm to discover gene signatures with predictive potential

    PubMed Central

    2010-01-01

    Background The advent of global gene expression profiling has generated unprecedented insight into our molecular understanding of cancer, including breast cancer. For example, human breast cancer patients display significant diversity in terms of their survival, recurrence, metastasis as well as response to treatment. These patient outcomes can be predicted by the transcriptional programs of their individual breast tumors. Predictive gene signatures allow us to correctly classify human breast tumors into various risk groups as well as to more accurately target therapy to ensure more durable cancer treatment. Results Here we present a novel algorithm to generate gene signatures with predictive potential. The method first classifies the expression intensity for each gene, as determined by global gene expression profiling, as low, average or high. The matrix containing the classified data for each gene is then used to score the expression of each gene based on its individual ability to predict the patient characteristic of interest. Finally, all examined genes are ranked based on their predictive ability and the most highly ranked genes are included in the master gene signature, which is then ready for use as a predictor. This method was used to accurately predict the survival outcomes in a cohort of human breast cancer patients. Conclusions We confirmed the capacity of our algorithm to generate gene signatures with bona fide predictive ability. The simplicity of our algorithm will enable biological researchers to quickly generate valuable gene signatures without specialized software or extensive bioinformatics training. PMID:20813028
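
    The scheme described above can be sketched as follows: expression values are discretized per gene into low/average/high, each gene is scored by how well its discretized level alone predicts the outcome, and the top-ranked genes form the signature. The tercile thresholds and the majority-vote scoring rule here are illustrative choices, not necessarily the paper's.

```python
# Gene signature construction by per-gene discretization, scoring, and ranking (sketch).
import numpy as np

rng = np.random.default_rng(5)
n_patients, n_genes = 120, 1000
expression = rng.standard_normal((n_patients, n_genes))   # placeholder expression matrix
outcome = rng.integers(0, 2, size=n_patients)             # e.g. survival group labels

def discretize(column):
    """Map expression values to 0 = low, 1 = average, 2 = high using tercile cut-points."""
    lo, hi = np.quantile(column, [0.33, 0.67])
    return np.digitize(column, [lo, hi])

def gene_score(column):
    """Fraction of patients correctly called by predicting the majority outcome per level."""
    levels = discretize(column)
    correct = 0
    for level in (0, 1, 2):
        mask = levels == level
        if mask.any():
            majority = np.bincount(outcome[mask]).argmax()
            correct += np.sum(outcome[mask] == majority)
    return correct / n_patients

scores = np.array([gene_score(expression[:, g]) for g in range(n_genes)])
signature = np.argsort(scores)[::-1][:20]                  # top-ranked genes form the signature
print("signature gene indices:", signature)
```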

  12. Evolution of a computer program for classifying protein segments as transmembrane domains using genetic programming.

    PubMed

    Koza, J R

    1994-01-01

    The recently-developed genetic programming paradigm is used to evolve a computer program to classify a given protein segment as being a transmembrane domain or non-transmembrane area of the protein. Genetic programming starts with a primordial ooze of randomly generated computer programs composed of available programmatic ingredients and then genetically breeds the population of programs using the Darwinian principle of survival of the fittest and an analog of the naturally occurring genetic operation of crossover (sexual recombination). Automatic function definition enables genetic programming to create subroutines dynamically during the run. Genetic programming is given a training set of differently-sized protein segments and their correct classification (but no biochemical knowledge, such as hydrophobicity values). Correlation is used as the fitness measure to drive the evolutionary process. The best genetically-evolved program achieves an out-of-sample correlation of 0.968 and an out-of-sample error rate of 1.6%. This error rate is better than that reported for four other algorithms presented at the First International Conference on Intelligent Systems for Molecular Biology. Our genetically evolved program is an instance of an algorithm discovered by an automated learning paradigm that is superior to that written by human investigators. PMID:7584397

  13. A mixture model with random-effects components for classifying sibling pairs.

    PubMed

    Martella, F; Vermunt, J K; Beekman, M; Westendorp, R G J; Slagboom, P E; Houwing-Duistermaat, J J

    2011-11-30

    In healthy aging research, multiple health outcomes are typically measured to represent health status. The aim of this paper was to develop a model-based clustering approach to identify homogeneous sibling pairs according to their health status. Model-based clustering approaches are considered on the basis of a linear mixed-effects model for the mixture components. Class memberships of siblings within pairs are allowed to be correlated, and within a class the correlation between siblings is modeled using random sibling-pair effects. We propose an expectation-maximization algorithm for maximum likelihood estimation. Model performance is evaluated via simulations in terms of estimating the correct parameters, degree of agreement, and the ability to detect the correct number of clusters. The performance of our model is compared with the performance of standard model-based clustering approaches. The methods are used to classify sibling pairs from the Leiden Longevity Study according to their health status. Our results suggest that homogeneous healthy sibling pairs are associated with a longer life span. Software is available for fitting the new models. PMID:21905068

  14. Monocular precrash vehicle detection: features and classifiers.

    PubMed

    Sun, Zehang; Bebis, George; Miller, Ronald

    2006-07-01

    Robust and reliable vehicle detection from images acquired by a moving vehicle (i.e., on-road vehicle detection) is an important problem with applications to driver assistance systems and autonomous, self-guided vehicles. The focus of this work is on the issues of feature extraction and classification for rear-view vehicle detection. Specifically, by treating the problem of vehicle detection as a two-class classification problem, we have investigated several different feature extraction methods such as principal component analysis, wavelets, and Gabor filters. To evaluate the extracted features, we have experimented with two popular classifiers, neural networks and support vector machines (SVMs). Based on our evaluation results, we have developed an on-board real-time monocular vehicle detection system that is capable of acquiring grey-scale images, using Ford's proprietary low-light camera, achieving an average detection rate of 10 Hz. Our vehicle detection algorithm consists of two main steps: a multiscale driven hypothesis generation step and an appearance-based hypothesis verification step. During the hypothesis generation step, image locations where vehicles might be present are extracted. This step uses multiscale techniques not only to speed up detection, but also to improve system robustness. The appearance-based hypothesis verification step verifies the hypotheses using Gabor features and SVMs. The system has been tested in Ford's concept vehicle under different traffic conditions (e.g., structured highway, complex urban streets, and varying weather conditions), illustrating good performance. PMID:16830921

  15. Neural network classifier of attacks in IP telephony

    NASA Astrophysics Data System (ADS)

    Safarik, Jakub; Voznak, Miroslav; Mehic, Miralem; Partila, Pavol; Mikulec, Martin

    2014-05-01

    Various types of monitoring mechanisms allow us to detect and monitor the behavior of attackers in VoIP networks. Analysis of detected malicious traffic is crucial for further investigation and for hardening the network. This analysis is typically based on statistical methods, and the article brings a solution based on a neural network. The proposed algorithm is used as a classifier of attacks in a distributed monitoring network of independent honeypot probes. Information about attacks on these honeypots is collected on a centralized server and then classified. This classification is based on different mechanisms, one of which is a multilayer perceptron neural network. The article describes the inner structure of the neural network used and gives information about its implementation. The learning set for this neural network is based on real attack data collected from the IP telephony honeypot Dionaea, prepared after collecting, cleaning and aggregating this information. After proper learning, the neural network is capable of classifying 6 of the most commonly used types of VoIP attacks. Using a neural network classifier brings more accurate attack classification in a distributed system of honeypots. With this approach it is possible to detect malicious behavior in different parts of networks that are logically or geographically divided, and to use the information from one network to harden security in other networks. The centralized server for the distributed set of nodes serves not only as a collector and classifier of attack data, but also as a mechanism for generating precautionary steps against attacks.
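
    The centralized classification step can be sketched with a multilayer perceptron as below; the feature encoding, network size, and six synthetic attack classes are placeholders rather than the Dionaea-derived learning set used in the paper.

```python
# Multilayer perceptron classifier assigning attack classes to feature vectors (sketch).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# synthetic stand-in for feature vectors extracted from honeypot attack records
X, y = make_classification(n_samples=1200, n_classes=6, n_informative=10,
                           n_features=16, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000,
                                  random_state=0))
clf.fit(X_tr, y_tr)
print("attack-class accuracy on held-out records:", clf.score(X_te, y_te))
```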

  16. Cellular phone enabled non-invasive tissue classifier.

    PubMed

    Laufer, Shlomi; Rubinsky, Boris

    2009-01-01

    Cellular phone technology is emerging as an important tool in the effort to provide advanced medical care to the majority of the world population currently without access to such care. In this study, we show that non-invasive electrical measurements and the use of classifier software can be combined with cellular phone technology to produce inexpensive tissue characterization. This concept was demonstrated by the use of a Support Vector Machine (SVM) classifier to distinguish through the cellular phone between heart and kidney tissue via the non-invasive multi-frequency electrical measurements acquired around the tissues. After the measurements were performed at a remote site, the raw data were transmitted through the cellular phone to a central computational site and the classifier was applied to the raw data. The results of the tissue analysis were returned to the remote data measurement site. The classifiers correctly determined the tissue type with a specificity of over 90%. When used for the detection of malignant tumors, classifiers can be designed to produce false positives in order to ensure that no tumors will be missed. This mode of operation has applications in remote non-invasive tissue diagnostics in situ in the body, in combination with medical imaging, as well as in remote diagnostics of biopsy samples in vitro. PMID:19365554

  17. Hybrid k-Nearest Neighbor Classifier.

    PubMed

    Yu, Zhiwen; Chen, Hantao; Liuxs, Jiming; You, Jane; Leung, Hareton; Han, Guoqiang

    2016-06-01

    Conventional k-nearest neighbor (KNN) classification approaches have several limitations when dealing with problems caused by special datasets, such as the sparse problem, the imbalance problem, and the noise problem. In this paper, we first perform a brief survey of recent progress in KNN classification approaches. Then, the hybrid KNN (HBKNN) classification approach, which takes into account the local and global information of the query sample, is designed to address the problems raised by these special datasets. Next, the random subspace ensemble framework based on HBKNN (RS-HBKNN) is proposed to perform classification on datasets with noisy attributes in high-dimensional space. Finally, nonparametric tests are adopted to compare the proposed method with other classification approaches over multiple datasets. The experiments on real-world datasets from the Knowledge Extraction based on Evolutionary Learning dataset repository demonstrate that RS-HBKNN works well on real datasets, and outperforms most of the state-of-the-art classification approaches. PMID:26126291
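
    In the spirit of RS-HBKNN, the sketch below builds a random-subspace ensemble in which each member is a plain KNN trained on a random subset of the features and the ensemble predicts by majority vote; the hybrid local/global distance of HBKNN itself is not reproduced.

```python
# Random-subspace KNN ensemble with majority voting (illustrative sketch).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(6)
X, y = make_classification(n_samples=800, n_features=40, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

members = []
for _ in range(15):
    feats = rng.choice(X.shape[1], size=15, replace=False)    # random feature subspace
    knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr[:, feats], y_tr)
    members.append((feats, knn))

votes = np.array([knn.predict(X_te[:, feats]) for feats, knn in members])
y_hat = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)  # majority vote
print("subspace-ensemble accuracy:", np.mean(y_hat == y_te))
```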

  18. 28 CFR 61.8 - Classified proposals.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Classified proposals. 61.8 Section 61.8... ENVIRONMENTAL POLICY ACT Implementing Procedures § 61.8 Classified proposals. If an environmental document includes classified matter, a version containing only unclassified material shall be prepared unless...

  19. 15 CFR 4.8 - Classified Information.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 15 Commerce and Foreign Trade 1 2012-01-01 2012-01-01 false Classified Information. 4.8 Section 4... INFORMATION Freedom of Information Act § 4.8 Classified Information. In processing a request for information classified under Executive Order 12958 or any other executive order concerning the classification of...

  20. 6 CFR 5.7 - Classified information.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Freedom of Information Act § 5.7 Classified information. In processing a request for information that is classified under Executive Order 12958 (3 CFR, 1996 Comp., p. 333) or any other executive order, the... 6 Domestic Security 1 2011-01-01 2011-01-01 false Classified information. 5.7 Section 5.7...

  1. 15 CFR 4.8 - Classified Information.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 15 Commerce and Foreign Trade 1 2013-01-01 2013-01-01 false Classified Information. 4.8 Section 4... INFORMATION Freedom of Information Act § 4.8 Classified Information. In processing a request for information classified under Executive Order 12958 or any other executive order concerning the classification of...

  2. 14 CFR 1216.310 - Classified actions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 5 2013-01-01 2013-01-01 false Classified actions. 1216.310 Section 1216... 1216.3 Procedures for Implementing the National Environmental Policy Act (NEPA) § 1216.310 Classified... environmental impacts of a proposed action. (b) When classified information can reasonably be separated...

  3. 6 CFR 5.24 - Classified information.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 6 Domestic Security 1 2011-01-01 2011-01-01 false Classified information. 5.24 Section 5.24... INFORMATION Privacy Act § 5.24 Classified information. In processing a request for access to a record containing information that is classified under Executive Order 12958 or any other executive order,...

  4. 32 CFR 775.5 - Classified actions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 5 2012-07-01 2012-07-01 false Classified actions. 775.5 Section 775.5 National... IMPLEMENTING THE NATIONAL ENVIRONMENTAL POLICY ACT § 775.5 Classified actions. (a) The fact that a proposed action is of a classified nature does not relieve the proponent of the action from complying with...

  5. 28 CFR 61.8 - Classified proposals.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Classified proposals. 61.8 Section 61.8... ENVIRONMENTAL POLICY ACT Implementing Procedures § 61.8 Classified proposals. If an environmental document includes classified matter, a version containing only unclassified material shall be prepared unless...

  6. 6 CFR 5.7 - Classified information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Freedom of Information Act § 5.7 Classified information. In processing a request for information that is classified under Executive Order 12958 (3 CFR, 1996 Comp., p. 333) or any other executive order, the... 6 Domestic Security 1 2014-01-01 2014-01-01 false Classified information. 5.7 Section 5.7...

  7. 12 CFR 1301.9 - Classified information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 10 2014-01-01 2014-01-01 false Classified information. 1301.9 Section 1301.9 Banks and Banking FINANCIAL STABILITY OVERSIGHT COUNCIL FREEDOM OF INFORMATION § 1301.9 Classified information. (a) Referrals of requests for classified information. Whenever a request is made for a...

  8. 6 CFR 5.24 - Classified information.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 6 Domestic Security 1 2012-01-01 2012-01-01 false Classified information. 5.24 Section 5.24... INFORMATION Privacy Act § 5.24 Classified information. In processing a request for access to a record containing information that is classified under Executive Order 12958 or any other executive order,...

  9. 28 CFR 16.44 - Classified information.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 28 Judicial Administration 1 2011-07-01 2011-07-01 false Classified information. 16.44 Section 16... Protection of Privacy and Access to Individual Records Under the Privacy Act of 1974 § 16.44 Classified information. In processing a request for access to a record containing information that is classified...

  10. 32 CFR 775.5 - Classified actions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 5 2013-07-01 2013-07-01 false Classified actions. 775.5 Section 775.5 National... IMPLEMENTING THE NATIONAL ENVIRONMENTAL POLICY ACT § 775.5 Classified actions. (a) The fact that a proposed action is of a classified nature does not relieve the proponent of the action from complying with...

  11. 32 CFR 148.2 - Classified programs.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 1 2014-07-01 2014-07-01 false Classified programs. 148.2 Section 148.2... Inspections of Facilities § 148.2 Classified programs. Once a facility is authorized, approved, certified, or accredited, all U.S. Government organizations desiring to conduct classified programs at the facility at...

  12. 12 CFR 1070.19 - Classified information.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 8 2013-01-01 2013-01-01 false Classified information. 1070.19 Section 1070.19... of Information Act § 1070.19 Classified information. Whenever a request is made for a record containing information that another agency has classified, or which may be appropriate for classification...

  13. 6 CFR 5.24 - Classified information.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 6 Domestic Security 1 2013-01-01 2013-01-01 false Classified information. 5.24 Section 5.24... INFORMATION Privacy Act § 5.24 Classified information. In processing a request for access to a record containing information that is classified under Executive Order 12958 or any other executive order,...

  14. 6 CFR 5.7 - Classified information.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Freedom of Information Act § 5.7 Classified information. In processing a request for information that is classified under Executive Order 12958 (3 CFR, 1996 Comp., p. 333) or any other executive order, the... 6 Domestic Security 1 2013-01-01 2013-01-01 false Classified information. 5.7 Section 5.7...

  15. 32 CFR 775.5 - Classified actions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 5 2014-07-01 2014-07-01 false Classified actions. 775.5 Section 775.5 National... IMPLEMENTING THE NATIONAL ENVIRONMENTAL POLICY ACT § 775.5 Classified actions. (a) The fact that a proposed action is of a classified nature does not relieve the proponent of the action from complying with...

  16. 28 CFR 16.44 - Classified information.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 1 2013-07-01 2013-07-01 false Classified information. 16.44 Section 16... Protection of Privacy and Access to Individual Records Under the Privacy Act of 1974 § 16.44 Classified information. In processing a request for access to a record containing information that is classified...

  17. 28 CFR 61.8 - Classified proposals.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Classified proposals. 61.8 Section 61.8... ENVIRONMENTAL POLICY ACT Implementing Procedures § 61.8 Classified proposals. If an environmental document includes classified matter, a version containing only unclassified material shall be prepared unless...

  18. 15 CFR 4.8 - Classified Information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 15 Commerce and Foreign Trade 1 2014-01-01 2014-01-01 false Classified Information. 4.8 Section 4... INFORMATION Freedom of Information Act § 4.8 Classified Information. In processing a request for information classified under Executive Order 12958 or any other executive order concerning the classification of...

  19. 28 CFR 16.44 - Classified information.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 28 Judicial Administration 1 2014-07-01 2014-07-01 false Classified information. 16.44 Section 16... Protection of Privacy and Access to Individual Records Under the Privacy Act of 1974 § 16.44 Classified information. In processing a request for access to a record containing information that is classified...

  20. 28 CFR 61.8 - Classified proposals.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Classified proposals. 61.8 Section 61.8... ENVIRONMENTAL POLICY ACT Implementing Procedures § 61.8 Classified proposals. If an environmental document includes classified matter, a version containing only unclassified material shall be prepared unless...

  1. 28 CFR 16.7 - Classified information.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Procedures for Disclosure of Records Under the Freedom of Information Act § 16.7 Classified information. In processing a request for information that is classified under Executive Order 12958 (3 CFR, 1996 Comp., p... 28 Judicial Administration 1 2010-07-01 2010-07-01 false Classified information. 16.7 Section...

  2. 32 CFR 148.2 - Classified programs.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 1 2010-07-01 2010-07-01 false Classified programs. 148.2 Section 148.2... Inspections of Facilities § 148.2 Classified programs. Once a facility is authorized, approved, certified, or accredited, all U.S. Government organizations desiring to conduct classified programs at the facility at...

  3. 15 CFR 4.8 - Classified Information.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 15 Commerce and Foreign Trade 1 2011-01-01 2011-01-01 false Classified Information. 4.8 Section 4... INFORMATION Freedom of Information Act § 4.8 Classified Information. In processing a request for information classified under Executive Order 12958 or any other executive order concerning the classification of...

  4. 6 CFR 5.7 - Classified information.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Freedom of Information Act § 5.7 Classified information. In processing a request for information that is classified under Executive Order 12958 (3 CFR, 1996 Comp., p. 333) or any other executive order, the... 6 Domestic Security 1 2012-01-01 2012-01-01 false Classified information. 5.7 Section 5.7...

  5. 6 CFR 5.24 - Classified information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 6 Domestic Security 1 2010-01-01 2010-01-01 false Classified information. 5.24 Section 5.24... INFORMATION Privacy Act § 5.24 Classified information. In processing a request for access to a record containing information that is classified under Executive Order 12958 or any other executive order,...

  6. 32 CFR 148.2 - Classified programs.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 1 2011-07-01 2011-07-01 false Classified programs. 148.2 Section 148.2... Inspections of Facilities § 148.2 Classified programs. Once a facility is authorized, approved, certified, or accredited, all U.S. Government organizations desiring to conduct classified programs at the facility at...

  7. 12 CFR 1070.19 - Classified information.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 12 Banks and Banking 8 2012-01-01 2012-01-01 false Classified information. 1070.19 Section 1070.19... of Information Act § 1070.19 Classified information. Whenever a request is made for a record containing information that another agency has classified, or which may be appropriate for classification...

  8. 28 CFR 16.44 - Classified information.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 1 2010-07-01 2010-07-01 false Classified information. 16.44 Section 16... Protection of Privacy and Access to Individual Records Under the Privacy Act of 1974 § 16.44 Classified information. In processing a request for access to a record containing information that is classified...

  9. 28 CFR 16.44 - Classified information.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 28 Judicial Administration 1 2012-07-01 2012-07-01 false Classified information. 16.44 Section 16... Protection of Privacy and Access to Individual Records Under the Privacy Act of 1974 § 16.44 Classified information. In processing a request for access to a record containing information that is classified...

  10. 12 CFR 1301.9 - Classified information.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 12 Banks and Banking 9 2013-01-01 2013-01-01 false Classified information. 1301.9 Section 1301.9 Banks and Banking FINANCIAL STABILITY OVERSIGHT COUNCIL FREEDOM OF INFORMATION § 1301.9 Classified information. (a) Referrals of requests for classified information. Whenever a request is made for a...

  11. 6 CFR 5.24 - Classified information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 6 Domestic Security 1 2014-01-01 2014-01-01 false Classified information. 5.24 Section 5.24... INFORMATION Privacy Act § 5.24 Classified information. In processing a request for access to a record containing information that is classified under Executive Order 12958 or any other executive order,...

  12. 32 CFR 148.2 - Classified programs.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 32 National Defense 1 2013-07-01 2013-07-01 false Classified programs. 148.2 Section 148.2... Inspections of Facilities § 148.2 Classified programs. Once a facility is authorized, approved, certified, or accredited, all U.S. Government organizations desiring to conduct classified programs at the facility at...

  13. 28 CFR 16.7 - Classified information.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Procedures for Disclosure of Records Under the Freedom of Information Act § 16.7 Classified information. In processing a request for information that is classified under Executive Order 12958 (3 CFR, 1996 Comp., p... 28 Judicial Administration 1 2013-07-01 2013-07-01 false Classified information. 16.7 Section...

  14. 32 CFR 775.5 - Classified actions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 5 2011-07-01 2011-07-01 false Classified actions. 775.5 Section 775.5 National... IMPLEMENTING THE NATIONAL ENVIRONMENTAL POLICY ACT § 775.5 Classified actions. (a) The fact that a proposed action is of a classified nature does not relieve the proponent of the action from complying with...

  15. 28 CFR 61.8 - Classified proposals.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Classified proposals. 61.8 Section 61.8... ENVIRONMENTAL POLICY ACT Implementing Procedures § 61.8 Classified proposals. If an environmental document includes classified matter, a version containing only unclassified material shall be prepared unless...

  16. 12 CFR 1070.19 - Classified information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 9 2014-01-01 2014-01-01 false Classified information. 1070.19 Section 1070.19... of Information Act § 1070.19 Classified information. Whenever a request is made for a record containing information that another agency has classified, or which may be appropriate for classification...

  17. 28 CFR 16.7 - Classified information.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Procedures for Disclosure of Records Under the Freedom of Information Act § 16.7 Classified information. In processing a request for information that is classified under Executive Order 12958 (3 CFR, 1996 Comp., p... 28 Judicial Administration 1 2011-07-01 2011-07-01 false Classified information. 16.7 Section...

  18. 28 CFR 16.7 - Classified information.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Procedures for Disclosure of Records Under the Freedom of Information Act § 16.7 Classified information. In processing a request for information that is classified under Executive Order 12958 (3 CFR, 1996 Comp., p... 28 Judicial Administration 1 2014-07-01 2014-07-01 false Classified information. 16.7 Section...

  19. 28 CFR 16.7 - Classified information.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Procedures for Disclosure of Records Under the Freedom of Information Act § 16.7 Classified information. In processing a request for information that is classified under Executive Order 12958 (3 CFR, 1996 Comp., p... 28 Judicial Administration 1 2012-07-01 2012-07-01 false Classified information. 16.7 Section...

  20. 32 CFR 148.2 - Classified programs.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 1 2012-07-01 2012-07-01 false Classified programs. 148.2 Section 148.2... Inspections of Facilities § 148.2 Classified programs. Once a facility is authorized, approved, certified, or accredited, all U.S. Government organizations desiring to conduct classified programs at the facility at...

  1. 6 CFR 5.7 - Classified information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Freedom of Information Act § 5.7 Classified information. In processing a request for information that is classified under Executive Order 12958 (3 CFR, 1996 Comp., p. 333) or any other executive order, the... 6 Domestic Security 1 2010-01-01 2010-01-01 false Classified information. 5.7 Section 5.7...

  2. 15 CFR 4.8 - Classified Information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Classified Information. 4.8 Section 4... INFORMATION Freedom of Information Act § 4.8 Classified Information. In processing a request for information..., the information shall be reviewed to determine whether it should remain classified. Ordinarily...

  3. Entropic One-Class Classifiers.

    PubMed

    Livi, Lorenzo; Sadeghian, Alireza; Pedrycz, Witold

    2015-12-01

    The one-class classification problem is a well-known research endeavor in pattern recognition. The problem is also known under different names, such as outlier and novelty/anomaly detection. The core of the problem consists in modeling and recognizing patterns belonging only to a so-called target class. All other patterns are termed nontarget, and therefore, they should be recognized as such. In this paper, we propose a novel one-class classification system that is based on an interplay of different techniques. Primarily, we follow a dissimilarity representation-based approach; we embed the input data into the dissimilarity space (DS) by means of an appropriate parametric dissimilarity measure. This step allows us to process virtually any type of data. The dissimilarity vectors are then represented by weighted Euclidean graphs, which we use to determine the entropy of the data distribution in the DS and at the same time to derive effective decision regions that are modeled as clusters of vertices. Since the dissimilarity measure for the input data is parametric, we optimize its parameters by means of a global optimization scheme, which considers both mesoscopic and structural characteristics of the data represented through the graphs. The proposed one-class classifier is designed to provide both hard (Boolean) and soft decisions about the recognition of test patterns, allowing an accurate description of the classification process. We evaluate the performance of the system on different benchmarking data sets, containing either feature-based or structured patterns. Experimental results demonstrate the effectiveness of the proposed technique. PMID:25879977

  4. A configurable-hardware document-similarity classifier to detect web attacks.

    SciTech Connect

    Ulmer, Craig D.; Gokhale, Maya

    2010-04-01

    This paper describes our approach to adapting a text document similarity classifier based on the Term Frequency Inverse Document Frequency (TFIDF) metric to reconfigurable hardware. The TFIDF classifier is used to detect web attacks in HTTP data. In our reconfigurable hardware approach, we design a streaming, real-time classifier by simplifying an existing sequential algorithm and manipulating the classifier's model to allow decision information to be represented compactly. We have developed a set of software tools to help automate the process of converting training data to synthesizable hardware and to provide a means of trading off between accuracy and resource utilization. The Xilinx Virtex 5-LX implementation requires two orders of magnitude less memory than the original algorithm. At 166MB/s (80X the software) the hardware implementation is able to achieve Gigabit network throughput at the same accuracy as the original algorithm.
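    As a rough software illustration of the TFIDF similarity idea described above (not the paper's reconfigurable-hardware design), the following Python sketch classifies HTTP request strings by cosine similarity of TFIDF vectors; the sample requests, labels, and parameter choices are hypothetical placeholders.

        # Minimal TFIDF similarity classifier sketch; variable names and data are illustrative only.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.neighbors import KNeighborsClassifier

        # Hypothetical training data: raw HTTP request strings and attack/benign labels.
        train_requests = ["GET /index.html HTTP/1.1", "GET /../../etc/passwd HTTP/1.1"]
        labels = ["benign", "attack"]

        vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 3))
        X = vectorizer.fit_transform(train_requests)

        # Cosine similarity on TFIDF vectors via a nearest-neighbour rule.
        clf = KNeighborsClassifier(n_neighbors=1, metric="cosine")
        clf.fit(X, labels)

        print(clf.predict(vectorizer.transform(["GET /../../etc/shadow HTTP/1.1"])))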

  5. Wildfire smoke detection using temporospatial features and random forest classifiers

    NASA Astrophysics Data System (ADS)

    Ko, Byoungchul; Kwak, Joon-Young; Nam, Jae-Yeal

    2012-01-01

    We propose a wildfire smoke detection algorithm that uses temporospatial visual features and random forest (RF) classifiers, which are ensembles of decision trees. In general, wildfire smoke detection is particularly important for early warning systems because smoke is usually generated before flames; in addition, smoke can be detected from a long distance owing to its diffusion characteristics. In order to detect wildfire smoke using a video camera, temporospatial characteristics such as color, wavelet coefficients, motion orientation, and a histogram of oriented gradients are extracted from the preceding 100 corresponding frames and the current keyframe. Two RFs are then trained using independent temporal and spatial feature vectors. Finally, a candidate block is declared a smoke block if the averaged probability of the two RFs is highest for the smoke class. The proposed algorithm was successfully applied to various wildfire-smoke and smoke-colored videos and performed better than other related algorithms.
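    A minimal Python sketch of the two-forest decision rule described above (two random forests trained on separate temporal and spatial features, with their class probabilities averaged); the feature arrays and labels are random placeholders, not the paper's data.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        X_temporal = rng.normal(size=(200, 16))   # e.g. motion/wavelet features (synthetic)
        X_spatial = rng.normal(size=(200, 24))    # e.g. colour/HOG features (synthetic)
        y = rng.integers(0, 2, size=200)          # 1 = smoke block, 0 = non-smoke block

        rf_t = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_temporal, y)
        rf_s = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_spatial, y)

        def classify_block(x_t, x_s):
            """Average the class-probability outputs of the two forests and pick the winner."""
            p = (rf_t.predict_proba([x_t])[0] + rf_s.predict_proba([x_s])[0]) / 2.0
            return int(np.argmax(p))

        print(classify_block(X_temporal[0], X_spatial[0]))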

  6. A binary ant colony optimization classifier for molecular activities.

    PubMed

    Hammann, Felix; Suenderhauf, Claudia; Huwyler, Jörg

    2011-10-24

    Chemical fingerprints encode the presence or absence of molecular features and are available in many large databases. Using a variation of the Ant Colony Optimization (ACO) paradigm, we describe a binary classifier based on feature selection from fingerprints. We discuss the algorithm and possible cross-validation procedures. As a real-world example, we use our algorithm to analyze a Plasmodium falciparum inhibition assay and contrast its performance with other machine learning paradigms in use today (decision tree induction, random forests, support vector machines, artificial neural networks). Our algorithm matches established paradigms in predictive power, yet supplies the medicinal chemist and basic researcher with easily interpretable results. Furthermore, models generated with our paradigm are easy to implement and can complement virtual screenings by additionally exploiting the precalculated fingerprint information. PMID:21854036

  7. Regularized logistic regression and multiobjective variable selection for classifying MEG data.

    PubMed

    Santana, Roberto; Bielza, Concha; Larrañaga, Pedro

    2012-09-01

    This paper addresses the question of maximizing classifier accuracy for classifying task-related mental activity from Magnetoencelophalography (MEG) data. We propose the use of different sources of information and introduce an automatic channel selection procedure. To determine an informative set of channels, our approach combines a variety of machine learning algorithms: feature subset selection methods, classifiers based on regularized logistic regression, information fusion, and multiobjective optimization based on probabilistic modeling of the search space. The experimental results show that our proposal is able to improve classification accuracy compared to approaches whose classifiers use only one type of MEG information or for which the set of channels is fixed a priori. PMID:22854976
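    As a simplified illustration of one ingredient above, the sketch below uses an L1-regularised logistic regression as a channel/feature selector; the data are synthetic and the regularisation strength is an arbitrary example value, not the paper's configuration.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        X = rng.normal(size=(120, 50))          # 120 trials x 50 MEG-channel features (synthetic)
        y = rng.integers(0, 2, size=120)        # two mental-activity classes (placeholder labels)

        clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
        clf.fit(X, y)

        selected = np.flatnonzero(clf.coef_[0])  # channels kept by the L1 penalty
        print(f"{selected.size} of 50 features have non-zero weights")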

  8. Development of the Landsat Data Continuity Mission Cloud Cover Assessment Algorithms

    USGS Publications Warehouse

    Scaramuzza, Pat; Bouchard, M.A.; Dwyer, J.L.

    2012-01-01

    The upcoming launch of the Operational Land Imager (OLI) will start the next era of the Landsat program. However, the Automated Cloud-Cover Assessment (CCA) (ACCA) algorithm used on Landsat 7 requires a thermal band and is thus not suited for OLI. There will be a thermal instrument on the Landsat Data Continuity Mission (LDCM), the Thermal Infrared Sensor, which may not be available during all OLI collections. This illustrates a need for CCA for LDCM in the absence of thermal data. To research possibilities for full-resolution OLI cloud assessment, a global data set of 207 Landsat 7 scenes with manually generated cloud masks was created. It was used to evaluate the ACCA algorithm, showing that the algorithm correctly classified 79.9% of a standard test subset of 3.95 × 10⁹ pixels. The data set was also used to develop and validate two successor algorithms for use with OLI data: one derived from an off-the-shelf machine learning package and one based on ACCA but enhanced by a simple neural network. These comprehensive CCA algorithms were shown to correctly classify pixels as cloudy or clear 88.5% and 89.7% of the time, respectively.

  9. Classifier mills for coal grinding and drying

    SciTech Connect

    Galk, J.; Peukert, W.

    1995-12-31

    This report presents a special air classifier mill for coal grinding. Air classifier mills combine the two fundamental process steps, grinding and classifying, in one machine. An essential advantage is the independent operation of the grinding rotor speed and the classifier rotor speed, which offers good control of the produced particle size distribution and great flexibility in process control. Using an air classifier mill for grinding coal followed by direct injection into the firing chamber allows good control of burnout. Another advantage is that drying of the coal can take place in parallel by heating the process air passing through the classifier mill. In this report an air classifier mill, some typical process data, possible throughput, and an industrial application are presented.

  10. Standardizing the Protocol for Hemispherical Photographs: Accuracy Assessment of Binarization Algorithms

    PubMed Central

    Glatthorn, Jonas; Beckschäfer, Philip

    2014-01-01

    Hemispherical photography is a well-established method to optically assess ecological parameters related to plant canopies; e.g. ground-level light regimes and the distribution of foliage within the crown space. Interpreting hemispherical photographs involves classifying pixels as either sky or vegetation. A wide range of automatic thresholding or binarization algorithms exists to classify the photographs. The variety in methodology hampers the ability to compare results across studies. To identify an optimal threshold selection method, this study assessed the accuracy of seven binarization methods implemented in software currently available for the processing of hemispherical photographs. To this end, binarizations obtained by the algorithms were compared to reference data generated through a manual binarization of a stratified random selection of pixels. This approach was adopted from the accuracy assessment of map classifications known from remote sensing studies. Percentage correct (PC) and kappa statistics (κ) were calculated. The accuracy of the algorithms was assessed for photographs taken with automatic exposure settings (auto-exposure) and photographs taken with settings which avoid overexposure (histogram-exposure). In addition, gap fraction values derived from hemispherical photographs were compared with estimates derived from the manually classified reference pixels. All tested algorithms were shown to be sensitive to overexposure. Three of the algorithms showed an accuracy which was high enough to be recommended for the processing of histogram-exposed hemispherical photographs: “Minimum” (PC 98.8%; κ 0.952), “Edge Detection” (PC 98.1%; κ 0.950), and “Minimum Histogram” (PC 98.1%; κ 0.947). The Minimum algorithm overestimated gap fraction least of all (11%). The overestimation by the algorithms Edge Detection (63%) and Minimum Histogram (67%) was considerably larger. For the remaining four evaluated algorithms (IsoData, Maximum Entropy, MinError, and Otsu) an
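    The sketch below illustrates the general workflow: binarise a grey-level image with Otsu's method (one of the evaluated algorithms) and score the result against manually classified reference pixels using percentage correct and Cohen's kappa. The image and reference mask are synthetic placeholders, not the study's photographs.

        import numpy as np
        from skimage.filters import threshold_otsu
        from sklearn.metrics import cohen_kappa_score

        rng = np.random.default_rng(2)
        image = rng.integers(0, 256, size=(100, 100))          # fake 8-bit grey-level photo
        reference = (image > 128).astype(int)                  # fake manual reference labels

        threshold = threshold_otsu(image)
        binary = (image > threshold).astype(int)               # 1 = sky, 0 = vegetation

        pc = (binary == reference).mean() * 100.0
        kappa = cohen_kappa_score(reference.ravel(), binary.ravel())
        print(f"percentage correct = {pc:.1f}%, kappa = {kappa:.3f}")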

  11. A hybrid classifier using the parallelepiped and Bayesian techniques. [for multispectral image data

    NASA Technical Reports Server (NTRS)

    Addington, J. D.

    1975-01-01

    A versatile classification scheme is developed which uses the best features of the parallelepiped algorithm and the Bayesian maximum likelihood algorithm. The parallelepiped technique has the advantage of being very fast, especially when implemented into a table look-up scheme; its disadvantage is its inability to distinguish and classify spectral signatures which are similar in nature. This disadvantage is eliminated by the Bayesian technique which is capable of distinguishing subtle differences very well. The hybrid algorithm developed reduces computer time by as much as 90%. A two- and n-dimensional description of the hybrid classifier is given.
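    A minimal sketch of the hybrid idea described above: a fast parallelepiped (bounding-box) test handles unambiguous pixels, and pixels falling in zero or several boxes fall back to a Gaussian maximum-likelihood decision. The class statistics and pixel values are illustrative, not from the paper.

        import numpy as np
        from scipy.stats import multivariate_normal

        def fit_class(samples):
            return {
                "lo": samples.min(axis=0), "hi": samples.max(axis=0),
                "mean": samples.mean(axis=0), "cov": np.cov(samples, rowvar=False),
            }

        def classify(pixel, classes):
            # Fast path: pixel falls inside exactly one class box.
            inside = [k for k, c in classes.items()
                      if np.all(pixel >= c["lo"]) and np.all(pixel <= c["hi"])]
            if len(inside) == 1:
                return inside[0]
            # Ambiguous (several boxes) or no box: Bayesian maximum likelihood.
            candidates = inside or list(classes)
            return max(candidates,
                       key=lambda k: multivariate_normal.logpdf(
                           pixel, classes[k]["mean"], classes[k]["cov"]))

        rng = np.random.default_rng(3)
        classes = {
            "water": fit_class(rng.normal([10, 20], 2, size=(50, 2))),
            "forest": fit_class(rng.normal([40, 60], 5, size=(50, 2))),
        }
        print(classify(np.array([12.0, 21.0]), classes))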

  12. Correction algorithm for finite sample statistics.

    PubMed

    Pöschel, T; Ebeling, W; Frömmel, C; Ramírez, R

    2003-12-01

    Assume that in a sample of size M one finds M_i representatives of species i, with i = 1..N*. The normalized frequency p_i* ≡ M_i/M, based on the finite sample, may deviate considerably from the true probabilities p_i. We propose a method to infer rank-ordered true probabilities r_i from the measured frequencies M_i. We show that the rank-ordered probabilities provide important information on the system, e.g., the true number of species and the Shannon and Rényi entropies. PMID:15007750
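    The snippet below only illustrates the finite-sample bias the paper addresses: the plug-in Shannon entropy computed from p_i* = M_i/M underestimates the true entropy. It applies the classical Miller-Madow correction, a standard bias correction, and is not the rank-ordering method proposed in the paper.

        import numpy as np

        rng = np.random.default_rng(4)
        true_p = np.full(50, 1 / 50)                     # 50 equiprobable species
        sample = rng.choice(50, size=200, p=true_p)      # small sample, M = 200
        counts = np.bincount(sample, minlength=50)

        p_hat = counts / counts.sum()                    # p_i* = M_i / M
        nonzero = p_hat[p_hat > 0]
        h_plugin = -(nonzero * np.log(nonzero)).sum()
        # Miller-Madow correction: add (observed species - 1) / (2 M).
        h_miller_madow = h_plugin + (len(nonzero) - 1) / (2 * counts.sum())

        print(f"true H = {np.log(50):.3f}, plug-in = {h_plugin:.3f}, "
              f"Miller-Madow = {h_miller_madow:.3f}")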

  13. Development and Testing of Data Mining Algorithms for Earth Observation

    NASA Technical Reports Server (NTRS)

    Glymour, Clark

    2005-01-01

    The new algorithms developed under this project included a principled procedure for classification of objects, events or circumstances according to a target variable when a very large number of potential predictor variables is available but the number of cases that can be used for training a classifier is relatively small. These "high dimensional" problems require finding a minimal set of variables, called the Markov Blanket, sufficient for predicting the value of the target variable. An algorithm, the Markov Blanket Fan Search, was developed, implemented and tested on both simulated and real data in conjunction with a graphical model classifier, which was also implemented. Another algorithm developed and implemented in TETRAD IV for time series elaborated on work by C. Granger and N. Swanson, which in turn exploited some of our earlier work. The algorithms in question learn a linear time series model from data. Given such a time series, the simultaneous residual covariances, after factoring out time dependencies, may provide information about causal processes that occur more rapidly than the time series representation allows, so-called simultaneous or contemporaneous causal processes. Working with A. Monetta, a graduate student from Italy, we produced the correct statistics for estimating the contemporaneous causal structure from time series data using the TETRAD IV suite of algorithms. Two economists, David Bessler and Kevin Hoover, have independently published applications using TETRAD-style algorithms for the same purpose. These implementations and algorithmic developments were separately used in two kinds of studies of climate data: short time series of geographically proximate climate variables predicting agricultural effects in California, and longer-duration climate measurements of temperature teleconnections.

  14. Cancer Classification in Microarray Data using a Hybrid Selective Independent Component Analysis and υ-Support Vector Machine Algorithm.

    PubMed

    Saberkari, Hamidreza; Shamsi, Mousa; Joroughi, Mahsa; Golabi, Faegheh; Sedaaghi, Mohammad Hossein

    2014-10-01

    Microarray data play an important role in the identification and classification of cancer tissues. The small number of microarray samples available in cancer research is a persistent concern that complicates classifier design. For this reason, preprocessing gene selection techniques should be applied before classification to remove noninformative genes from the microarray data. An appropriate gene selection method can significantly improve the performance of cancer classification. In this paper, we use selective independent component analysis (SICA) to reduce the dimension of microarray data. This selective algorithm avoids the instability problem that occurs with conventional independent component analysis (ICA) methods. First, the reconstruction error and the selective set are analyzed, retaining the independent components of each gene that contribute only a small part of the error when reconstructing a new sample. Then, several modified support vector machine (υ-SVM) sub-classifiers are trained simultaneously. Finally, the sub-classifier with the highest recognition rate is selected. The proposed algorithm is applied to three cancer datasets (leukemia, breast cancer and lung cancer), and its results are compared with other existing methods. The results illustrate that the proposed algorithm (SICA + υ-SVM) achieves higher accuracy and validity; for example, it exhibits a relative improvement of 3.3% in correctness rate over the ICA + SVM and SVM algorithms on the lung cancer dataset. PMID:25426433
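    A minimal sketch of the general ICA-then-ν-SVM pipeline, with plain FastICA standing in for the paper's selective ICA step; the expression matrix and labels are synthetic placeholders and the component count and ν value are arbitrary example settings.

        import numpy as np
        from sklearn.decomposition import FastICA
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import NuSVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(5)
        X = rng.normal(size=(72, 500))        # e.g. 72 samples x 500 gene expressions (synthetic)
        y = rng.integers(0, 2, size=72)       # tumour vs normal labels (placeholder)

        # Reduce dimension with ICA, then classify with a nu-SVM.
        model = make_pipeline(FastICA(n_components=10, random_state=0),
                              NuSVC(nu=0.3, kernel="rbf"))
        scores = cross_val_score(model, X, y, cv=5)
        print(f"cross-validated accuracy: {scores.mean():.2f}")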

  15. Bayesian technique for image classifying registration.

    PubMed

    Hachama, Mohamed; Desolneux, Agnès; Richard, Frédéric J P

    2012-09-01

    In this paper, we address a complex image registration issue arising while the dependencies between intensities of images to be registered are not spatially homogeneous. Such a situation is frequently encountered in medical imaging when a pathology present in one of the images modifies locally intensity dependencies observed on normal tissues. Usual image registration models, which are based on a single global intensity similarity criterion, fail to register such images, as they are blind to local deviations of intensity dependencies. Such a limitation is also encountered in contrast-enhanced images where there exist multiple pixel classes having different properties of contrast agent absorption. In this paper, we propose a new model in which the similarity criterion is adapted locally to images by classification of image intensity dependencies. Defined in a Bayesian framework, the similarity criterion is a mixture of probability distributions describing dependencies on two classes. The model also includes a class map which locates pixels of the two classes and weighs the two mixture components. The registration problem is formulated both as an energy minimization problem and as a maximum a posteriori estimation problem. It is solved using a gradient descent algorithm. In the problem formulation and resolution, the image deformation and the class map are estimated simultaneously, leading to an original combination of registration and classification that we call image classifying registration. Whenever sufficient information about class location is available in applications, the registration can also be performed on its own by fixing a given class map. Finally, we illustrate the interest of our model on two real applications from medical imaging: template-based segmentation of contrast-enhanced images and lesion detection in mammograms. We also conduct an evaluation of our model on simulated medical data and show its ability to take into account spatial variations

  16. Heuristic ternary error-correcting output codes via weight optimization and layered clustering-based approach.

    PubMed

    Zhang, Xiao-Lei

    2015-02-01

    One important classifier ensemble for multiclass classification problems is error-correcting output codes (ECOCs). ECOC bridges multiclass problems and binary-class classifiers by decomposing a multiclass problem into a series of binary-class problems. In this paper, we present a heuristic ternary code, named weight optimization and layered clustering-based ECOC (WOLC-ECOC). It starts with an arbitrary valid ECOC and iterates the following two steps until the training risk converges. The first step, named layered clustering-based ECOC (LC-ECOC), constructs multiple strong classifiers on the most confusing binary-class problem. The second step adds the new classifiers to the ECOC by a novel optimized weighted (OW) decoding algorithm, where the optimization problem of the decoding is solved by the cutting plane algorithm. Technically, LC-ECOC keeps the heuristic training process from being blocked by a difficult binary-class problem, while OW decoding guarantees that the training risk does not increase, ensuring a small code length. Results on 14 UCI datasets and a music genre classification problem demonstrate the effectiveness of WOLC-ECOC. PMID:25486660
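    For orientation, the sketch below shows a plain random-code ECOC ensemble built with scikit-learn's OutputCodeClassifier; this is the standard ECOC decomposition, not the WOLC-ECOC heuristic described above, and the base classifier and code size are arbitrary example choices.

        from sklearn.datasets import load_iris
        from sklearn.multiclass import OutputCodeClassifier
        from sklearn.svm import LinearSVC
        from sklearn.model_selection import cross_val_score

        X, y = load_iris(return_X_y=True)
        # Each class gets a random binary code word; one binary classifier per code bit.
        ecoc = OutputCodeClassifier(LinearSVC(), code_size=2.0, random_state=0)
        print(cross_val_score(ecoc, X, y, cv=5).mean())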

  17. Classifying rock lithofacies using petrophysical data

    NASA Astrophysics Data System (ADS)

    Al-Omair, Osamah; Garrouch, Ali A.

    2010-09-01

    This study automates a type-curve technique for estimating the rock pore-geometric factor (λ) from capillary pressure measurements. The pore-geometric factor is determined by matching the actual rock capillary pressure versus wetting-phase saturation (Pc-Sw) profile with that obtained from the Brooks and Corey model (1966 J. Irrigation Drainage Proc. Am. Soc. Civ. Eng. 61-88). The pore-geometric factor values are validated by comparing the actual measured rock permeability to the permeability values estimated using the Wyllie and Gardner model (1958 World Oil (April issue) 210-28). Petrophysical data for both carbonate and sandstone rocks, along with the pore-geometric factor derived from the type-curve matching, are used in a discriminant analysis for the purpose of developing a model for rock typing. The petrophysical parameters include rock porosity (φ), irreducible water saturation (Swi), permeability (k), the threshold capillary-entry-pressure (Pd), a pore-shape factor (β), and a flow-impedance parameter (n) which is a property that reflects the flow impedance caused by the irreducible wetting-phase saturation. The results of the discriminant analysis indicate that five of the parameters (φ, k, Pd, λ and n) are sufficient for classifying rocks according to two broad lithology classes: sandstones and carbonates. The analysis reveals the existence of a significant discriminant function that is mostly sensitive to the pore-geometric factor values (λ). A discriminant-analysis classification model that honours both static and dynamic petrophysical rock properties is, therefore, introduced. When tested on two distinct data sets, the discriminant-analysis model was able to predict the correct lithofacies for approximately 95% of the tested samples. A comprehensive database of the experimentally collected petrophysical properties of 215 carbonate and sandstone rocks is provided with this study.
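    A minimal sketch of the curve-matching step: fitting the standard Brooks-Corey relation Pc = Pd · Se^(−1/λ), with Se = (Sw − Swi)/(1 − Swi), to a measured Pc-Sw profile to recover the pore-geometric factor λ. The data points, Swi value, and parameter bounds are synthetic placeholders, not the study's measurements.

        import numpy as np
        from scipy.optimize import curve_fit

        Swi = 0.15                                        # irreducible wetting-phase saturation

        def brooks_corey(Sw, Pd, lam):
            Se = (Sw - Swi) / (1.0 - Swi)                 # effective saturation
            return Pd * Se ** (-1.0 / lam)

        Sw = np.linspace(0.25, 0.95, 10)                  # wetting-phase saturation points
        Pc = brooks_corey(Sw, Pd=5.0, lam=2.0) * (
            1 + 0.02 * np.random.default_rng(6).normal(size=10))   # noisy synthetic profile

        (Pd_fit, lam_fit), _ = curve_fit(brooks_corey, Sw, Pc, p0=[1.0, 1.0],
                                         bounds=([0.01, 0.1], [100.0, 10.0]))
        print(f"threshold pressure = {Pd_fit:.2f}, pore-geometric factor = {lam_fit:.2f}")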

  18. Statistical and Machine-Learning Classifier Framework to Improve Pulse Shape Discrimination System Design

    SciTech Connect

    Wurtz, R.; Kaplan, A.

    2015-10-28

    Pulse shape discrimination (PSD) is a variety of statistical classifier. Fully realized statistical classifiers rely on a comprehensive set of tools for design, construction, and implementation. PSD advances rely on improvements to the implemented algorithm and can benefit from conventional statistical-classifier and machine-learning methods. This paper provides the reader with a glossary of classifier-building elements and their functions in a fully designed and operational classifier framework that can be used to discover opportunities for improving PSD classifier projects. It recommends reporting the PSD classifier's receiver operating characteristic (ROC) curve and its behavior at a gamma rejection rate (GRR) relevant for realistic applications.
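    As an illustration of the reporting recommended above, the sketch below computes a ROC curve for a PSD-style score and reads off the neutron acceptance at a chosen gamma rejection rate; the scores and class proportions are synthetic placeholders.

        import numpy as np
        from sklearn.metrics import roc_curve

        rng = np.random.default_rng(7)
        # 1 = neutron, 0 = gamma; the PSD score is higher for neutrons on average.
        y_true = np.concatenate([np.ones(500), np.zeros(5000)])
        scores = np.concatenate([rng.normal(1.0, 0.3, 500), rng.normal(0.0, 0.3, 5000)])

        fpr, tpr, thresholds = roc_curve(y_true, scores)
        grr = 0.999                                  # target gamma rejection rate (example value)
        idx = np.searchsorted(fpr, 1.0 - grr, side="right") - 1
        print(f"neutron acceptance at {grr:.1%} gamma rejection: {tpr[idx]:.2%}")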

  19. A hybrid classifier for remote sensing applications.

    PubMed

    Ruppert, G S; Schardt, M; Balzuweit, G; Hussain, M

    1997-02-01

    This paper presents a hybrid (unsupervised and supervised) classifier for land use classification of remote sensing images. The entire satellite image is quantized by an unsupervised Neural Gas process and the resulting codebook is labeled by a supervised majority voting process using the ground truth. The performance of the classifier is similar to that of Maximum Likelihood and only slightly worse than Multilayer Perceptrons, while training and classifying require no expert knowledge after collecting the ground truth. The hybrid classifier is much better suited to classifications with complex, non-normally distributed classes than Maximum Likelihood. The main advantage of the Neural Gas classifier, however, is that it requires much less user interaction than other classifiers, especially Maximum Likelihood. PMID:9228578
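    A rough sketch of the hybrid scheme: an unsupervised quantiser builds a codebook and each codebook vector is labelled by majority vote over the ground truth of the samples assigned to it. KMeans stands in for the Neural Gas quantiser here, and the pixel data are random placeholders.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(8)
        X = rng.normal(size=(1000, 6))              # pixels x spectral bands (synthetic)
        y = rng.integers(0, 4, size=1000)           # ground-truth land-use classes (placeholder)

        # Unsupervised step: learn a 32-vector codebook.
        km = KMeans(n_clusters=32, n_init=10, random_state=0).fit(X)

        # Supervised step: label each codebook vector by majority vote of its pixels.
        codebook_label = np.array([np.bincount(y[km.labels_ == k]).argmax()
                                   for k in range(32)])

        def classify(pixels):
            return codebook_label[km.predict(pixels)]

        print(classify(X[:5]), y[:5])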

  20. Validating chronic disease ascertainment algorithms for use in the Canadian longitudinal study on aging.

    PubMed

    Oremus, Mark; Postuma, Ronald; Griffith, Lauren; Balion, Cynthia; Wolfson, Christina; Kirkland, Susan; Patterson, Christopher; Shannon, Harry S; Raina, Parminder

    2013-09-01

    We validated seven chronic disease ascertainment algorithms for use in the Canadian Longitudinal Study on Aging. The algorithms pertained to diabetes mellitus type 2, parkinsonism, chronic airflow obstruction (CAO), hand osteoarthritis (OA), hip OA, knee OA, and ischemic heart disease. Our target recruitment was 20 cases and controls per disease; some cases were controls for unrelated diseases. Participants completed interviewer-administered disease symptom and medication use questionnaires. Diabetes cases and controls underwent fasting glucose testing; CAO cases and controls underwent spirometry testing. For each disease, the appropriate algorithm was used to classify participants' disease status (positive or negative for disease). We also calculated sensitivity and specificity using physician diagnosis as the reference standard. The final sample involved 176 participants recruited in three Canadian cities between 2009 and 2011. Most estimated sensitivities and specificities were 80 per cent or more, indicating that the seven algorithms correctly identified individuals with the target disease. PMID:23924995
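    A small sketch of the validation arithmetic used above: sensitivity and specificity of an algorithm's classifications against a physician-diagnosis reference standard. The counts below are toy values, not the study's data.

        from sklearn.metrics import confusion_matrix

        reference = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]   # physician diagnosis (1 = disease present)
        algorithm = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]   # algorithm classification

        tn, fp, fn, tp = confusion_matrix(reference, algorithm).ravel()
        sensitivity = tp / (tp + fn)                 # cases correctly identified
        specificity = tn / (tn + fp)                 # controls correctly identified
        print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")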

  1. Confidence measure and performance evaluation for HRRR-based classifiers

    NASA Astrophysics Data System (ADS)

    Rago, Constantino; Zajic, Tim; Huff, Melvyn; Mehra, Raman K.; Mahler, Ronald P. S.; Noviskey, Michael J.

    2002-07-01

    The work presented here is a continuation of research first reported in Mahler et al. Our earlier efforts included integrating the Statistical Features algorithm with a Bayesian nonlinear filter, allowing simultaneous determination of target position, velocity, pose and type via maximum a posteriori estimation. We then considered three alternative classifiers: the first based on a principal component decomposition, the second on a linear discriminant approach, and the third on a wavelet representation. In addition, preliminary results were given with regard to assigning a measure of confidence to the output of the wavelet-based classifier. In this paper we continue to address the problem of target classification based on high range resolution radar signatures. In particular, we examine the performance of a variant of the principal component based classifier as the number of principal components is varied. We have chosen to quantify the performance in terms of the Bhattacharyya distance. We also present further results regarding the assignment of confidence values to the output of the wavelet-based classifier.
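    For reference, the snippet below evaluates the Bhattacharyya distance between two multivariate Gaussian class models, the separability measure named above; the means and covariances are illustrative values only.

        import numpy as np

        def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
            """Bhattacharyya distance between two Gaussians N(mu1, cov1) and N(mu2, cov2)."""
            cov = (cov1 + cov2) / 2.0
            diff = mu1 - mu2
            term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
            term2 = 0.5 * np.log(np.linalg.det(cov) /
                                 np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
            return term1 + term2

        mu1, mu2 = np.array([0.0, 0.0]), np.array([1.0, 0.5])
        cov1, cov2 = np.eye(2), 1.5 * np.eye(2)
        print(bhattacharyya_gaussian(mu1, cov1, mu2, cov2))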

  2. Weighted Hybrid Decision Tree Model for Random Forest Classifier

    NASA Astrophysics Data System (ADS)

    Kulkarni, Vrushali Y.; Sinha, Pradeep K.; Petare, Manisha C.

    2016-06-01

    Random Forest is an ensemble, supervised machine learning algorithm. An ensemble generates many classifiers and combines their results by majority voting. Random Forest uses the decision tree as its base classifier. In decision tree induction, an attribute split/evaluation measure is used to decide the best split at each node of the decision tree. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation among them. The work presented in this paper is related to attribute split measures and is a two-step process: first, a theoretical study of the five selected split measures is done and a comparison matrix is generated to understand the pros and cons of each measure. These theoretical results are then verified by empirical analysis, in which a random forest is generated using each of the five selected split measures, chosen one at a time, i.e. a random forest using information gain, a random forest using gain ratio, etc. In the next step, based on this theoretical and empirical analysis, a new hybrid decision tree model for the random forest classifier is proposed. In this model, the individual decision trees in the random forest are generated using different split measures. The model is augmented by weighted voting based on the strength of each individual tree. The new approach has shown a notable increase in the accuracy of the random forest.

  3. Classifying Spectra Based on DLS and Rough Set

    NASA Astrophysics Data System (ADS)

    Qiu, Bo; Hu, Zhanyi; Zhao, Yongheng

    2003-01-01

    Until now, it has been difficult to identify different kinds of celestial bodies from their spectra, because doing so requires a great deal of astronomers' manual work of measuring, marking and identifying, which is generally very hard and time-consuming. With the exploding volume of spectral data from all kinds of telescopes, it is becoming more and more urgent to find a thoroughly automatic way to deal with this kind of problem. In fact, when we change our viewpoint, we find that it is a traditional problem in the pattern recognition field when considering the whole process of dealing with spectral signals: filtering noise, extracting features, constructing classifiers, etc. The main purpose of automatic classification and recognition of spectra in the LAMOST (Large Sky Area Multi-Object Fibre Spectroscopic Telescope) project is to identify a celestial body's type based only on its spectrum. For this purpose, one of the key steps is to establish a good model to describe all kinds of spectra so that excellent classifiers can be constructed. In this paper, we present a novel describing language to represent spectra. Then, based on this language, we use some algorithms to extract classifying rules from raw spectral datasets and construct classifiers to identify spectra using the rough set method. Compared with other methods, our technique is closer to the human way of thinking and, to some extent, efficient.

  4. Generating fuzzy rules for constructing interpretable classifier of diabetes disease.

    PubMed

    Settouti, Nesma; Chikh, M Amine; Saidi, Meryem

    2012-09-01

    Diabetes is a disease in which the body fails to regulate the amount of glucose it needs; it does not allow the body to produce or properly use insulin. Diabetes has widespread fallout, with a large number of people affected by it worldwide. In this paper, we demonstrate that a fuzzy c-means/neuro-fuzzy rule-based classifier of diabetes disease with acceptable interpretability can be obtained. The accuracy of the classifier is measured by the number of correctly recognized diabetes records, while its complexity is measured by the number of fuzzy rules extracted. Experimental results show that the proposed fuzzy classifier can achieve a good tradeoff between accuracy and interpretability. In addition, the basic structure of the fuzzy rules, which were automatically extracted from the UCI Machine Learning database, shows strong similarities to the rules applied by human experts. Results are compared to other approaches in the literature. The proposed approach gives a more compact, interpretable and accurate classifier. PMID:22895813

  5. Classifier Subset Selection for the Stacked Generalization Method Applied to Emotion Recognition in Speech.

    PubMed

    Álvarez, Aitor; Sierra, Basilio; Arruti, Andoni; López-Gil, Juan-Miguel; Garay-Vitoria, Nestor

    2015-01-01

    In this paper, a new supervised classification paradigm, called classifier subset selection for stacked generalization (CSS stacking), is presented to deal with speech emotion recognition. The new approach consists of an improvement of a bi-level multi-classifier system known as stacking generalization by means of an integration of an estimation of distribution algorithm (EDA) in the first layer to select the optimal subset from the standard base classifiers. The good performance of the proposed new paradigm was demonstrated over different configurations and datasets. First, several CSS stacking classifiers were constructed on the RekEmozio dataset, using some specific standard base classifiers and a total of 123 spectral, quality and prosodic features computed using in-house feature extraction algorithms. These initial CSS stacking classifiers were compared to other multi-classifier systems and the employed standard classifiers built on the same set of speech features. Then, new CSS stacking classifiers were built on RekEmozio using a different set of both acoustic parameters (extended version of the Geneva Minimalistic Acoustic Parameter Set (eGeMAPS)) and standard classifiers and employing the best meta-classifier of the initial experiments. The performance of these two CSS stacking classifiers was evaluated and compared. Finally, the new paradigm was tested on the well-known Berlin Emotional Speech database. We compared the performance of single, standard stacking and CSS stacking systems using the same parametrization of the second phase. All of the classifications were performed at the categorical level, including the six primary emotions plus the neutral one. PMID:26712757

  6. Classifier Subset Selection for the Stacked Generalization Method Applied to Emotion Recognition in Speech

    PubMed Central

    Álvarez, Aitor; Sierra, Basilio; Arruti, Andoni; López-Gil, Juan-Miguel; Garay-Vitoria, Nestor

    2015-01-01

    In this paper, a new supervised classification paradigm, called classifier subset selection for stacked generalization (CSS stacking), is presented to deal with speech emotion recognition. The new approach consists of an improvement of a bi-level multi-classifier system known as stacking generalization by means of an integration of an estimation of distribution algorithm (EDA) in the first layer to select the optimal subset from the standard base classifiers. The good performance of the proposed new paradigm was demonstrated over different configurations and datasets. First, several CSS stacking classifiers were constructed on the RekEmozio dataset, using some specific standard base classifiers and a total of 123 spectral, quality and prosodic features computed using in-house feature extraction algorithms. These initial CSS stacking classifiers were compared to other multi-classifier systems and the employed standard classifiers built on the same set of speech features. Then, new CSS stacking classifiers were built on RekEmozio using a different set of both acoustic parameters (extended version of the Geneva Minimalistic Acoustic Parameter Set (eGeMAPS)) and standard classifiers and employing the best meta-classifier of the initial experiments. The performance of these two CSS stacking classifiers was evaluated and compared. Finally, the new paradigm was tested on the well-known Berlin Emotional Speech database. We compared the performance of single, standard stacking and CSS stacking systems using the same parametrization of the second phase. All of the classifications were performed at the categorical level, including the six primary emotions plus the neutral one. PMID:26712757

  7. Unsupervised Online Classifier in Sleep Scoring for Sleep Deprivation Studies

    PubMed Central

    Libourel, Paul-Antoine; Corneyllie, Alexandra; Luppi, Pierre-Hervé; Chouvet, Guy; Gervasoni, Damien

    2015-01-01

    Study Objective: This study was designed to evaluate an unsupervised adaptive algorithm for real-time detection of sleep and wake states in rodents. Design: We designed a Bayesian classifier that automatically extracts electroencephalogram (EEG) and electromyogram (EMG) features and categorizes non-overlapping 5-s epochs into one of the three major sleep and wake states without any human supervision. This sleep-scoring algorithm is coupled online with a new device to perform selective paradoxical sleep deprivation (PSD). Settings: Controlled laboratory settings for chronic polygraphic sleep recordings and selective PSD. Participants: Ten adult Sprague-Dawley rats instrumented for chronic polysomnographic recordings. Measurements: The performance of the algorithm is evaluated by comparison with the score obtained by a human expert reader. Online detection of PS is then validated with a PSD protocol with a duration of 72 hours. Results: Our algorithm gave high concordance with human scoring, with an average κ coefficient > 70%. Notably, the specificity to detect PS reached 92%. Selective PSD using real-time detection of PS strongly reduced PS amounts, leaving only brief PS bouts necessary for the detection of PS in EEG and EMG signals (4.7 ± 0.7% over 72 h, versus 8.9 ± 0.5% in baseline), and was followed by a significant PS rebound (23.3 ± 3.3% over 150 minutes). Conclusions: Our fully unsupervised data-driven algorithm overcomes some limitations of other automated methods, such as the selection of representative descriptors or threshold settings. When used online and coupled with our sleep deprivation device, it represents a better option for selective PSD than other methods such as tedious gentle handling or the platform method. Citation: Libourel PA, Corneyllie A, Luppi PH, Chouvet G, Gervasoni D. Unsupervised online classifier in sleep scoring for sleep deprivation studies. SLEEP 2015;38(5):815–828. PMID:25325478
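    As a generic stand-in for the unsupervised state assignment above (not the paper's adaptive classifier), the sketch below clusters 5-s epochs described by two features into three states with a Gaussian mixture model; the feature values are synthetic placeholders.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(9)
        # Two toy features per epoch: EEG theta/delta ratio and EMG power.
        wake = rng.normal([0.8, 1.0], 0.15, size=(300, 2))   # high EMG
        nrem = rng.normal([0.3, 0.2], 0.10, size=(300, 2))   # low theta/delta, low EMG
        rem = rng.normal([0.9, 0.1], 0.10, size=(300, 2))    # high theta/delta, muscle atonia
        epochs = np.vstack([wake, nrem, rem])

        gmm = GaussianMixture(n_components=3, random_state=0).fit(epochs)
        states = gmm.predict(epochs)                          # unsupervised 0/1/2 state labels
        print(np.bincount(states))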

  8. Walking Objectively Measured: Classifying Accelerometer Data with GPS and Travel Diaries

    PubMed Central

    Kang, Bumjoon; Moudon, Anne V.; Hurvitz, Philip M.; Reichley, Lucas; Saelens, Brian E.

    2013-01-01

    Purpose This study developed and tested an algorithm to classify accelerometer data as walking or non-walking using either GPS or travel diary data within a large sample of adults under free-living conditions. Methods Participants wore an accelerometer and a GPS unit, and concurrently completed a travel diary for 7 consecutive days. Physical activity (PA) bouts were identified using accelerometry count sequences. PA bouts were then classified as walking or non-walking based on a decision-tree algorithm consisting of 7 classification scenarios. Algorithm reliability was examined relative to two independent analysts' classification of a 100-bout verification sample. The algorithm was then applied to the entire set of PA bouts. Results The 706 participants (mean age 51 years, 62% female, 80% non-Hispanic white, 70% college graduate or higher) provided 4,702 person-days of data with a total of 13,971 PA bouts. The algorithm showed a mean agreement of 95% with the independent analysts. It classified physical activity into 8,170 (58.5%) walking bouts and 5,337 (38.2%) non-walking bouts; 464 (3.3%) bouts were not classified for lack of GPS and diary data. Nearly 70% of the walking bouts and 68% of the non-walking bouts were classified using only the objective accelerometer and GPS data. Travel diary data helped classify 30% of all bouts with no GPS data. The mean duration of PA bouts classified as walking was 15.2 min (SD=12.9). On average, participants had 1.7 walking bouts and 25.4 total walking minutes per day. Conclusions GPS and travel diary information can be helpful in classifying most accelerometer-derived PA bouts into walking or non-walking behavior. PMID:23439414

  9. Optimally splitting cases for training and testing high dimensional classifiers

    PubMed Central

    2011-01-01

    Background We consider the problem of designing a study to develop a predictive classifier from high dimensional data. A common study design is to split the sample into a training set and an independent test set, where the former is used to develop the classifier and the latter to evaluate its performance. In this paper we address the question of what proportion of the samples should be devoted to the training set. How does this proportion impact the mean squared error (MSE) of the prediction accuracy estimate? Results We develop a non-parametric algorithm for determining an optimal splitting proportion that can be applied with a specific dataset and classifier algorithm. We also perform a broad simulation study for the purpose of better understanding the factors that determine the best split proportions and to evaluate commonly used splitting strategies (1/2 training or 2/3 training) under a wide variety of conditions. These methods are based on a decomposition of the MSE into three intuitive component parts. Conclusions By applying these approaches to a number of synthetic and real microarray datasets we show that for linear classifiers the optimal proportion depends on the overall number of samples available and the degree of differential expression between the classes. The optimal proportion was found to depend on the full dataset size (n) and classification accuracy, with higher accuracy and smaller n resulting in a larger proportion assigned to the training set. The commonly used strategy of allocating 2/3 of cases for training was close to optimal for reasonably sized datasets (n ≥ 100) with strong signals (i.e. 85% or greater full dataset accuracy). In general, we recommend use of our nonparametric resampling approach for determining the optimal split. This approach can be applied to any dataset, using any predictor development method, to determine the best split. PMID:21477282
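    A simplified, resampling-style sketch of the question above: sweep the training proportion and observe the mean and variance of the accuracy estimate across repeated splits. This is not the paper's nonparametric MSE-decomposition algorithm; the dataset, classifier, and fractions tried are arbitrary examples.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(10)
        X = rng.normal(size=(100, 20))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=100) > 0).astype(int)

        for train_frac in (0.5, 0.67, 0.8):
            accs = []
            for seed in range(50):
                X_tr, X_te, y_tr, y_te = train_test_split(
                    X, y, train_size=train_frac, random_state=seed, stratify=y)
                accs.append(LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te))
            print(f"train fraction {train_frac}: mean accuracy {np.mean(accs):.2f}, "
                  f"variance {np.var(accs):.4f}")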

  10. Automatic class labeling of classified imagery using a hyperspectral library

    NASA Astrophysics Data System (ADS)

    Parshakov, Ilia

    Image classification is a fundamental information extraction procedure in remote sensing that is used in land-cover and land-use mapping. Despite being considered as a replacement for manual mapping, it still requires some degree of analyst intervention. This makes the process of image classification time consuming, subjective, and error prone. For example, in unsupervised classification, pixels are automatically grouped into classes, but the user has to manually label the classes as one land-cover type or another. As a general rule, the larger the number of classes, the more difficult it is to assign meaningful class labels. A fully automated post-classification procedure for class labeling was developed in an attempt to alleviate this problem. It labels spectral classes by matching their spectral characteristics with reference spectra. A Landsat TM image of an agricultural area was used for performance assessment. The algorithm was used to label a 20- and 100-class image generated by the ISODATA classifier. The 20-class image was used to compare the technique with the traditional manual labeling of classes, and the 100-class image was used to compare it with the Spectral Angle Mapper and Maximum Likelihood classifiers. The proposed technique produced a map that had an overall accuracy of 51%, outperforming the manual labeling (40% to 45% accuracy, depending on the analyst performing the labeling) and the Spectral Angle Mapper classifier (39%), but underperformed compared to the Maximum Likelihood technique (53% to 63%). The newly developed class-labeling algorithm provided better results for alfalfa, beans, corn, grass and sugar beet, whereas canola, corn, fallow, flax, potato, and wheat were identified with similar or lower accuracy, depending on the classifier it was compared with.
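    A small sketch of the matching step described above: label an unsupervised spectral class by comparing its mean spectrum against a reference library using the spectral angle (smaller angle = better match). The library spectra and class mean are hypothetical six-band reflectance values, not the study's data.

        import numpy as np

        def spectral_angle(a, b):
            cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            return np.arccos(np.clip(cos_theta, -1.0, 1.0))

        library = {                                   # hypothetical reference spectra
            "alfalfa": np.array([0.05, 0.09, 0.07, 0.45, 0.30, 0.18]),
            "fallow": np.array([0.12, 0.15, 0.18, 0.22, 0.28, 0.30]),
        }
        class_mean = np.array([0.06, 0.10, 0.08, 0.42, 0.29, 0.19])   # mean of an ISODATA class

        label = min(library, key=lambda name: spectral_angle(class_mean, library[name]))
        print(label)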

  11. 48 CFR 927.207 - Classified contracts.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 48 Federal Acquisition Regulations System 5 2012-10-01 2012-10-01 false Classified contracts. 927.207 Section 927.207 Federal Acquisition Regulations System DEPARTMENT OF ENERGY GENERAL CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Patents 927.207 Classified contracts....

  12. 48 CFR 927.207 - Classified contracts.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Classified contracts. 927.207 Section 927.207 Federal Acquisition Regulations System DEPARTMENT OF ENERGY GENERAL CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Patents 927.207 Classified contracts....

  13. 48 CFR 927.207 - Classified contracts.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Classified contracts. 927.207 Section 927.207 Federal Acquisition Regulations System DEPARTMENT OF ENERGY GENERAL CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Patents 927.207 Classified contracts....

  14. 48 CFR 927.207 - Classified contracts.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 48 Federal Acquisition Regulations System 5 2013-10-01 2013-10-01 false Classified contracts. 927.207 Section 927.207 Federal Acquisition Regulations System DEPARTMENT OF ENERGY GENERAL CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Patents 927.207 Classified contracts....

  15. 48 CFR 927.207 - Classified contracts.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false Classified contracts. 927.207 Section 927.207 Federal Acquisition Regulations System DEPARTMENT OF ENERGY GENERAL CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Patents 927.207 Classified contracts....

  16. 32 CFR 775.5 - Classified actions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 5 2010-07-01 2010-07-01 false Classified actions. 775.5 Section 775.5 National Defense Department of Defense (Continued) DEPARTMENT OF THE NAVY MISCELLANEOUS RULES PROCEDURES FOR IMPLEMENTING THE NATIONAL ENVIRONMENTAL POLICY ACT § 775.5 Classified actions. (a) The fact that a...

  17. A modified decision tree algorithm based on genetic algorithm for mobile user classification problem.

    PubMed

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, the mobile user must first be classified. Addressing the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as classification attributes for the mobile user and classify the context into public and private context classes. We then analyze the processes and operators of the algorithm. Finally, we run an experiment on mobile user data with the algorithm: the mobile users can be classified into Basic service, E-service, Plus service, and Total service user classes, and some rules about the mobile user can also be derived. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and greater simplicity. PMID:24688389

  18. A Modified Decision Tree Algorithm Based on Genetic Algorithm for Mobile User Classification Problem

    PubMed Central

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, the mobile user must first be classified. Addressing the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as classification attributes for the mobile user and classify the context into public and private context classes. We then analyze the processes and operators of the algorithm. Finally, we run an experiment on mobile user data with the algorithm: the mobile users can be classified into Basic service, E-service, Plus service, and Total service user classes, and some rules about the mobile user can also be derived. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper has higher accuracy and greater simplicity. PMID:24688389

  19. An improved algorithm for the automatic detection and characterization of slow eye movements.

    PubMed

    Cona, Filippo; Pizza, Fabio; Provini, Federica; Magosso, Elisa

    2014-07-01

    Slow eye movements (SEMs) are typical of drowsy wakefulness and light sleep. SEMs still lack a systematic physical characterization. We present a new algorithm, which substantially improves our previous one, for the automatic detection of SEMs from the electro-oculogram (EOG) and extraction of SEM physical parameters. The algorithm utilizes discrete wavelet decomposition of the EOG to implement a Bayes classifier that identifies intervals of slow ocular activity; each slow activity interval is segmented into single SEMs via a template matching method. Parameters of amplitude, duration, and velocity are automatically extracted from each detected SEM. The algorithm was trained and validated on sleep onsets and offsets of 20 EOG recordings visually inspected by an expert. Performance was assessed in terms of correctly identified slow activity epochs (sensitivity: 85.12%; specificity: 82.81%), correctly segmented single SEMs (89.08%), and time misalignment (0.49 s) between the automatically and visually identified SEMs. The algorithm proved reliable even in whole sleep (sensitivity: 83.40%; specificity: 72.08% in identifying slow activity epochs; correctly segmented SEMs: 93.24%; time misalignment: 0.49 s). The algorithm, being able to objectively characterize single SEMs, may be a valuable tool to improve knowledge of normal and pathological sleep. PMID:24768562
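
    A highly simplified Python sketch of the wavelet-plus-Bayes idea (not the published algorithm or its parameters) is shown below; the simulated EOG epochs, the wavelet choice, and the use of a Gaussian naive Bayes classifier are assumptions made only for illustration.

        # Sketch: discrete wavelet energies of EOG epochs fed to a Bayes classifier
        # to flag "slow activity" epochs; all data below are simulated placeholders.
        import numpy as np
        import pywt
        from sklearn.naive_bayes import GaussianNB

        def wavelet_energies(epoch, wavelet="db4", level=5):
            coeffs = pywt.wavedec(epoch, wavelet, level=level)
            return np.array([np.sum(c ** 2) for c in coeffs])

        rng = np.random.default_rng(0)
        fs, n = 100, 1000                       # 10-second epochs sampled at 100 Hz
        slow = [np.sin(2 * np.pi * 0.3 * np.arange(n) / fs)
                + 0.2 * rng.standard_normal(n) for _ in range(50)]   # slow drift + noise
        fast = [0.5 * rng.standard_normal(n) for _ in range(50)]     # noise only

        X = np.array([wavelet_energies(e) for e in slow + fast])
        y = np.array([1] * 50 + [0] * 50)
        clf = GaussianNB().fit(X, y)
        print("training accuracy on the toy data:", clf.score(X, y))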

  20. Enhancing atlas based segmentation with multiclass linear classifiers

    SciTech Connect

    Sdika, Michaël

    2015-12-15

    Purpose: To present a method to enrich atlases for atlas based segmentation. Such enriched atlases can then be used as a single atlas or within a multiatlas framework. Methods: In this paper, machine learning techniques have been used to enhance the atlas based segmentation approach. The enhanced atlas defined in this work is a pair composed of a gray level image alongside an image of multiclass classifiers with one classifier per voxel. Each classifier embeds local information from the whole training dataset that allows for the correction of some systematic errors in the segmentation and accounts for the possible local registration errors. The authors also propose to use these images of classifiers within a multiatlas framework: results produced by a set of such local classifier atlases can be combined using a label fusion method. Results: Experiments have been made on the in vivo images of the IBSR dataset and a comparison has been made with several state-of-the-art methods such as FreeSurfer and the multiatlas nonlocal patch based method of Coupé or Rousseau. These experiments show that their method is competitive with state-of-the-art methods while having a low computational cost. Further enhancement has also been obtained with a multiatlas version of their method. It is also shown that, in this case, nonlocal fusion is unnecessary. The multiatlas fusion can therefore be done efficiently. Conclusions: The single atlas version has similar quality as state-of-the-art multiatlas methods but with the computational cost of a naive single atlas segmentation. The multiatlas version offers an improvement in quality and can be done efficiently without a nonlocal strategy.

  1. Effects of cultural characteristics on building an emotion classifier through facial expression analysis

    NASA Astrophysics Data System (ADS)

    da Silva, Flávio Altinier Maximiano; Pedrini, Helio

    2015-03-01

    Facial expressions are an important demonstration of humanity's humors and emotions. Algorithms capable of recognizing facial expressions and associating them with emotions were developed and employed to compare the expressions that different cultural groups use to show their emotions. Static pictures of predominantly occidental and oriental subjects from public datasets were used to train machine learning algorithms, whereas local binary patterns, histogram of oriented gradients (HOGs), and Gabor filters were employed to describe the facial expressions for six different basic emotions. The most consistent combination, formed by the association of HOG filter and support vector machines, was then used to classify the other cultural group: there was a strong drop in accuracy, meaning that the subtle differences of facial expressions of each culture affected the classifier performance. Finally, a classifier was trained with images from both occidental and oriental subjects and its accuracy was higher on multicultural data, evidencing the need of a multicultural training set to build an efficient classifier.
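
    A compact Python sketch of the best-performing combination named in the record (HOG descriptors with a support vector machine) is given below; the random "face" arrays and six-way labels are placeholders standing in for real expression datasets.

        # Sketch: HOG features + a linear SVM for expression classification.
        import numpy as np
        from skimage.feature import hog
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        faces = rng.random((60, 64, 64))             # placeholder 64x64 grayscale crops
        labels = np.repeat(np.arange(6), 10)         # placeholder: six basic emotions

        X = np.array([hog(img, orientations=9, pixels_per_cell=(8, 8),
                          cells_per_block=(2, 2)) for img in faces])
        clf = SVC(kernel="linear", C=1.0)
        print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=3).mean())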

  2. Block-classified motion compensation scheme for digital video

    SciTech Connect

    Zafar, S.; Zhang, Ya-Qin; Jabbari, B.

    1996-03-01

    A novel scheme for block-based motion compensation is introduced in which a block is classified according to the energy that is directly related to the motion activity it represents. This classification allows more flexibility in controlling the bit rate and the signal-to-noise ratio and results in a reduction in motion search complexity. The method introduced is not dependent on the particular type of motion search algorithm implemented and can thus be used with any method, assuming that the underlying matching criterion used is the minimum absolute difference. It has been shown that the method is superior to a simple motion compensation algorithm where all blocks are motion compensated regardless of the energy resulting after the displaced difference.
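
    The block-energy idea can be pictured with the short Python sketch below: blocks whose frame-difference energy falls below a threshold are skipped, and only "active" blocks would be passed to whatever motion-search routine is in use. The block size and threshold are arbitrary placeholders, not values from the paper.

        # Sketch: classify blocks by frame-difference energy; only high-energy
        # blocks are flagged for the (separate) motion search.
        import numpy as np

        def classify_blocks(prev_frame, cur_frame, block=16, threshold=500.0):
            h, w = cur_frame.shape
            active = np.zeros((h // block, w // block), dtype=bool)
            for by in range(h // block):
                for bx in range(w // block):
                    ys, xs = by * block, bx * block
                    diff = cur_frame[ys:ys + block, xs:xs + block].astype(float) \
                         - prev_frame[ys:ys + block, xs:xs + block].astype(float)
                    active[by, bx] = np.sum(diff ** 2) > threshold   # block energy test
            return active

        rng = np.random.default_rng(0)
        prev = rng.integers(0, 256, size=(64, 64))
        cur = prev.copy()
        cur[16:32, 16:32] = rng.integers(0, 256, size=(16, 16))      # simulated motion
        print(classify_blocks(prev, cur).astype(int))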

  3. Organizational coevolutionary classifiers with fuzzy logic used in intrusion detection

    NASA Astrophysics Data System (ADS)

    Chen, Zhenguo

    2009-07-01

    Intrusion detection is an important technique in the defense-in-depth network security framework and has been a hot topic in computer security in recent years. To address the intrusion detection problem, we introduce fuzzy logic into the Organizational CoEvolutionary algorithm [1] and present an algorithm for Organizational CoEvolutionary Classification with Fuzzy Logic. In this paper, we give an intrusion detection model based on this algorithm, illustrate the model with a representative dataset, and apply it to the real-world KDD Cup 1999 network dataset. The experimental results show that intrusion detection based on Organizational Coevolutionary Classifiers with Fuzzy Logic gives higher recognition accuracy than the general method.

  4. Localization Algorithms of Underwater Wireless Sensor Networks: A Survey

    PubMed Central

    Han, Guangjie; Jiang, Jinfang; Shu, Lei; Xu, Yongjun; Wang, Feng

    2012-01-01

    In Underwater Wireless Sensor Networks (UWSNs), localization is one of the most important technologies since it plays a critical role in many applications. Motivated by the widespread adoption of localization, in this paper, we present a comprehensive survey of localization algorithms. First, we classify localization algorithms into three categories based on sensor nodes’ mobility: stationary localization algorithms, mobile localization algorithms and hybrid localization algorithms. Moreover, we compare the localization algorithms in detail and analyze future research directions of localization algorithms in UWSNs. PMID:22438752

  5. Zigzag configurations and air classifier performance

    SciTech Connect

    Peirce, J.J.; Wittenberg, N.

    1984-03-01

    The fundamental aspects of zigzag air classifier configurations are studied in terms of the design and operation of a waste-to-energy production facility. The development of a method of performance evaluation defined by operating range is examined. Historically, air classification has been used in industry and agriculture in mineral extraction, limestone sizing, and seed and grain cleaning. However, the adaptation of air classifiers to resource recovery and waste-to-energy production facilities presents new problems due to the complex and variable nature of the wastes. A series of configurations providing a continuous range of zigzag classifier shape components are tested. Each configuration is evaluated to determine its efficiency of separation and its sensitivity to operating air speeds. Results indicate that the configuration of a zigzag classifier does not influence its peak efficiency of separation. However, findings point to distinct limits on operating parameters which lead to peak efficiencies for the different configurations. These operating range values represent the sensitivity of the air classifier to changes in the air flow. A major finding concerns the effect of configuration on the particle size distribution observed in the material exiting the classifier: smaller particles appear to be influenced by configuration changes and larger particles do not. A new method for classifier performance evaluation is developed and applied.

  6. Correction of Facial Deformity in Sturge–Weber Syndrome

    PubMed Central

    Yamaguchi, Kazuaki; Lonic, Daniel; Chen, Chit

    2016-01-01

    Background: Although previous studies have reported soft-tissue management in surgical treatment of Sturge–Weber syndrome (SWS), there are few reports describing facial bone surgery in this patient group. The purpose of this study is to examine the validity of our multidisciplinary algorithm for correcting facial deformities associated with SWS. To the best of our knowledge, this is the first study on orthognathic surgery for SWS patients. Methods: A retrospective chart review included 2 SWS patients who completed the surgical treatment algorithm. Radiographic and clinical data were recorded, and a treatment algorithm was derived. Results: According to the Roach classification, the first patient was classified as type I presenting with both facial and leptomeningeal vascular anomalies without glaucoma and the second patient as type II presenting only with a hemifacial capillary malformation. Considering positive findings in seizure history and intracranial vascular anomalies in the first case, the anesthetic management was modified to omit hypotensive anesthesia because of the potential risk of intracranial pressure elevation. Primarily, both patients underwent 2-jaw orthognathic surgery and facial bone contouring including genioplasty, zygomatic reduction, buccal fat pad removal, and masseter reduction without major complications. In the second step, the volume and distribution of facial soft tissues were altered by surgical resection and reposition. Both patients were satisfied with the surgical result. Conclusions: Our multidisciplinary algorithm can systematically detect potential risk factors. Correction of the asymmetric face by successive bone and soft-tissue surgery enables the patients to reduce their psychosocial burden and increase their quality of life. PMID:27622111

  7. Receiver operating characteristic for a spectrogram correlator-based humpback whale detector-classifier.

    PubMed

    Abbot, Ted A; Premus, Vincent E; Abbot, Philip A; Mayer, Owen A

    2012-09-01

    This paper presents recent experimental results and a discussion of system enhancements made to the real-time autonomous humpback whale detector-classifier algorithm first presented by Abbot et al. [J. Acoust. Soc. Am. 127, 2894-2903 (2010)]. In February 2010, a second-generation system was deployed in an experiment conducted off of leeward Kauai during which 26 h of humpback vocalizations were recorded via sonobuoy and processed in real time. These data have been analyzed along with 40 h of humpbacks-absent data collected from the same location during July-August 2009. The extensive whales-absent data set in particular has enabled the quantification of system false alarm rates and the measurement of receiver operating characteristic curves. The performance impact of three enhancements incorporated into the second-generation system are discussed, including (1) a method to eliminate redundancy in the kernel library, (2) increased use of contextual analysis, and (3) the augmentation of the training data with more recent humpback vocalizations. It will be shown that the performance of the real-time system was improved to yield a probability of correct classification of 0.93 and a probability of false alarm of 0.004 over the 66 h of independent test data. PMID:22978879
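
    The generic spectrogram-correlator idea (scoring incoming audio against a reference call "kernel"), though not the authors' specific detector-classifier, can be sketched in Python as below; the sampling rate, FFT parameters, and the noise-only signals are placeholders.

        # Sketch: slide a reference kernel spectrogram along the spectrogram of an
        # audio stream and report a correlation score per offset.
        import numpy as np
        from scipy.signal import spectrogram

        def correlate_kernel(audio, kernel_spec, fs=4000, nperseg=256):
            f, t, S = spectrogram(audio, fs=fs, nperseg=nperseg)
            S = (S - S.mean()) / (S.std() + 1e-12)                   # normalize
            k = (kernel_spec - kernel_spec.mean()) / (kernel_spec.std() + 1e-12)
            width = k.shape[1]
            scores = [np.sum(S[:, i:i + width] * k) / k.size
                      for i in range(S.shape[1] - width + 1)]
            return np.array(scores)          # peaks suggest kernel-like vocalizations

        rng = np.random.default_rng(0)
        fs = 4000
        audio = rng.standard_normal(fs * 10)                         # 10 s placeholder
        _, _, kernel = spectrogram(rng.standard_normal(fs), fs=fs, nperseg=256)
        print("maximum correlation score:", correlate_kernel(audio, kernel, fs).max())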

  8. Less-Complex Method of Classifying MPSK

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2006-01-01

    An alternative to an optimal method of automated classification of signals modulated with M-ary phase-shift-keying (M-ary PSK or MPSK) has been derived. The alternative method is approximate, but it offers nearly optimal performance and entails much less complexity, which translates to much less computation time. Modulation classification is becoming increasingly important in radio-communication systems that utilize multiple data modulation schemes and include software-defined or software-controlled receivers. Such a receiver may "know" little a priori about an incoming signal but may be required to correctly classify its data rate, modulation type, and forward error-correction code before properly configuring itself to acquire and track the symbol timing, carrier frequency, and phase, and ultimately produce decoded bits. Modulation classification has long been an important component of military interception of initially unknown radio signals transmitted by adversaries. Modulation classification may also be useful for enabling cellular telephones to automatically recognize different signal types and configure themselves accordingly. The concept of modulation classification as outlined in the preceding paragraph is quite general. However, at the present early stage of development, and for the purpose of describing the present alternative method, the term "modulation classification" or simply "classification" signifies, more specifically, a distinction between M-ary and M'-ary PSK, where M and M' represent two different integer multiples of 2. Both the prior optimal method and the present alternative method require the acquisition of magnitude and phase values of a number (N) of consecutive baseband samples of the incoming signal + noise. The prior optimal method is based on a maximum- likelihood (ML) classification rule that requires a calculation of likelihood functions for the M and M' hypotheses: Each likelihood function is an integral, over a full cycle of
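
    As a much cruder illustration of distinguishing PSK orders from phase samples (this is a simple phase-moment statistic, not the near-optimal approximation the record describes), consider the Python sketch below; the SNR, sample count, and decision threshold are arbitrary assumptions.

        # Sketch: raising received phases to the power M collapses genuine M-PSK
        # symbols onto one phase, so the statistic is large only when the true
        # order divides M.
        import numpy as np

        def phase_moment(samples, order):
            phases = np.angle(samples)
            return abs(np.mean(np.exp(1j * order * phases)))

        def classify_psk(samples, m, m_prime, threshold=0.5):
            """Decide between M-PSK and M'-PSK (m < m_prime)."""
            return m if phase_moment(samples, m) > threshold else m_prime

        rng = np.random.default_rng(0)
        n, amp = 512, 4.0
        qpsk = amp * np.exp(1j * (np.pi / 2) * rng.integers(0, 4, n))      # true QPSK
        noisy = qpsk + (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
        print("decision:", classify_psk(noisy, m=4, m_prime=8), "-PSK")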

  9. Shell corrections in stopping powers

    NASA Astrophysics Data System (ADS)

    Bichsel, H.

    2002-05-01

    One of the theories of the electronic stopping power S for fast light ions was derived by Bethe. The algorithm currently used for the calculation of S includes terms known as the mean excitation energy I, the shell correction, the Barkas correction, and the Bloch correction. These terms are described here. For the calculation of the shell corrections an atomic model is used, which is more realistic than the hydrogenic approximation used so far. A comparison is made with similar calculations in which the local plasma approximation is utilized. Close agreement with the experimental data for protons with energies from 0.3 to 10 MeV traversing Al and Si is found without the need for adjustable parameters for the shell corrections.
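
    For orientation, the Bethe stopping power with the corrections named above is commonly written (nonrelativistically, with notation varying between authors) as

        S \;=\; -\frac{dE}{dx} \;=\; \frac{4\pi e^{4} z^{2} N Z}{m_{e} v^{2}}
            \left[\, \ln\frac{2 m_{e} v^{2}}{I} \;-\; \frac{C}{Z} \;+\; z\,L_{1} \;+\; z^{2} L_{2} \,\right],

    where I is the mean excitation energy, C/Z the shell correction, z L_1 the Barkas correction, and z^2 L_2 the Bloch correction; this generic form is given only as context for the terms listed in the record.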

  10. Construction of Pancreatic Cancer Classifier Based on SVM Optimized by Improved FOA

    PubMed Central

    Jiang, Huiyan; Zhao, Di; Zheng, Ruiping; Ma, Xiaoqi

    2015-01-01

    A novel method is proposed to establish a pancreatic cancer classifier. Firstly, the concepts of quantum coding and the fruit fly optimization algorithm (FOA) are introduced. Then the FOA is improved by quantum coding and quantum operations, and a new smell concentration determination function is defined. Finally, the improved FOA is used to optimize the parameters of a support vector machine (SVM), and the classifier is established with the optimized SVM. In order to verify the effectiveness of the proposed method, SVM and other classification methods were chosen as comparison methods. The experimental results show that the proposed method can improve classifier performance and costs less time. PMID:26543867
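
    As a generic illustration only (a plain fruit-fly-style random neighborhood search, not the quantum-improved FOA of the record), the Python sketch below tunes the SVM parameters C and gamma by cross-validated accuracy; the breast-cancer dataset, swarm size, and step scale are placeholder assumptions.

        # Sketch: FOA-style search over log10(C) and log10(gamma) for an SVM.
        import numpy as np
        from sklearn.datasets import load_breast_cancer
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        X, y = load_breast_cancer(return_X_y=True)              # placeholder dataset

        def fitness(log_c, log_gamma):
            clf = make_pipeline(StandardScaler(),
                                SVC(C=10.0 ** log_c, gamma=10.0 ** log_gamma))
            return cross_val_score(clf, X, y, cv=3).mean()      # "smell concentration"

        rng = np.random.default_rng(0)
        center = np.array([0.0, -3.0])                           # start: C=1, gamma=1e-3
        best = fitness(*center)
        for _ in range(10):                                      # iterations
            swarm = center + rng.normal(scale=0.5, size=(10, 2)) # 10 flies near center
            scores = [fitness(c, g) for c, g in swarm]
            i = int(np.argmax(scores))
            if scores[i] > best:                                 # move toward best smell
                best, center = scores[i], swarm[i]
        print("best (C, gamma):", 10.0 ** center, " CV accuracy:", round(best, 3))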

  11. Design of partially supervised classifiers for multispectral image data

    NASA Technical Reports Server (NTRS)

    Jeon, Byeungwoo; Landgrebe, David

    1993-01-01

    A partially supervised classification problem is addressed, especially when the class definition and corresponding training samples are provided a priori only for just one particular class. In practical applications of pattern classification techniques, a frequently observed characteristic is the heavy, often nearly impossible requirements on representative prior statistical class characteristics of all classes in a given data set. Considering the effort in both time and man-power required to have a well-defined, exhaustive list of classes with a corresponding representative set of training samples, this 'partially' supervised capability would be very desirable, assuming adequate classifier performance can be obtained. Two different classification algorithms are developed to achieve simplicity in classifier design by reducing the requirement of prior statistical information without sacrificing significant classifying capability. The first one is based on optimal significance testing, where the optimal acceptance probability is estimated directly from the data set. In the second approach, the partially supervised classification is considered as a problem of unsupervised clustering with initially one known cluster or class. A weighted unsupervised clustering procedure is developed to automatically define other classes and estimate their class statistics. The operational simplicity thus realized should make these partially supervised classification schemes very viable tools in pattern classification.

  12. How Is Acute Lymphocytic Leukemia Classified?

    MedlinePlus

    ... How is acute lymphocytic leukemia treated? How is acute lymphocytic leukemia classified? Most types of cancers are assigned numbered ... ALL are now named as follows: B-cell ALL Early pre-B ALL (also called pro-B ...

  13. 5 CFR 1312.4 - Classified designations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., (50 U.S.C. 401) Executive Order 12958 provides the only basis for classifying information. Information...) Top Secret. This classification shall be applied only to information the unauthorized disclosure...

  14. 5 CFR 1312.4 - Classified designations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., (50 U.S.C. 401) Executive Order 12958 provides the only basis for classifying information. Information...) Top Secret. This classification shall be applied only to information the unauthorized disclosure...

  15. 5 CFR 1312.4 - Classified designations.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., (50 U.S.C. 401) Executive Order 12958 provides the only basis for classifying information. Information...) Top Secret. This classification shall be applied only to information the unauthorized disclosure...

  16. 5 CFR 1312.4 - Classified designations.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ..., (50 U.S.C. 401) Executive Order 12958 provides the only basis for classifying information. Information...) Top Secret. This classification shall be applied only to information the unauthorized disclosure...

  17. 5 CFR 1312.4 - Classified designations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., (50 U.S.C. 401) Executive Order 12958 provides the only basis for classifying information. Information...) Top Secret. This classification shall be applied only to information the unauthorized disclosure...

  18. PPCM: Combing Multiple Classifiers to Improve Protein-Protein Interaction Prediction

    DOE PAGES Beta

    Yao, Jianzhuang; Guo, Hong; Yang, Xiaohan

    2015-01-01

    Determining protein-protein interaction (PPI) in biological systems is of considerable importance, and prediction of PPI has become a popular research area. Although different classifiers have been developed for PPI prediction, no single classifier seems to be able to predict PPI with high confidence. We postulated that by combining individual classifiers the accuracy of PPI prediction could be improved. We developed a method called protein-protein interaction prediction classifiers merger (PPCM), and this method combines output from two PPI prediction tools, GO2PPI and Phyloprof, using Random Forests algorithm. The performance of PPCM was tested by area under the curve (AUC) using an assembled Gold Standard database that contains both positive and negative PPI pairs. Our AUC test showed that PPCM significantly improved the PPI prediction accuracy over the corresponding individual classifiers. We found that additional classifiers incorporated into PPCM could lead to further improvement in the PPI prediction accuracy. Furthermore, cross species PPCM could achieve competitive and even better prediction accuracy compared to the single species PPCM. This study established a robust pipeline for PPI prediction by integrating multiple classifiers using Random Forests algorithm. This pipeline will be useful for predicting PPI in nonmodel species.
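
    The combining step can be pictured with the Python sketch below, in which a random forest is trained on the outputs of two base predictors; the two noisy "tool scores" are synthetic placeholders standing in for the GO2PPI and Phyloprof outputs, and the AUC reported is for the toy data only.

        # Sketch: merge two base-classifier scores with a random forest (stacking-style).
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n = 500
        labels = rng.integers(0, 2, size=n)                    # gold-standard PPI labels
        tool_a = labels + rng.normal(scale=0.8, size=n)        # noisy score from tool A
        tool_b = labels + rng.normal(scale=1.0, size=n)        # noisy score from tool B

        X = np.column_stack([tool_a, tool_b])
        combiner = RandomForestClassifier(n_estimators=200, random_state=0)
        auc = cross_val_score(combiner, X, labels, cv=5, scoring="roc_auc").mean()
        print("cross-validated AUC of the combined classifier:", round(auc, 3))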

  19. Improving Performance of Computer-Aided Detection Scheme by Combining Results from Two Machine Learning Classifiers

    PubMed Central

    Zheng, Bin

    2009-01-01

    Rationale and Objectives Global data and local instance based machine learning methods and classifiers have been widely used to optimize computer-aided detection (CAD) schemes to classify between true-positive and false-positive detections. In this study the authors investigated the correlation between these two types of classifiers using a new independent testing dataset and assessed the potential improvement of a CAD scheme performance by combining the results of the two classifiers in detecting breast masses. Materials and Methods The CAD scheme first used image filtering and a multi-layer topographic region growth algorithm to detect and segment suspicious mass regions. The scheme then used an image feature based classifier to classify these regions into true-positive and false-positive regions. Two classifiers were used in this study. One was a global data based machine learning classifier, an artificial neural network (ANN), and the other one was a local instance based machine learning classifier, a k-nearest neighbor (KNN) algorithm. An independent image database involving 400 mammography examinations was used in this study. Among them, 200 were cancer cases and 200 were negative cases. The pre-optimized CAD scheme was applied twice to the database using the two different classifiers. The correlation between the two sets of classification results was analyzed. Three sets of CAD performances using the ANN, KNN, and average detection scores from both classifiers were assessed and compared using the free-response receiver operating characteristics (FROC) method. Results The results showed that the ANN achieved higher performance than the KNN with a normalized area under the performance curve (AUC) of 0.891 versus 0.845. The correlation coefficients between the detection scores generated by the two classifiers were 0.436 and 0.161 for the true-positive and false-positive detections, respectively. The average detection scores of the two classifiers improved CAD

  20. Zigzag configurations and air classifier performance

    SciTech Connect

    Peirce, J.; Wittenberg, N.

    1984-03-01

    The fundamental aspects of zigzag air classifier configurations are studied in terms of the design and operation of a waste-to-energy production facility. The development of a method of performance evaluation defined by operating range is examined. Historically, air classification has been used in industry and agriculture in mineral extraction, limestone sizing, and seed and grain cleaning. However, the adaptation of air classifiers to resource recovery and waste-to-energy production facilities presents new problems due to the complex and variable nature of the wastes. A series of configurations providing a continuous range of zigzag classifier shape components are tested. Each configuration is evaluated to determine its efficiency of separation and its sensitivity to operating air speeds. Results indicate that the configuration of a zigzag classifier does not influence its peak efficiency of separation. However, findings point to distinct limits on operating parameters which lead to peak efficiencies for the different configurations. These operating range values represent the sensitivity of the air classifier to changes in the air flow. A major finding concerns the effect of configuration on the particle size distribution observed in the material exiting the classifier: smaller particles appear to be influenced by configuration changes and larger particles do not. A new method for classifier performance evaluation is developed and applied.

  1. Influence of atmospheric correction on image classification for irrigated agriculture in the Lower Colorado River Basin

    NASA Astrophysics Data System (ADS)

    Wei, X.

    2012-12-01

    Atmospheric correction is essential for accurate quantitative information retrieval from satellite imagery. In this paper, we applied the atmospheric correction algorithm, the Second Simulation of a Satellite Signal in the Solar Spectrum (6S) radiative transfer code, to retrieve surface reflectance from Landsat 5 Thematic Mapper (TM) imagery for the Palo Verde Irrigation District (PVID) within the lower Colorado River basin. The 6S code was implemented with input data of visibility, aerosol optical depth, pressure, temperature, water vapour, and ozone from local measurements. The 6S-corrected image of PVID was classified into the irrigated agriculture classes of alfalfa, cotton, melons, corn, grass, and vegetables. We applied multiple classification methods: maximum likelihood, fuzzy means, and object-oriented classification. Using field crop type data, we conducted an accuracy assessment of the results from the 6S-corrected image and the uncorrected image and found a consistent improvement in classification accuracy for the 6S-corrected image. The study demonstrates that the 6S code is a robust atmospheric correction method, providing a better simulation of surface reflectance and improving image classification accuracy.

  2. Topological Correction of Multicomponent Systems Polyhedration

    NASA Astrophysics Data System (ADS)

    Lutsyk, V. I.; Vorob'eva, V. P.

    2016-04-01

    An algorithm (Topological Correction of Lists of Simplexes of Different Dimensions) for polyhedration of quaternary reciprocal systems is presented. It can control all polyhedration stages, accelerates the search of internal diagonals and takes into account their possible competition.

  3. Application of Bayesian classifier for the diagnosis of dental pain.

    PubMed

    Chattopadhyay, Subhagata; Davis, Rima M; Menezes, Daphne D; Singh, Gautam; Acharya, Rajendra U; Tamura, Toshio

    2012-06-01

    Toothache is the most common symptom encountered in dental practice. It is subjective, and hence there is a possibility of under- or over-diagnosis of oral pathologies where patients present with only toothache. Addressing this issue, the paper proposes a methodology to develop a Bayesian classifier for diagnosing some common dental diseases (D = 10) using a set of 14 pain parameters (P = 14). A questionnaire is developed using these variables and filled in by ten dentists (n = 10) with various levels of expertise. Each questionnaire consists of 40 real-world cases; a total of 14*10*10 data combinations are hence collected. The reliability of the data (P and D sets) has been tested by measuring Cronbach's alpha. One-way ANOVA has been used to note the intra- and intergroup mean differences. Multiple linear regression is used for extracting the significant predictors among the P and D sets as well as assessing the goodness of the model fit. A naïve Bayesian classifier (NBC) is then designed that predicts the presence or absence of diseases given a set of pain parameters. The most informative and highest-quality datasheet is used for training the NBC, and the remaining sheets are used for testing the performance of the classifier. A hill-climbing algorithm is used to design a learned Bayes classifier (LBC), which learns the conditional probability table (CPT) entries optimally. The developed LBC showed an average accuracy of 72%, which is clinically encouraging to the dentists. PMID:20945154

  4. A new method for classifying different phenotypes of kidney transplantation.

    PubMed

    Zhu, Dong; Liu, Zexian; Pan, Zhicheng; Qian, Mengjia; Wang, Linyan; Zhu, Tongyu; Xue, Yu; Wu, Duojiao

    2016-08-01

    For end-stage renal diseases, kidney transplantation is the most efficient treatment. However, the unexpected rejection caused by inflammation usually leads to allograft failure. Thus, a systems-level characterization of inflammation factors can provide potentially diagnostic biomarkers for predicting renal allograft rejection. Serum of kidney transplant patients with different immune status were collected and classified as transplant patients with stable renal function (ST), impaired renal function with negative biopsy pathology (UNST), acute rejection (AR), and chronic rejection (CR). The expression profiles of 40 inflammatory proteins were measured by quantitative protein microarrays and reduced to a lower dimensional space by the partial least squares (PLS) model. The determined principal components (PCs) were then trained by the support vector machines (SVMs) algorithm for classifying different phenotypes of kidney transplantation. There were 30, 16, and 13 inflammation proteins that showed statistically significant differences between CR and ST, CR and AR, and CR and UNST patients. Further analysis revealed a protein-protein interaction (PPI) network among 33 inflammatory proteins and proposed a potential role of intracellular adhesion molecule-1 (ICAM-1) in CR. Based on the network analysis and protein expression information, two PCs were determined as the major contributors and trained by the PLS-SVMs method, with a promising accuracy of 77.5 % for classification of chronic rejection after kidney transplantation. For convenience, we also developed software packages of GPS-CKT (Classification phenotype of Kidney Transplantation Predictor) for classifying phenotypes. By confirming a strong correlation between inflammation and kidney transplantation, our results suggested that the network biomarker but not single factors can potentially classify different phenotypes in kidney transplantation. PMID:27278387
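
    A bare-bones Python sketch of the PLS-then-SVM idea follows; the random 80-by-40 "protein expression" matrix, the binary outcome, and the two-component choice are placeholders, not the study's data or settings.

        # Sketch: PLS dimension reduction followed by an SVM on the latent scores.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(80, 40))            # 80 sera x 40 inflammation proteins
        y = rng.integers(0, 2, size=80)          # placeholder outcome (e.g. CR vs not)

        pls = PLSRegression(n_components=2).fit(X, y)   # two latent components
        scores = pls.transform(X)                       # PLS scores used as features
        clf = SVC(kernel="rbf").fit(scores, y)
        print("training accuracy on the toy data:", clf.score(scores, y))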

  5. Visual Empirical Region of Influence (VERI) Pattern Recognition Algorithms

    2002-05-01

    We developed new pattern recognition (PR) algorithms based on a human visual perception model. We named these algorithms Visual Empirical Region of Influence (VERI) algorithms. To compare the new algorithms' effectiveness against other PR algorithms, we benchmarked their clustering capabilities with a standard set of two-dimensional data that is well known in the PR community. The VERI algorithm succeeded in clustering all the data correctly. No existing algorithm had previously clustered all the patterns in the data set successfully. The commands to execute VERI algorithms are quite difficult to master when executed from a DOS command line, and the algorithm requires several parameters to operate correctly. From our own experience we realized that if we wanted to provide a new data analysis tool to the PR community, we would have to make the tool powerful, yet easy and intuitive to use. That was our motivation for developing graphical user interfaces (GUIs) to the VERI algorithms. We developed GUIs to control the VERI algorithm in a single-pass mode and in an optimization mode. We also developed a visualization technique that allows users to graphically animate and visually inspect multi-dimensional data after it has been classified by the VERI algorithms. The visualization package is integrated into the single-pass interface. Both the single-pass interface and the optimization interface are part of the PR software package we have developed and make available to other users. The single-pass mode only finds PR results for the sets of features in the data set that are manually requested by the user. The optimization mode uses a brute-force method of searching through the combinations of features in a data set for features that produce

  6. Meteorological correction of optical beam refraction

    SciTech Connect

    Lukin, V.P.; Melamud, A.E.; Mironov, V.L.

    1986-02-01

    At the present time laser reference systems (LRS's) are widely used in agrotechnology and in geodesy. The demands for accuracy in LRS's constantly increase, so that a study of error sources and means of considering and correcting them is of practical importance. A theoretical algorithm is presented for correction of the regular component of atmospheric refraction for various types of hydrostatic stability of the atmospheric layer adjacent to the earth. The algorithm obtained is compared to regression equations obtained by processing an experimental data base. It is shown that within admissible accuracy limits the refraction correction algorithm obtained permits construction of correction tables and design of optical systems with programmable correction for atmospheric refraction on the basis of rapid meteorological measurements.

  7. A web-based neurological pain classifier tool utilizing Bayesian decision theory for pain classification in spinal cord injury patients

    NASA Astrophysics Data System (ADS)

    Verma, Sneha K.; Chun, Sophia; Liu, Brent J.

    2014-03-01

    Pain is a common complication after spinal cord injury, with prevalence estimates ranging from 77% to 81%, and it highly affects a patient's lifestyle and well-being. In the current clinical setting, paper-based forms are used to classify pain correctly; however, the accuracy of diagnoses and optimal management of pain largely depend on the expert reviewer, which in many cases is not possible because there are very few experts in this field. The need for a clinical decision support system that can be used by expert and non-expert clinicians has been cited in the literature, but such a system has not been developed. We have designed and developed a stand-alone tool for correctly classifying pain type in spinal cord injury (SCI) patients, using Bayesian decision theory. Various machine learning simulation methods are used to verify the algorithm using a pilot-study data set of 48 patients. The data set consists of the paper-based forms collected at the Long Beach VA clinic, with pain classification done by an expert in the field. Using WEKA as the machine learning tool, we tested on the 48-patient dataset the hypothesis that the attributes collected on the forms and the pain location marked by patients have a very significant impact on the pain type classification. This tool will be integrated with an imaging informatics system to support a clinical study that will test the effectiveness of using proton beam radiotherapy for treating SCI-related neuropathic pain as an alternative to invasive surgical lesioning.

  8. A new approach to identify, classify and count drugrelated events

    PubMed Central

    Bürkle, Thomas; Müller, Fabian; Patapovas, Andrius; Sonst, Anja; Pfistermeister, Barbara; Plank-Kiegele, Bettina; Dormann, Harald; Maas, Renke

    2013-01-01

    Aims The incidence of clinical events related to medication errors and/or adverse drug reactions reported in the literature varies by a degree that cannot solely be explained by the clinical setting, the varying scrutiny of investigators or varying definitions of drug-related events. Our hypothesis was that the individual complexity of many clinical cases may pose relevant limitations for current definitions and algorithms used to identify, classify and count adverse drug-related events. Methods Based on clinical cases derived from an observational study we identified and classified common clinical problems that cannot be adequately characterized by the currently used definitions and algorithms. Results It appears that some key models currently used to describe the relation of medication errors (MEs), adverse drug reactions (ADRs) and adverse drug events (ADEs) can easily be misinterpreted or contain logical inconsistencies that limit their accurate use to all but the simplest clinical cases. A key limitation of current models is the inability to deal with complex interactions such as one drug causing two clinically distinct side effects or multiple drugs contributing to a single clinical event. Using a large set of clinical cases we developed a revised model of the interdependence between MEs, ADEs and ADRs and extended current event definitions when multiple medications cause multiple types of problems. We propose algorithms that may help to improve the identification, classification and counting of drug-related events. Conclusions The new model may help to overcome some of the limitations that complex clinical cases pose to current paper- or software-based drug therapy safety. PMID:24007453

  9. Use of genetic algorithm for the selection of EEG features

    NASA Astrophysics Data System (ADS)

    Asvestas, P.; Korda, A.; Kostopoulos, S.; Karanasiou, I.; Ouzounoglou, A.; Sidiropoulos, K.; Ventouras, E.; Matsopoulos, G.

    2015-09-01

    Genetic Algorithm (GA) is a popular optimization technique that can detect the global optimum of a multivariable function containing several local optima. GA has been widely used in the field of biomedical informatics, especially in the context of designing decision support systems that classify biomedical signals or images into classes of interest. The aim of this paper is to present a methodology, based on GA, for the selection of the optimal subset of features that can be used for the efficient classification of Event Related Potentials (ERPs), which are recorded during the observation of correct or incorrect actions. In our experiment, ERP recordings were acquired from sixteen (16) healthy volunteers who observed correct or incorrect actions of other subjects. The brain electrical activity was recorded at 47 locations on the scalp. The GA was formulated as a combinatorial optimizer for the selection of the combination of electrodes that maximizes the performance of the Fuzzy C Means (FCM) classification algorithm. In particular, during the evolution of the GA, for each candidate combination of electrodes, the well-known (Σ, Φ, Ω) features were calculated and were evaluated by means of the FCM method. The proposed methodology provided a combination of 8 electrodes, with classification accuracy 93.8%. Thus, GA can be the basis for the selection of features that discriminate ERP recordings of observations of correct or incorrect actions.
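
    A toy Python version of GA-based channel selection is sketched below; to keep it self-contained it scores electrode subsets with a cross-validated k-nearest-neighbour classifier rather than the FCM procedure of the paper, and the simulated ERP data, population size, and mutation rate are all placeholder assumptions.

        # Sketch: a small GA that evolves binary electrode masks toward higher
        # cross-validated classification accuracy.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(0)
        n_trials, n_electrodes = 120, 47
        X = rng.normal(size=(n_trials, n_electrodes))
        X[:60, :8] += 1.0                               # electrodes 0-7 carry signal
        y = np.array([0] * 60 + [1] * 60)

        def fitness(mask):
            if mask.sum() == 0:
                return 0.0
            cols = mask.astype(bool)
            return cross_val_score(KNeighborsClassifier(), X[:, cols], y, cv=3).mean()

        pop = rng.integers(0, 2, size=(20, n_electrodes))          # initial population
        for _ in range(15):                                        # generations
            scores = np.array([fitness(ind) for ind in pop])
            parents = pop[np.argsort(scores)[-10:]]                # keep 10 fittest
            children = []
            for _ in range(10):
                a, b = parents[rng.integers(0, 10, 2)]
                cut = rng.integers(1, n_electrodes)                # one-point crossover
                child = np.concatenate([a[:cut], b[cut:]])
                flip = rng.random(n_electrodes) < 0.02             # mutation
                child[flip] = 1 - child[flip]
                children.append(child)
            pop = np.vstack([parents, children])

        best = pop[np.argmax([fitness(ind) for ind in pop])]
        print("selected electrodes:", np.flatnonzero(best))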

  10. Visualizing Validation of Protein Surface Classifiers.

    PubMed

    Sarikaya, A; Albers, D; Mitchell, J; Gleicher, M

    2014-06-01

    Many bioinformatics applications construct classifiers that are validated in experiments that compare their results to known ground truth over a corpus. In this paper, we introduce an approach for exploring the results of such classifier validation experiments, focusing on classifiers for regions of molecular surfaces. We provide a tool that allows for examining classification performance patterns over a test corpus. The approach combines a summary view that provides information about an entire corpus of molecules with a detail view that visualizes classifier results directly on protein surfaces. Rather than displaying miniature 3D views of each molecule, the summary provides 2D glyphs of each protein surface arranged in a reorderable, small-multiples grid. Each summary is specifically designed to support visual aggregation to allow the viewer to both get a sense of aggregate properties as well as the details that form them. The detail view provides a 3D visualization of each protein surface coupled with interaction techniques designed to support key tasks, including spatial aggregation and automated camera touring. A prototype implementation of our approach is demonstrated on protein surface classifier experiments. PMID:25342867

  11. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  12. A CORRECTION.

    PubMed

    Johnson, D

    1940-03-22

    In a recently published volume on "The Origin of Submarine Canyons" the writer inadvertently credited to A. C. Veatch an excerpt from a submarine chart actually contoured by P. A. Smith, of the U. S. Coast and Geodetic Survey. The chart in question is Chart IVB of Special Paper No. 7 of the Geological Society of America entitled "Atlantic Submarine Valleys of the United States and the Congo Submarine Valley, by A. C. Veatch and P. A. Smith," and the excerpt appears as Plate III of the volume first cited above. In view of the heavy labor involved in contouring the charts accompanying the paper by Veatch and Smith and the beauty of the finished product, it would be unfair to Mr. Smith to permit the error to go uncorrected. Excerpts from two other charts are correctly ascribed to Dr. Veatch. PMID:17839404

  13. What are the differences between Bayesian classifiers and mutual-information classifiers?

    PubMed

    Hu, Bao-Gang

    2014-02-01

    In this paper, both Bayesian and mutual-information classifiers are examined for binary classifications with or without a reject option. The general decision rules are derived for Bayesian classifiers with distinctions on error types and reject types. A formal analysis is conducted to reveal the parameter redundancy of cost terms when abstaining classifications are enforced. The redundancy implies an intrinsic problem of nonconsistency for interpreting cost terms. If no data are given to the cost terms, we demonstrate the weakness of Bayesian classifiers in class-imbalanced classifications. On the contrary, mutual-information classifiers are able to provide an objective solution from the given data, which shows a reasonable balance among error types and reject types. Numerical examples of using two types of classifiers are given for confirming the differences, including the extremely class-imbalanced cases. Finally, we briefly summarize the Bayesian and mutual-information classifiers in terms of their application advantages and disadvantages, respectively. PMID:24807026
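
    For context, the classical Bayesian decision rule with a reject option (Chow's rule), which underlies abstaining classification in general rather than this paper's specific formulation, can be written as

        \hat{y}(x) =
        \begin{cases}
          \arg\max_{k} P(y = k \mid x), & \text{if } \max_{k} P(y = k \mid x) \ge 1 - t,\\
          \text{reject}, & \text{otherwise,}
        \end{cases}

    where the threshold t is set by the relative costs of misclassification and rejection.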

  14. X-ray scatter correction in breast tomosynthesis with a precomputed scatter map library

    PubMed Central

    Feng, Steve Si Jia; D’Orsi, Carl J.; Newell, Mary S.; Seidel, Rebecca L.; Patel, Bhavika; Sechopoulos, Ioannis

    2014-01-01

    Purpose: To develop and evaluate the impact on lesion conspicuity of a software-based x-ray scatter correction algorithm for digital breast tomosynthesis (DBT) imaging into which a precomputed library of x-ray scatter maps is incorporated. Methods: A previously developed model of compressed breast shapes undergoing mammography based on principal component analysis (PCA) was used to assemble 540 simulated breast volumes, of different shapes and sizes, undergoing DBT. A Monte Carlo (MC) simulation was used to generate the cranio-caudal (CC) view DBT x-ray scatter maps of these volumes, which were then assembled into a library. This library was incorporated into a previously developed software-based x-ray scatter correction, and the performance of this improved algorithm was evaluated with an observer study of 40 patient cases previously classified as BI-RADS® 4 or 5, evenly divided between mass and microcalcification cases. Observers were presented with both the original images and the scatter corrected (SC) images side by side and asked to indicate their preference, on a scale from −5 to +5, in terms of lesion conspicuity and quality of diagnostic features. Scores were normalized such that a negative score indicates a preference for the original images, and a positive score indicates a preference for the SC images. Results: The scatter map library removes the time-intensive MC simulation from the application of the scatter correction algorithm. While only one in four observers preferred the SC DBT images as a whole (combined mean score = 0.169 ± 0.37, p > 0.39), all observers exhibited a preference for the SC images when the lesion examined was a mass (1.06 ± 0.45, p < 0.0001). When the lesion examined consisted of microcalcification clusters, the observers exhibited a preference for the uncorrected images (−0.725 ± 0.51, p < 0.009). Conclusions: The incorporation of the x-ray scatter map library into the scatter correction algorithm improves the efficiency

  15. A hybrid classifier fusion approach for motor unit potential classification during EMG signal decomposition.

    PubMed

    Rasheed, Sarbast; Stashuk, Daniel W; Kamel, Mohamed S

    2007-09-01

    In this paper, we propose a hybrid classifier fusion scheme for motor unit potential classification during electromyographic (EMG) signal decomposition. The scheme uses an aggregator module consisting of two stages of classifier fusion: the first at the abstract level using class labels and the second at the measurement level using confidence values. Performance of the developed system was evaluated using one set of real signals and two sets of simulated signals and was compared with the performance of the constituent base classifiers and the performance of a one-stage classifier fusion approach. Across the EMG signal data sets used and relative to the performance of base classifiers, the hybrid approach had better average classification performance overall. For the set of simulated signals of varying intensity, the hybrid classifier fusion system had on average an improved correct classification rate (CCr) (6.1%) and reduced error rate (Er) (0.4%). For the set of simulated signals of varying amounts of shape and/or firing pattern variability, the hybrid classifier fusion system had on average an improved CCr (6.2%) and reduced Er (0.9%). For real signals, the hybrid classifier fusion system had on average an improved CCr (7.5%) and reduced Er (1.7%). PMID:17867366
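
    A generic two-stage fusion loop (majority vote on labels first, averaged confidences to resolve disagreements) is sketched below in Python purely to illustrate the abstract-level/measurement-level distinction; it is not the authors' aggregator, and the data and base classifiers are placeholders.

        # Sketch: fuse base classifiers by label vote, falling back to averaged
        # class probabilities when the vote is not decisive.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.naive_bayes import GaussianNB
        from sklearn.neighbors import KNeighborsClassifier

        X, y = make_classification(n_samples=300, n_features=10, n_informative=5,
                                   n_classes=3, random_state=0)
        X_train, y_train, X_test, y_test = X[:200], y[:200], X[200:], y[200:]

        bases = [LogisticRegression(max_iter=1000), GaussianNB(), KNeighborsClassifier()]
        for clf in bases:
            clf.fit(X_train, y_train)

        labels = np.array([clf.predict(X_test) for clf in bases])              # stage 1
        probs = np.mean([clf.predict_proba(X_test) for clf in bases], axis=0)  # stage 2

        fused = []
        for i in range(X_test.shape[0]):
            votes = np.bincount(labels[:, i], minlength=3)
            if votes.max() >= 2:                     # clear majority at the label level
                fused.append(int(np.argmax(votes)))
            else:                                    # fall back to confidence averaging
                fused.append(int(np.argmax(probs[i])))
        print("fused accuracy on the toy test set:", np.mean(np.array(fused) == y_test))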

  16. Classifying Multiple Imbalanced Attributes in Relational Data

    NASA Astrophysics Data System (ADS)

    Ghanem, Amal S.; Venkatesh, Svetha; West, Geoff

    Real-world data are often stored as relational database systems with different numbers of significant attributes. Unfortunately, most classification techniques are proposed for learning from balanced non-relational data and mainly for classifying one single attribute. In this paper, we propose an approach for learning from relational data with the specific goal of classifying multiple imbalanced attributes. In our approach, we extend a relational modelling technique (PRMs-IM) designed for imbalanced relational learning to deal with multiple imbalanced attributes classification. We address the problem of classifying multiple imbalanced attributes by enriching the PRMs-IM with the "Bagging" classification ensemble. We evaluate our approach on real-world imbalanced student relational data and demonstrate its effectiveness in predicting student performance.

  17. Reinforcement Learning Based Artificial Immune Classifier

    PubMed Central

    Karakose, Mehmet

    2013-01-01

    One of the widely used methods for classification that is a decision-making process is artificial immune systems. Artificial immune systems based on natural immunity system can be successfully applied for classification, optimization, recognition, and learning in real-world problems. In this study, a reinforcement learning based artificial immune classifier is proposed as a new approach. This approach uses reinforcement learning to find better antibody with immune operators. The proposed new approach has many contributions according to other methods in the literature such as effectiveness, less memory cell, high accuracy, speed, and data adaptability. The performance of the proposed approach is demonstrated by simulation and experimental results using real data in Matlab and FPGA. Some benchmark data and remote image data are used for experimental results. The comparative results with supervised/unsupervised based artificial immune system, negative selection classifier, and resource limited artificial immune classifier are given to demonstrate the effectiveness of the proposed new method. PMID:23935424

  18. An automated approach to the design of decision tree classifiers

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Chin, P.; Beaudet, P.

    1980-01-01

    The classification of large dimensional data sets arising from the merging of remote sensing data with more traditional forms of ancillary data is considered. Decision tree classification, a popular approach to the problem, is characterized by the property that samples are subjected to a sequence of decision rules before they are assigned to a unique class. An automated technique for effective decision tree design which relies only on a priori statistics is presented. This procedure utilizes a set of two-dimensional canonical transforms and Bayes table look-up decision rules. An optimal design at each node is derived based on the associated decision table. A procedure for computing the global probability of correct classification is also provided. An example is given in which class statistics obtained from an actual LANDSAT scene are used as input to the program. The resulting decision tree design has an associated probability of correct classification of .76 compared to the theoretically optimum .79 probability of correct classification associated with a full dimensional Bayes classifier. Recommendations for future research are included.

  19. Dengue--how best to classify it.

    PubMed

    Srikiatkhachorn, Anon; Rothman, Alan L; Gibbons, Robert V; Sittisombut, Nopporn; Malasit, Prida; Ennis, Francis A; Nimmannitya, Suchitra; Kalayanarooj, Siripen

    2011-09-01

    Dengue has emerged as a major public health problem worldwide. Dengue virus infection causes a wide range of clinical manifestations. Since the 1970s, clinical dengue has been classified according to the World Health Organization guideline as dengue fever and dengue hemorrhagic fever. The classification has been criticized with regard to its usefulness and its applicability. In 2009, the World Health Organization issued a new guideline that classifies clinical dengue as dengue and severe dengue. The 2009 classification differs significantly from the previous classification in both conceptual and practical levels. The impacts of the new classification on clinical practice, dengue research, and public health policy are discussed. PMID:21832264

  20. A survey of decision tree classifier methodology

    NASA Technical Reports Server (NTRS)

    Safavian, S. Rasoul; Landgrebe, David

    1990-01-01

    Decision Tree Classifiers (DTC's) are used successfully in many diverse areas such as radar signal classification, character recognition, remote sensing, medical diagnosis, expert systems, and speech recognition. Perhaps the most important feature of DTC's is their capability to break down a complex decision-making process into a collection of simpler decisions, thus providing a solution which is often easier to interpret. A survey of current methods for DTC design and the various existing issues is presented. After considering potential advantages of DTC's over single-stage classifiers, the subjects of tree structure design, feature selection at each internal node, and decision and search strategies are discussed.