Science.gov

Sample records for algorithm correctly classified

  1. Temperature Corrected Bootstrap Algorithm

    NASA Technical Reports Server (NTRS)

    Comiso, Joey C.; Zwally, H. Jay

    1997-01-01

    A temperature corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation for the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm but using brightness temperatures from the 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice temperatures, which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as in the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in areas where the ice temperature is expected to vary considerably, such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
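
    A minimal sketch of the mixing-and-conversion steps described above, assuming a simple linear emissivity mixing rule and Tb = e*Ts; the numeric emissivities and brightness temperatures are illustrative placeholders, not the published Bootstrap tie points (Python):

      # Sketch: effective surface emissivity from an ice-concentration mixing rule,
      # then conversion of brightness temperature to emissivity.
      # The numeric values below are placeholders, not the published tie points.

      def effective_emissivity(ice_conc, e_ice=0.92, e_water=0.45):
          """Linear mixing of ice and open-water emissivities at a given channel."""
          return ice_conc * e_ice + (1.0 - ice_conc) * e_water

      def surface_temperature(tb_6ghz, e_eff_6ghz):
          """Surface temperature inferred from the 6 GHz channel, assuming Tb = e * Ts."""
          return tb_6ghz / e_eff_6ghz

      def to_emissivity(tb, t_surface):
          """Convert an 18 or 37 GHz brightness temperature to emissivity."""
          return tb / t_surface

      ice_conc = 0.8                      # from the current Bootstrap algorithm
      ts = surface_temperature(235.0, effective_emissivity(ice_conc))
      e37 = to_emissivity(210.0, ts)      # the emissivity then feeds the retrieval
      print(round(ts, 1), round(e37, 3))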

  2. Learning algorithms for stack filter classifiers

    SciTech Connect

    Porter, Reid B; Hush, Don; Zimmer, Beate G

    2009-01-01

    Stack Filters define a large class of increasing filters that are widely used in image and signal processing. The motivations for using an increasing filter instead of an unconstrained filter have been described as: (1) fast and efficient implementation, (2) the relationship to mathematical morphology and (3) more precise estimation with finite sample data. This last motivation is related to methods developed in machine learning, and the relationship was explored in an earlier paper. In this paper we investigate this relationship by applying Stack Filters directly to classification problems. This provides a new perspective on how monotonicity constraints can help control estimation and approximation errors, and also suggests several new learning algorithms for Boolean function classifiers when they are applied to real-valued inputs.

  3. Error minimizing algorithms for nearest neighbor classifiers

    SciTech Connect

    Porter, Reid B; Hush, Don; Zimmer, G. Beate

    2011-01-03

    Stack Filters define a large class of discrete nonlinear filters first introduced in image and signal processing for noise removal. In recent years we have suggested their application to classification problems, and investigated their relationship to other types of discrete classifiers such as Decision Trees. In this paper we focus on a continuous domain version of Stack Filter Classifiers which we call Ordered Hypothesis Machines (OHM), and investigate their relationship to Nearest Neighbor classifiers. We show that OHM classifiers provide a novel framework in which to train Nearest Neighbor type classifiers by minimizing empirical error based loss functions. We use the framework to investigate a new cost sensitive loss function that allows us to train a Nearest Neighbor type classifier for low false alarm rate applications. We report results on both synthetic data and real-world image data.

  4. Search-and-score structure learning algorithm for Bayesian network classifiers

    NASA Astrophysics Data System (ADS)

    Pernkopf, Franz; O'Leary, Paul

    2003-04-01

    This paper presents a search-and-score approach for determining the network structure of Bayesian network classifiers. A selective unrestricted Bayesian network classifier is used which, in combination with the search algorithm, allows simultaneous feature selection and determination of the structure of the classifier. The introduced search algorithm enables conditional exclusions of previously added attributes and/or arcs from the network classifier. Hence, this algorithm is able to correct the network structure by removing attributes and/or arcs between the nodes if they become superfluous at a later stage of the search. Classification results of selective unrestricted Bayesian network classifiers are compared to naive Bayes classifiers and tree augmented naive Bayes classifiers. Experiments on different data sets show that selective unrestricted Bayesian network classifiers achieve a better classification accuracy estimate in two domains compared to tree augmented naive Bayes classifiers, while in the remaining domains the performance is similar. However, the achieved network structure of selective unrestricted Bayesian network classifiers is simpler and computationally more efficient.

  5. Correcting evaluation bias of relational classifiers with network cross validation

    DOE PAGES

    Neville, Jennifer; Gallagher, Brian; Eliassi-Rad, Tina; ...

    2011-01-04

    Recently, a number of modeling techniques have been developed for data mining and machine learning in relational and network domains where the instances are not independent and identically distributed (i.i.d.). These methods specifically exploit the statistical dependencies among instances in order to improve classification accuracy. However, there has been little focus on how these same dependencies affect our ability to draw accurate conclusions about the performance of the models. More specifically, the complex link structure and attribute dependencies in relational data violate the assumptions of many conventional statistical tests and make it difficult to use these tests to assess the models in an unbiased manner. In this work, we examine the task of within-network classification and the question of whether two algorithms will learn models that will result in significantly different levels of performance. We show that the commonly used form of evaluation (paired t-test on overlapping network samples) can result in an unacceptable level of Type I error. Furthermore, we show that Type I error increases as (1) the correlation among instances increases and (2) the size of the evaluation set increases (i.e., the proportion of labeled nodes in the network decreases). Lastly, we propose a method for network cross-validation that combined with paired t-tests produces more acceptable levels of Type I error while still providing reasonable levels of statistical power (i.e., 1–Type II error).

  6. Correcting evaluation bias of relational classifiers with network cross validation

    SciTech Connect

    Neville, Jennifer; Gallagher, Brian; Eliassi-Rad, Tina; Wang, Tao

    2011-01-04

    Recently, a number of modeling techniques have been developed for data mining and machine learning in relational and network domains where the instances are not independent and identically distributed (i.i.d.). These methods specifically exploit the statistical dependencies among instances in order to improve classification accuracy. However, there has been little focus on how these same dependencies affect our ability to draw accurate conclusions about the performance of the models. More specifically, the complex link structure and attribute dependencies in relational data violate the assumptions of many conventional statistical tests and make it difficult to use these tests to assess the models in an unbiased manner. In this work, we examine the task of within-network classification and the question of whether two algorithms will learn models that will result in significantly different levels of performance. We show that the commonly used form of evaluation (paired t-test on overlapping network samples) can result in an unacceptable level of Type I error. Furthermore, we show that Type I error increases as (1) the correlation among instances increases and (2) the size of the evaluation set increases (i.e., the proportion of labeled nodes in the network decreases). Lastly, we propose a method for network cross-validation that combined with paired t-tests produces more acceptable levels of Type I error while still providing reasonable levels of statistical power (i.e., 1–Type II error).
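
    For orientation, a sketch of the evaluation practice the paper critiques: a paired t-test over the per-sample accuracies of two algorithms, here with synthetic numbers and scipy (the proposed network cross-validation procedure itself is not reproduced):

      # Sketch: the paired t-test the paper identifies as biased when samples overlap.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      # accuracies of two algorithms on the same 10 (possibly overlapping) network samples
      acc_a = rng.normal(0.80, 0.02, size=10)
      acc_b = rng.normal(0.79, 0.02, size=10)

      t_stat, p_value = stats.ttest_rel(acc_a, acc_b)
      print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
      # The paper's point: with overlapping network samples these p-values become
      # anti-conservative (inflated Type I error); the proposed network
      # cross-validation is designed to reduce that overlap.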

  7. Effectiveness of feature and classifier algorithms in character recognition systems

    NASA Astrophysics Data System (ADS)

    Wilson, Charles L.

    1993-04-01

    At the first Census Optical Character Recognition Systems Conference, NIST generated accuracy data for more than character recognition systems. Most systems were tested on the recognition of isolated digits and upper and lower case alphabetic characters. The recognition experiments were performed on sample sizes of 58,000 digits, and 12,000 upper and lower case alphabetic characters. The algorithms used by the 26 conference participants included rule-based methods, image-based methods, statistical methods, and neural networks. The neural network methods included Multi-Layer Perceptrons, Learned Vector Quantization, Neocognitrons, and cascaded neural networks. In this paper 11 different systems are compared using correlations between the answers of different systems, comparing the decrease in error rate as a function of confidence of recognition, and comparing the writer dependence of recognition. This comparison shows that methods that used different algorithms for feature extraction and recognition performed with very high levels of correlation. This is true for neural network systems, hybrid systems, and statistically based systems, and leads to the conclusion that neural networks have not yet demonstrated a clear superiority to more conventional statistical methods. Comparison of these results with the models of Vapnik (for estimation problems), MacKay (for Bayesian statistical models), Moody (for effective parameterization), and Boltzmann models (for information content) demonstrates that as the limits of training data variance are approached, all classifier systems have similar statistical properties. The limiting condition can only be approached for sufficiently rich feature sets because the accuracy limit is controlled by the available information content of the training set, which must pass through the feature extraction process prior to classification.

  8. Genetic algorithms and classifier systems: Foundations and future directions

    SciTech Connect

    Holland, J.H.

    1987-01-01

    Theoretical questions about classifier systems, with rare exceptions, apply equally to other adaptive nonlinear networks (ANNs) such as the connectionist models of cognitive psychology, the immune system, economic systems, ecologies, and genetic systems. This paper discusses pervasive properties of ANNs and the kinds of mathematics relevant to questions about these properties. It discusses relevant functional extensions of the basic classifier system and extensions of the extant mathematical theory. An appendix briefly reviews some of the key theorems about classifier systems. 6 refs.

  9. An Evaluation of Information Criteria Use for Correct Cross-Classified Random Effects Model Selection

    ERIC Educational Resources Information Center

    Beretvas, S. Natasha; Murphy, Daniel L.

    2013-01-01

    The authors assessed correct model identification rates of Akaike's information criterion (AIC), the corrected AIC (AICC), the consistent AIC (CAIC), Hannan and Quinn's information criterion (HQIC), and the Bayesian information criterion (BIC) for selecting among cross-classified random effects models. Performance of default values for the 5…

  10. Spectral areas and ratios classifier algorithm for pancreatic tissue classification using optical spectroscopy.

    PubMed

    Chandra, Malavika; Scheiman, James; Simeone, Diane; McKenna, Barbara; Purdy, Julianne; Mycek, Mary-Ann

    2010-01-01

    Pancreatic adenocarcinoma is one of the leading causes of cancer death, in part because of the inability of current diagnostic methods to reliably detect early-stage disease. We present the first assessment of the diagnostic accuracy of algorithms developed for pancreatic tissue classification using data from fiber optic probe-based bimodal optical spectroscopy, a real-time approach that would be compatible with minimally invasive diagnostic procedures for early cancer detection in the pancreas. A total of 96 fluorescence and 96 reflectance spectra are considered from 50 freshly excised tissue sites, including human pancreatic adenocarcinoma, chronic pancreatitis (inflammation), and normal tissues, on nine patients. Classification algorithms using linear discriminant analysis are developed to distinguish among tissues, and leave-one-out cross-validation is employed to assess the classifiers' performance. The spectral areas and ratios classifier (SpARC) algorithm employs a combination of reflectance and fluorescence data and has the best performance, with sensitivity, specificity, negative predictive value, and positive predictive value for correctly identifying adenocarcinoma of 85, 89, 92, and 80%, respectively.

  11. Spectral areas and ratios classifier algorithm for pancreatic tissue classification using optical spectroscopy

    NASA Astrophysics Data System (ADS)

    Chandra, Malavika; Scheiman, James; Simeone, Diane; McKenna, Barbara; Purdy, Julianne; Mycek, Mary-Ann

    2010-01-01

    Pancreatic adenocarcinoma is one of the leading causes of cancer death, in part because of the inability of current diagnostic methods to reliably detect early-stage disease. We present the first assessment of the diagnostic accuracy of algorithms developed for pancreatic tissue classification using data from fiber optic probe-based bimodal optical spectroscopy, a real-time approach that would be compatible with minimally invasive diagnostic procedures for early cancer detection in the pancreas. A total of 96 fluorescence and 96 reflectance spectra are considered from 50 freshly excised tissue sites, including human pancreatic adenocarcinoma, chronic pancreatitis (inflammation), and normal tissues, on nine patients. Classification algorithms using linear discriminant analysis are developed to distinguish among tissues, and leave-one-out cross-validation is employed to assess the classifiers' performance. The spectral areas and ratios classifier (SpARC) algorithm employs a combination of reflectance and fluorescence data and has the best performance, with sensitivity, specificity, negative predictive value, and positive predictive value for correctly identifying adenocarcinoma of 85, 89, 92, and 80%, respectively.
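
    A sketch of the evaluation scheme described above (linear discriminant analysis assessed by leave-one-out cross-validation), using scikit-learn on random placeholder features rather than the authors' spectra:

      # Sketch: LDA classifier assessed by leave-one-out cross-validation,
      # as in the SpARC evaluation; the data below are random placeholders.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import LeaveOneOut, cross_val_predict

      rng = np.random.default_rng(1)
      X = rng.normal(size=(50, 4))        # e.g. spectral areas and ratios per tissue site
      y = rng.integers(0, 2, size=50)     # 1 = adenocarcinoma, 0 = other tissue

      pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())

      tp = np.sum((pred == 1) & (y == 1)); tn = np.sum((pred == 0) & (y == 0))
      fp = np.sum((pred == 1) & (y == 0)); fn = np.sum((pred == 0) & (y == 1))
      print("sensitivity", tp / (tp + fn), "specificity", tn / (tn + fp))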

  12. Indications for spine surgery: validation of an administrative coding algorithm to classify degenerative diagnoses

    PubMed Central

    Lurie, Jon D.; Tosteson, Anna N.A.; Deyo, Richard A.; Tosteson, Tor; Weinstein, James; Mirza, Sohail K.

    2014-01-01

    Study Design Retrospective analysis of Medicare claims linked to a multi-center clinical trial. Objective The Spine Patient Outcomes Research Trial (SPORT) provided a unique opportunity to examine the validity of a claims-based algorithm for grouping patients by surgical indication. SPORT enrolled patients for lumbar disc herniation, spinal stenosis, and degenerative spondylolisthesis. We compared the surgical indication derived from Medicare claims to that provided by SPORT surgeons, the “gold standard”. Summary of Background Data Administrative data are frequently used to report procedure rates, surgical safety outcomes, and costs in the management of spinal surgery. However, the accuracy of using diagnosis codes to classify patients by surgical indication has not been examined. Methods Medicare claims were linked to beneficiaries enrolled in SPORT. The sensitivity and specificity of three claims-based approaches to group patients based on surgical indications were examined: 1) using the first listed diagnosis; 2) using all diagnoses independently; and 3) using a diagnosis hierarchy based on the support for fusion surgery. Results Medicare claims were obtained from 376 SPORT participants, including 21 with disc herniation, 183 with spinal stenosis, and 172 with degenerative spondylolisthesis. The hierarchical coding algorithm was the most accurate approach for classifying patients by surgical indication, with sensitivities of 76.2%, 88.1%, and 84.3% for the disc herniation, spinal stenosis, and degenerative spondylolisthesis cohorts, respectively. The specificity was 98.3% for disc herniation, 83.2% for spinal stenosis, and 90.7% for degenerative spondylolisthesis. Misclassifications were primarily due to codes attributing more complex pathology to the case. Conclusion Standardized approaches for using claims data to accurately group patients by surgical indications have widespread interest. We found that a hierarchical coding approach correctly classified over 90
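
    A minimal sketch of a hierarchy-based grouping rule of the kind described; the diagnosis-code sets and priority order below are illustrative stand-ins, not the validated algorithm:

      # Sketch: hierarchical assignment of a surgical indication from claim diagnoses.
      # The code lists and priority order are illustrative, not the validated sets.
      HIERARCHY = [                       # highest-priority (most complex) group first
          ("degenerative spondylolisthesis", {"738.4", "756.12"}),
          ("spinal stenosis",               {"724.02", "724.03"}),
          ("disc herniation",               {"722.10", "722.2"}),
      ]

      def classify_indication(diagnosis_codes):
          """Return the first hierarchy group matched by any diagnosis on the claim."""
          codes = set(diagnosis_codes)
          for label, group in HIERARCHY:
              if codes & group:
                  return label
          return "unclassified"

      print(classify_indication(["722.10", "724.02"]))  # -> "spinal stenosis"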

  13. Algorithmic Error Correction of Impedance Measuring Sensors

    PubMed Central

    Starostenko, Oleg; Alarcon-Aquino, Vicente; Hernandez, Wilmar; Sergiyenko, Oleg; Tyrsa, Vira

    2009-01-01

    This paper describes novel design concepts and some advanced techniques proposed for increasing the accuracy of low cost impedance measuring devices without reduction of operational speed. The proposed structural method for algorithmic error correction and the iterating correction method provide linearization of the transfer functions of the measuring sensor and signal conditioning converter, which contribute the principal additive and relative measurement errors. Some measuring systems have been implemented in order to estimate in practice the performance of the proposed methods. In particular, a measuring system for analysis of C-V, G-V characteristics has been designed and constructed. It has been tested during technological process control of charge-coupled device (CCD) manufacturing. The obtained results are discussed in order to define a reasonable range of application of the methods, their utility, and performance. PMID:22303177

  14. Adaptive phase aberration correction based on imperialist competitive algorithm.

    PubMed

    Yazdani, R; Hajimahmoodzadeh, M; Fallah, H R

    2014-01-01

    We investigate numerically the feasibility of phase aberration correction in a wavefront sensorless adaptive optical system, based on the imperialist competitive algorithm (ICA). Considering a 61-element deformable mirror (DM) and the Strehl ratio as the cost function of ICA, this algorithm is employed to search the optimum surface profile of DM for correcting the phase aberrations in a solid-state laser system. The correction results show that ICA is a powerful correction algorithm for static or slowly changing phase aberrations in optical systems, such as solid-state lasers. The correction capability and the convergence speed of this algorithm are compared with those of the genetic algorithm (GA) and stochastic parallel gradient descent (SPGD) algorithm. The results indicate that these algorithms have almost the same correction capability. Also, ICA and GA are almost the same in convergence speed and SPGD is the fastest of these algorithms.

  15. Classifying algorithms for SIFT-MS technology and medical diagnosis.

    PubMed

    Moorhead, K T; Lee, D; Chase, J G; Moot, A R; Ledingham, K M; Scotter, J; Allardyce, R A; Senthilmohan, S T; Endre, Z

    2008-03-01

    Selected Ion Flow Tube-Mass Spectrometry (SIFT-MS) is an analytical technique for real-time quantification of trace gases in air or breath samples. The SIFT-MS system thus offers unique potential for early, rapid detection of disease states. Identification of volatile organic compound (VOC) masses that contribute strongly towards a successful classification clearly highlights potential new biomarkers. A method utilising kernel density estimates is thus presented for classifying unknown samples. It is validated in a simple known case and in a clinical setting before and after dialysis. The simple case with nitrogen in Tedlar bags returned a 100% success rate, as expected. The clinical proof-of-concept with seven tests on one patient had an ROC curve area of 0.89. These results validate the method presented and illustrate the emerging clinical potential of this technology.
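
    A small sketch of a kernel-density-estimate classifier in the general spirit of the method described, using scipy's Gaussian KDE on synthetic one-dimensional intensities (the actual feature set and bandwidth choices are not given in the abstract):

      # Sketch: classify an unknown sample by comparing class-conditional kernel
      # density estimates; the data are synthetic stand-ins for VOC mass intensities.
      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(2)
      pre_dialysis  = rng.normal(5.0, 1.0, size=40)   # training intensities, class A
      post_dialysis = rng.normal(3.0, 1.0, size=40)   # training intensities, class B

      kde_pre, kde_post = gaussian_kde(pre_dialysis), gaussian_kde(post_dialysis)

      def classify(x):
          """Assign the class whose estimated density is higher at x."""
          return "pre" if kde_pre(x)[0] > kde_post(x)[0] else "post"

      print(classify(4.6), classify(2.8))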

  16. Combining classifiers generated by multi-gene genetic programming for protein fold recognition using genetic algorithm.

    PubMed

    Bardsiri, Mahshid Khatibi; Eftekhari, Mahdi; Mousavi, Reza

    2015-01-01

    In this study the problem of protein fold recognition, which is a classification task, is solved via a hybrid of evolutionary algorithms, namely multi-gene Genetic Programming (GP) and a Genetic Algorithm (GA). Our proposed method consists of two main stages and is performed on three datasets taken from the literature. Each dataset contains different feature groups and classes. In the first stage, multi-gene GP is used for producing binary classifiers based on various feature groups for each class. Then, the different classifiers obtained for each class are combined via weighted voting, with the weights determined through GA. At the end of the first stage, there is a separate binary classifier for each class. In the second stage, the obtained binary classifiers are combined via GA weighting in order to generate the overall classifier. The final classifier is superior to previous works found in the literature in terms of classification accuracy.

  17. [An Algorithm for Correcting Fetal Heart Rate Baseline].

    PubMed

    Li, Xiaodong; Lu, Yaosheng

    2015-10-01

    Fetal heart rate (FHR) baseline estimation is of significance for the computerized analysis of fetal heart rate and the assessment of fetal state. In our work, an FHR baseline correction algorithm was presented to make the existing baseline more accurate and better fitted to the tracings. First, the deviation of the existing FHR baseline was found and corrected. Then a new baseline was obtained after applying some smoothing methods. To assess the performance of the FHR baseline correction algorithm, a new FHR baseline estimation algorithm that combined a baseline estimation algorithm with the baseline correction algorithm was compared with two existing FHR baseline estimation algorithms. The results showed that the new FHR baseline estimation algorithm performed well in both accuracy and efficiency. The results also proved the effectiveness of the FHR baseline correction algorithm.

  18. Ensemble of classifiers to improve accuracy of the CLIP4 machine-learning algorithm

    NASA Astrophysics Data System (ADS)

    Kurgan, Lukasz; Cios, Krzysztof J.

    2002-03-01

    Machine learning, one of the data mining and knowledge discovery tools, addresses automated extraction of knowledge from data, expressed in the form of production rules. The paper describes a method for improving the accuracy of rules generated by an inductive machine learning algorithm by generating an ensemble of classifiers. It generates multiple classifiers using the CLIP4 algorithm and combines them using a voting scheme. The generation of a set of different classifiers is performed by injecting controlled randomness into the learning algorithm, but without modifying the training data set. Our method is based on the characteristic properties of the CLIP4 algorithm. The case study of the SPECT heart image analysis system is used as an example where improving accuracy is very important. Benchmarking results on other well-known machine learning datasets, and a comparison with an algorithm that uses a boosting technique to improve its accuracy, are also presented. The proposed method always improves the accuracy of the results when compared with the accuracy of a single classifier generated by the CLIP4 algorithm, as opposed to using boosting. The obtained results are comparable with other state-of-the-art machine learning algorithms.

  19. Comparison of Genetic Algorithm, Particle Swarm Optimization and Biogeography-based Optimization for Feature Selection to Classify Clusters of Microcalcifications

    NASA Astrophysics Data System (ADS)

    Khehra, Baljit Singh; Pharwaha, Amar Partap Singh

    2016-06-01

    Ductal carcinoma in situ (DCIS) is one type of breast cancer. Clusters of microcalcifications (MCCs) are symptoms of DCIS that are recognized by mammography. Selection of a robust feature vector is the process of selecting an optimal subset of features from a large number of available features in a given problem domain, after feature extraction and before any classification scheme. Feature selection reduces the feature space, which improves the performance of the classifier and decreases the computational burden imposed by using many features. Selection of an optimal subset of features from a large number of available features in a given problem domain is a difficult search problem: for n features, the total number of possible subsets of features is 2^n, so selecting an optimal subset of features belongs to the category of NP-hard problems. In this paper, an attempt is made to find the optimal subset of MCCs features from all possible subsets of features using a genetic algorithm (GA), particle swarm optimization (PSO) and biogeography-based optimization (BBO). For simulation, a total of 380 benign and malignant MCCs samples have been selected from mammogram images of the DDSM database. A total of 50 features extracted from benign and malignant MCCs samples are used in this study. In these algorithms, the fitness function is the correct classification rate of the classifier. A support vector machine is used as the classifier. From the experimental results, it is observed that the performance of the PSO-based and BBO-based algorithms in selecting an optimal subset of features for classifying MCCs as benign or malignant is better than that of the GA-based algorithm.
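
    The fitness evaluation described above (correct classification rate of a support vector machine on a candidate feature subset) can be sketched as follows; the data are random placeholders and the GA/PSO/BBO search loops themselves are omitted:

      # Sketch: fitness of a candidate feature subset = cross-validated correct
      # classification rate of an SVM restricted to those features.
      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(3)
      X = rng.normal(size=(380, 50))          # 380 MCC samples, 50 extracted features
      y = rng.integers(0, 2, size=380)        # 0 = benign, 1 = malignant (placeholder)

      def fitness(mask):
          """mask: boolean vector of length 50 selecting a feature subset."""
          if not mask.any():
              return 0.0
          return cross_val_score(SVC(kernel="rbf"), X[:, mask], y, cv=5).mean()

      candidate = rng.random(50) < 0.5        # one individual of a GA/PSO/BBO population
      print(f"fitness of candidate subset: {fitness(candidate):.3f}")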

  20. Lung Cancer Classification Employing Proposed Real Coded Genetic Algorithm Based Radial Basis Function Neural Network Classifier

    PubMed Central

    Deepa, S. N.

    2016-01-01

    A proposed real coded genetic algorithm based radial basis function neural network classifier is employed to perform effective classification of healthy and cancer-affected lung images. The Real Coded Genetic Algorithm (RCGA) is proposed to overcome the Hamming Cliff problem encountered with the Binary Coded Genetic Algorithm (BCGA). The Radial Basis Function Neural Network (RBFNN) classifier is chosen as the classifier model because of its Gaussian kernel function and its effective learning process, which avoids local and global minima problems and enables faster convergence. This paper specifically focuses on tuning the weights and bias of the RBFNN classifier employing the proposed RCGA. The operators used in the RCGA enable the algorithm to compute weight and bias values such that a minimum Mean Square Error (MSE) is obtained. With both healthy and cancerous lung images from the Lung Image Database Consortium (LIDC) database and a real-time database, it is noted that the proposed RCGA-based RBFNN classifier performs effective classification of healthy lung tissues and of cancer-affected lung nodules. The classification accuracy computed using the proposed approach is noted to be higher in comparison with that of classifiers proposed earlier in the literature. PMID:28050198

  1. Lung Cancer Classification Employing Proposed Real Coded Genetic Algorithm Based Radial Basis Function Neural Network Classifier.

    PubMed

    Selvakumari Jeya, I Jasmine; Deepa, S N

    2016-01-01

    A proposed real coded genetic algorithm based radial basis function neural network classifier is employed to perform effective classification of healthy and cancer-affected lung images. The Real Coded Genetic Algorithm (RCGA) is proposed to overcome the Hamming Cliff problem encountered with the Binary Coded Genetic Algorithm (BCGA). The Radial Basis Function Neural Network (RBFNN) classifier is chosen as the classifier model because of its Gaussian kernel function and its effective learning process, which avoids local and global minima problems and enables faster convergence. This paper specifically focuses on tuning the weights and bias of the RBFNN classifier employing the proposed RCGA. The operators used in the RCGA enable the algorithm to compute weight and bias values such that a minimum Mean Square Error (MSE) is obtained. With both healthy and cancerous lung images from the Lung Image Database Consortium (LIDC) database and a real-time database, it is noted that the proposed RCGA-based RBFNN classifier performs effective classification of healthy lung tissues and of cancer-affected lung nodules. The classification accuracy computed using the proposed approach is noted to be higher in comparison with that of classifiers proposed earlier in the literature.

  2. GACEM: Genetic Algorithm Based Classifier Ensemble in a Multi-sensor System

    PubMed Central

    Xu, Rongwu; He, Lin

    2008-01-01

    Multi-sensor systems (MSS) have been increasingly applied in pattern classification, while searching for the optimal classification framework is still an open problem. The development of the classifier ensemble seems to provide a promising solution. The classifier ensemble is a learning paradigm where many classifiers are jointly used to solve a problem, and it has been proven an effective method for enhancing classification ability. In this paper, by introducing the concepts of Meta-feature (MF) and Trans-function (TF) for describing the relationship between the nature and the measurement of the observed phenomenon, classification in a multi-sensor system can be unified in the classifier ensemble framework. Then an approach called Genetic Algorithm based Classifier Ensemble in Multi-sensor system (GACEM) is presented, where a genetic algorithm is utilized to optimize both the selection of feature subsets and the decision combination simultaneously. GACEM first trains a number of classifiers based on different combinations of feature vectors and then selects the classifiers whose weights are higher than a pre-set threshold to make up the ensemble. An empirical study shows that, compared with conventional feature-level voting and decision-level voting, GACEM not only achieves better and more robust performance but also simplifies the system markedly. PMID:27873866

  3. Learning Likelihoods for Labeling (L3): A General Multi-Classifier Segmentation Algorithm

    PubMed Central

    Weisenfeld, Neil I.; Warfield, Simon K.

    2013-01-01

    PURPOSE To develop an MRI segmentation method for brain tissues, regions, and substructures that yields improved classification accuracy. Current brain segmentation strategies include two complementary approaches, multi-spectral classification and multi-template label fusion, each with individual strengths and weaknesses. METHODS We propose here a novel multi-classifier fusion algorithm with the advantages of both types of segmentation strategy. We illustrate and validate this algorithm using a group of 14 expertly hand-labeled images. RESULTS Our method generated segmentations of cortical and subcortical structures that were more similar to hand-drawn segmentations than majority vote label fusion or a recently published intensity/label fusion method. CONCLUSIONS We have presented a novel, general segmentation algorithm with the advantages of both statistical classifiers and label fusion techniques. PMID:22003715
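
    For orientation, a sketch of the simplest fusion rule such methods are compared against, a per-voxel majority vote over several candidate label maps; the proposed likelihood-learning step is not reproduced here:

      # Sketch: majority-vote label fusion across multiple segmentations of one image.
      import numpy as np

      rng = np.random.default_rng(4)
      n_classifiers, n_voxels, n_labels = 5, 1000, 4
      # each row: labels proposed for every voxel by one classifier / registered template
      votes = rng.integers(0, n_labels, size=(n_classifiers, n_voxels))

      def majority_vote(votes):
          """Pick, for each voxel, the label proposed most often."""
          counts = np.apply_along_axis(np.bincount, 0, votes, minlength=n_labels)
          return counts.argmax(axis=0)

      fused = majority_vote(votes)
      print(fused.shape, fused[:10])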

  4. A microwave radiometer weather-correcting sea ice algorithm

    NASA Technical Reports Server (NTRS)

    Walters, J. M.; Ruf, C.; Swift, C. T.

    1987-01-01

    A new algorithm for estimating the proportions of the multiyear and first-year sea ice types under variable atmospheric and sea surface conditions is presented, which uses all six channels of the SMMR. The algorithm is specifically tuned to derive sea ice parameters while accepting error in the auxiliary parameters of surface temperature, ocean surface wind speed, atmospheric water vapor, and cloud liquid water content. Not only does the algorithm naturally correct for changes in these weather conditions, but it retrieves sea ice parameters to the extent that gross errors in atmospheric conditions propagate only small errors into the sea ice retrievals. A preliminary evaluation indicates that the weather-correcting algorithm provides a better data product than the 'UMass-AES' algorithm, whose quality has been cross checked with independent surface observations. The algorithm performs best when the sea ice concentration is less than 20 percent.

  5. Stochastic Formal Correctness of Numerical Algorithms

    NASA Technical Reports Server (NTRS)

    Daumas, Marc; Lester, David; Martin-Dorel, Erik; Truffert, Annick

    2009-01-01

    We provide a framework to bound the probability that accumulated errors in numerical algorithms were never above a given threshold. Such algorithms are used, for example, in aircraft and nuclear power plants. This report contains simple formulas based on Lévy's and Markov's inequalities, and it presents a formal theory of random variables with a special focus on producing concrete results. We selected four very common applications that fit in our framework and cover the common practices of systems that evolve for a long time. We compute the number of bits that remain continuously significant in the first two applications with a probability of failure around one out of a billion, where worst case analysis considers that no significant bit remains. We use PVS, as such formal tools force explicit statement of all hypotheses and prevent incorrect uses of theorems.
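
    As a reminder of the style of bound involved (a generic illustration, not the paper's actual formulas), Markov's inequality applied to an accumulated non-negative error S after n operations gives, in LaTeX notation:

      P(S \ge t) \;\le\; \frac{\mathbb{E}[S]}{t}, \qquad \mathbb{E}[S] \le n\,\varepsilon \;\Rightarrow\; P(S \ge t) \;\le\; \frac{n\,\varepsilon}{t}

    where \varepsilon is an assumed bound on the expected per-operation error; the report's actual results refine this with Lévy's inequality and a formalized theory of random variables in PVS.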

  6. An efficient fitness function in genetic algorithm classifier for Landuse recognition on satellite images.

    PubMed

    Yang, Ming-Der; Yang, Yeh-Fen; Su, Tung-Ching; Huang, Kai-Siang

    2014-01-01

    A genetic algorithm (GA) is designed to search for the optimal solution by weeding out the worse gene strings based on a fitness function. GA has demonstrated effectiveness in solving the problem of unsupervised image classification, one of the optimization problems in a large domain. Many indices and hybrid algorithms have been built as fitness functions in GA classifiers to improve classification accuracy. This paper proposes a new index, DBFCMI, obtained by integrating two common indices, DBI and FCMI, in a GA classifier to improve the accuracy and robustness of classification. For the purpose of testing and verifying DBFCMI, well-known indices such as DBI, FCMI, and PASI are employed as well for comparison. A SPOT-5 satellite image of a partial watershed of the Shihmen reservoir is adopted as the test material for land-use classification. As a result, DBFCMI achieves higher overall accuracy and robustness than the other indices in unsupervised classification.

  7. Adaptive bad pixel correction algorithm for IRFPA based on PCNN

    NASA Astrophysics Data System (ADS)

    Leng, Hanbing; Zhou, Zuofeng; Cao, Jianzhong; Yi, Bo; Yan, Aqi; Zhang, Jian

    2013-10-01

    Bad pixels and response non-uniformity are the primary obstacles when an IRFPA is used in different thermal imaging systems. The bad pixels of an IRFPA include fixed bad pixels and random bad pixels. The former are caused by material or manufacturing defects and their positions are always fixed; the latter are caused by temperature drift and their positions are always changing. Traditional radiometric calibration-based bad pixel detection and compensation algorithms are only valid for the fixed bad pixels. Scene-based bad pixel correction algorithms are the effective way to eliminate both kinds of bad pixels. Currently, the most widely used scene-based bad pixel correction algorithm is based on the adaptive median filter (AMF). In this algorithm, bad pixels are regarded as image noise and then replaced by the filtered value. However, missed corrections and false corrections often happen when the AMF is used to handle complex infrared scenes. To solve this problem, a new adaptive bad pixel correction algorithm based on pulse coupled neural networks (PCNN) is proposed. Potential bad pixels are detected by the PCNN in the first step; then image sequences are used periodically to confirm the real bad pixels and exclude the false ones; finally, bad pixels are replaced by the filtered result. With real infrared images obtained from a camera, the experimental results show the effectiveness of the proposed algorithm.
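
    A compact sketch of the adaptive-median-filter baseline referred to above: a pixel flagged as bad is replaced by the median of its local window (the PCNN detection and the sequence-based confirmation stages are not reproduced):

      # Sketch: replace flagged bad pixels with the median of a local window.
      import numpy as np

      def correct_bad_pixels(frame, bad_mask, window=3):
          """frame: 2-D IR image; bad_mask: boolean array marking bad pixels."""
          corrected = frame.copy()
          r = window // 2
          padded = np.pad(frame, r, mode="reflect")
          for i, j in zip(*np.nonzero(bad_mask)):
              patch = padded[i:i + window, j:j + window]   # window centred on (i, j)
              corrected[i, j] = np.median(patch)
          return corrected

      frame = np.random.default_rng(5).normal(100.0, 5.0, size=(8, 8))
      bad = np.zeros_like(frame, dtype=bool); bad[2, 3] = True
      print(correct_bad_pixels(frame, bad)[2, 3])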

  8. Algorithmic scatter correction in dual-energy digital mammography

    SciTech Connect

    Chen, Xi; Mou, Xuanqin; Nishikawa, Robert M.; Lau, Beverly A.; Chan, Suk-tak; Zhang, Lei

    2013-11-15

    Purpose: Small calcifications are often the earliest and the main indicator of breast cancer. Dual-energy digital mammography (DEDM) has been considered a promising technique to improve the detectability of calcifications since it can be used to suppress the contrast between adipose and glandular tissues of the breast. X-ray scatter leads to erroneous calculations of the DEDM image. Although the pinhole-array interpolation method can estimate scattered radiation, it requires extra exposures to measure the scatter and apply the correction. The purpose of this work is to design an algorithmic method for scatter correction in DEDM without extra exposures. Methods: In this paper, a scatter correction method for DEDM was developed based on the knowledge that scattered radiation has small spatial variation and that the majority of pixels in a mammogram are noncalcification pixels. The scatter fraction was estimated in the DEDM calculation and the measured scatter fraction was used to remove scatter from the image. The scatter correction method was implemented on a commercial full-field digital mammography system with a breast tissue equivalent phantom and a calcification phantom. The authors also implemented the pinhole-array interpolation scatter correction method on the system. Phantom results for both methods are presented and discussed. The authors compared the background DE calcification signals and the contrast-to-noise ratio (CNR) of calcifications in three DE calcification images: the image without scatter correction, the image with scatter correction using the pinhole-array interpolation method, and the image with scatter correction using the authors' algorithmic method. Results: The authors' results show that the resultant background DE calcification signal can be reduced. The root-mean-square of the background DE calcification signal of 1962 μm with scatter-uncorrected data was reduced to 194 μm after scatter correction using the authors' algorithmic method. The range of

  9. Classifying spatially heterogeneous wetland communities using machine learning algorithms and spectral and textural features.

    PubMed

    Szantoi, Zoltan; Escobedo, Francisco J; Abd-Elrahman, Amr; Pearlstine, Leonard; Dewitt, Bon; Smith, Scot

    2015-05-01

    Mapping of wetlands (marsh vs. swamp vs. upland) is a common remote sensing application. Yet, discriminating between similar freshwater communities such as graminoid/sedge from remotely sensed imagery is more difficult. Most of this activity has been performed using medium to low resolution imagery. There are only a few studies using high spatial resolution imagery and machine learning image classification algorithms for mapping heterogeneous wetland plant communities. This study addresses this void by analyzing whether machine learning classifiers such as decision trees (DT) and artificial neural networks (ANN) can accurately classify graminoid/sedge communities using high resolution aerial imagery and image texture data in the Everglades National Park, Florida. In addition to spectral bands, the normalized difference vegetation index, and first- and second-order texture features derived from the near-infrared band were analyzed. Classifier accuracies were assessed using confusion tables and the calculated kappa coefficients of the resulting maps. The results indicated that an ANN (multilayer perceptron based on backpropagation) algorithm produced a statistically significantly higher accuracy (82.04%) than the DT (QUEST) algorithm (80.48%) or the maximum likelihood (80.56%) classifier (α<0.05). Findings show that using multiple window sizes provided the best results. First-order texture features also provided computational advantages and results that were not significantly different from those using second-order texture features.

  10. Algorithms for Image Analysis and Combination of Pattern Classifiers with Application to Medical Diagnosis

    NASA Astrophysics Data System (ADS)

    Georgiou, Harris

    2009-10-01

    Medical Informatics and the application of modern signal processing in the assistance of the diagnostic process in medical imaging is one of the more recent and active research areas today. This thesis addresses a variety of issues related to the general problem of medical image analysis, specifically in mammography, and presents a series of algorithms and design approaches for all the intermediate levels of a modern system for computer-aided diagnosis (CAD). The diagnostic problem is analyzed with a systematic approach, first defining the imaging characteristics and features that are relevant to probable pathology in mammograms. Next, these features are quantified and fused into new, integrated radiological systems that exhibit embedded digital signal processing, in order to improve the final result and minimize the radiological dose for the patient. In a higher level, special algorithms are designed for detecting and encoding these clinically interesting imaging features, in order to be used as input to advanced pattern classifiers and machine learning models. Finally, these approaches are extended in multi-classifier models under the scope of Game Theory and optimum collective decision, in order to produce efficient solutions for combining classifiers with minimum computational costs for advanced diagnostic systems. The material covered in this thesis is related to a total of 18 published papers, 6 in scientific journals and 12 in international conferences.

  11. Genetic algorithm-based classifiers fusion for multisensor activity recognition of elderly people.

    PubMed

    Chernbumroong, Saisakul; Cang, Shuang; Yu, Hongnian

    2015-01-01

    Activity recognition of an elderly person can be used to provide information and intelligent services to health care professionals, carers, elderly people, and their families so that elderly people can remain at home independently. This study investigates the use and contribution of wrist-worn multisensors for activity recognition. We found that accelerometers are the most important sensors and that heart rate data can be used to boost classification of activities with diverse heart rates. We propose a genetic algorithm-based fusion weight selection (GAFW) approach which utilizes a GA to find fusion weights. For all possible classifier combinations and fusion methods, the study shows that 98% of the time GAFW can achieve equal or higher accuracy than the best classifier within the group.

  12. Algorithm for Atmospheric Corrections of Aircraft and Satellite Imagery

    NASA Technical Reports Server (NTRS)

    Fraser, Robert S.; Kaufman, Yoram J.; Ferrare, Richard A.; Mattoo, Shana

    1989-01-01

    A simple and fast atmospheric correction algorithm is described which is used to correct radiances of scattered sunlight measured by aircraft and/or satellite above a uniform surface. The atmospheric effect, the basic equations, a description of the computational procedure, and a sensitivity study are discussed. The program is designed to take the measured radiances, view and illumination directions, and the aerosol and gaseous absorption optical thickness to compute the radiance just above the surface, the irradiance on the surface, and surface reflectance. Alternatively, the program will compute the upward radiance at a specific altitude for a given surface reflectance, view and illumination directions, and aerosol and gaseous absorption optical thickness. The algorithm can be applied for any view and illumination directions and any wavelength in the range 0.48 micron to 2.2 micron. The relation between the measured radiance and surface reflectance, which is expressed as a function of atmospheric properties and measurement geometry, is computed using a radiative transfer routine. The results of the computations are presented in a table which forms the basis of the correction algorithm. The algorithm can be used for atmospheric corrections in the presence of a rural aerosol. The sensitivity of the derived surface reflectance to uncertainties in the model and input data is discussed.

  13. Algorithm for atmospheric corrections of aircraft and satellite imagery

    NASA Technical Reports Server (NTRS)

    Fraser, R. S.; Ferrare, R. A.; Kaufman, Y. J.; Markham, B. L.; Mattoo, S.

    1992-01-01

    A simple and fast atmospheric correction algorithm is described which is used to correct radiances of scattered sunlight measured by aircraft and/or satellite above a uniform surface. The atmospheric effect, the basic equations, a description of the computational procedure, and a sensitivity study are discussed. The program is designed to take the measured radiances, view and illumination directions, and the aerosol and gaseous absorption optical thickness to compute the radiance just above the surface, the irradiance on the surface, and surface reflectance. Alternatively, the program will compute the upward radiance at a specific altitude for a given surface reflectance, view and illumination directions, and aerosol and gaseous absorption optical thickness. The algorithm can be applied for any view and illumination directions and any wavelength in the range 0.48 micron to 2.2 microns. The relation between the measured radiance and surface reflectance, which is expressed as a function of atmospheric properties and measurement geometry, is computed using a radiative transfer routine. The results of the computations are presented in a table which forms the basis of the correction algorithm. The algorithm can be used for atmospheric corrections in the presence of a rural aerosol. The sensitivity of the derived surface reflectance to uncertainties in the model and input data is discussed.

  14. Using Hierarchical Time Series Clustering Algorithm and Wavelet Classifier for Biometric Voice Classification

    PubMed Central

    Fong, Simon

    2012-01-01

    Voice biometrics has a long history in biosecurity applications such as verification and identification based on characteristics of the human voice. The other application, voice classification, which has an important role in grouping unlabelled voice samples, has not been widely studied in research. Lately voice classification has been found useful in phone monitoring, classifying speakers' gender, ethnicity and emotion states, and so forth. In this paper, a collection of computational algorithms is proposed to support voice classification; the algorithms are a combination of hierarchical clustering, dynamic time warping, discrete wavelet transform, and decision trees. The proposed algorithms are relatively more transparent and interpretable than the existing ones, though many techniques such as Artificial Neural Networks, Support Vector Machines, and Hidden Markov Models (which inherently function like a black box) have been applied for voice verification and voice identification. Two datasets, one generated synthetically and the other empirically collected from a past voice recognition experiment, are used to verify and demonstrate the effectiveness of our proposed voice classification algorithm. PMID:22619492

  15. Algorithm for correcting optimization convergence errors in Eclipse.

    PubMed

    Zacarias, Albert S; Mills, Michael D

    2009-10-14

    IMRT plans generated in Eclipse use a fast algorithm to evaluate dose for optimization and a more accurate algorithm for a final dose calculation, the Analytical Anisotropic Algorithm. The use of a fast optimization algorithm introduces optimization convergence errors into an IMRT plan. Eclipse has a feature where optimization may be performed on top of an existing base plan. This feature allows for the possibility of arriving at a recursive solution to optimization that relies on the accuracy of the final dose calculation algorithm and not the optimizer algorithm. When an IMRT plan is used as a base plan for a second optimization, the second optimization can compensate for heterogeneity and modulator errors in the original base plan. Plans with the same field arrangement as the initial base plan may be added together by adding the initial plan optimal fluence to the dose correcting plan optimal fluence. A simple procedure to correct for optimization errors is presented that may be implemented in the Eclipse treatment planning system, along with an Excel spreadsheet to add optimized fluence maps together.

  16. Comparison of inhomogeneity correction algorithms in small photon fields.

    PubMed

    Jones, Andrew O; Das, Indra J

    2005-03-01

    Algorithms such as convolution superposition, Batho, and equivalent pathlength, which were originally developed and validated for conventional treatments under conditions of electronic equilibrium using relatively large fields greater than 5 x 5 cm2, are routinely employed for inhomogeneity corrections. Modern day treatments using intensity modulated radiation therapy employ small beamlets characterized by the resolution of the multileaf collimator. These beamlets, in general, do not provide electronic equilibrium even in a homogeneous medium, and these effects are exaggerated in media with inhomogeneities. Monte Carlo simulations are becoming a tool of choice in understanding the dosimetry of small photon fields as they encounter low density media. In this study, depth dose data from the Monte Carlo simulations are compared to the results of the convolution superposition, Batho, and equivalent pathlength algorithms. The central axis dose within the low-density inhomogeneity as calculated by Monte Carlo simulation and convolution superposition decreases for small field sizes, whereas it increases using the Batho and equivalent pathlength algorithms. The dose perturbation factor (DPF) is defined as the ratio of the dose at a point within the inhomogeneity to the dose at the same point in a homogeneous phantom. The dose correction factor (DCF) is defined as the ratio of the dose calculated by an algorithm at a point to the Monte Carlo derived dose at the same point. The DPF is noted to be significant for small fields and low density for all algorithms. Comparison of the algorithms with Monte Carlo simulations is reflected in the DCF, which is close to 1.0 for the convolution-superposition algorithm. The Batho and equivalent pathlength algorithms differ significantly from Monte Carlo simulation for most field sizes and densities. Convolution superposition shows better agreement with Monte Carlo data than the Batho or equivalent pathlength corrections. As the field size increases the
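
    Written out, the two ratios defined above are (a restatement in LaTeX notation, with D denoting dose at the point of interest):

      \mathrm{DPF} = \frac{D_{\text{inhomogeneous medium}}}{D_{\text{homogeneous phantom}}}, \qquad \mathrm{DCF} = \frac{D_{\text{algorithm}}}{D_{\text{Monte Carlo}}}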

  17. A new training algorithm using artificial neural networks to classify gender-specific dynamic gait patterns.

    PubMed

    Andrade, Andre; Costa, Marcelo; Paolucci, Leopoldo; Braga, Antônio; Pires, Flavio; Ugrinowitsch, Herbert; Menzel, Hans-Joachim

    2015-01-01

    The aim of this study was to present a new training algorithm using artificial neural networks called multi-objective least absolute shrinkage and selection operator (MOBJ-LASSO), applied to the classification of dynamic gait patterns. The movement pattern is identified by 20 characteristics from the three components of the ground reaction force, which are used as input information for the neural networks in gender-specific gait classification. The classification performance of MOBJ-LASSO (97.4%) and of the multi-objective algorithm (MOBJ) (97.1%) is similar, but the MOBJ-LASSO algorithm achieved better results than MOBJ because it is able to eliminate inputs and automatically select the parameters of the neural network. Thus, it is an effective tool for data mining using neural networks. From the 20 inputs used for training, MOBJ-LASSO selected the first and second peaks of the vertical force and the force peak in the antero-posterior direction as the variables that classify the gait patterns of the different genders.

  18. Modification of the random forest algorithm to avoid statistical dependence problems when classifying remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Cánovas-García, Fulgencio; Alonso-Sarría, Francisco; Gomariz-Castillo, Francisco; Oñate-Valdivieso, Fernando

    2017-06-01

    Random forest is a classification technique widely used in remote sensing. One of its advantages is that it produces an estimation of classification accuracy based on the so-called out-of-bag cross-validation method. It is usually assumed that such an estimation is not biased and may be used instead of validation based on an external data-set or a cross-validation external to the algorithm. In this paper we show that this is not necessarily the case when classifying remote sensing imagery using training areas with several pixels or objects. According to our results, out-of-bag cross-validation clearly overestimates accuracy, both overall and per class. The reason is that, in a training patch, pixels or objects are not independent (from a statistical point of view) of each other; however, they are split by bootstrapping into in-bag and out-of-bag as if they were really independent. We believe that putting whole patches, rather than pixels/objects, in one or the other set would produce a less biased out-of-bag cross-validation. To deal with the problem, we propose a modification of the random forest algorithm to split training patches instead of the pixels (or objects) that compose them. This modified algorithm does not overestimate accuracy and has no lower predictive capability than the original. When its results are validated with an external data-set, the accuracy is not different from that obtained with the original algorithm. We analysed three remote sensing images with different classification approaches (pixel and object based); in the three cases reported, the modification we propose produces a less biased accuracy estimation.
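
    The remedy suggested above, keeping all pixels or objects of one training patch on the same side of every split, resembles group-wise cross-validation; a sketch using scikit-learn's GroupKFold on synthetic data (the authors' modified in-bag/out-of-bag bootstrap is not reproduced):

      # Sketch: group-aware validation so pixels from one training patch never end up
      # on both the training and the evaluation side of a split.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import GroupKFold, cross_val_score

      rng = np.random.default_rng(6)
      n_patches, pixels_per_patch, n_bands = 30, 20, 6
      X = rng.normal(size=(n_patches * pixels_per_patch, n_bands))
      y = rng.integers(0, 3, size=n_patches * pixels_per_patch)        # land-cover class
      groups = np.repeat(np.arange(n_patches), pixels_per_patch)       # patch id per pixel

      scores = cross_val_score(RandomForestClassifier(n_estimators=200),
                               X, y, groups=groups, cv=GroupKFold(n_splits=5))
      print("patch-level CV accuracy:", scores.mean())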

  19. Embedded EMD algorithm within an FPGA-based design to classify nonlinear SDOF systems

    NASA Astrophysics Data System (ADS)

    Jones, Jonathan D.; Pei, Jin-Song; Wright, Joseph P.; Tull, Monte P.

    2010-04-01

    Compared with traditional microprocessor-based systems, rapidly advancing field-programmable gate array (FPGA) technology offers a more powerful, efficient and flexible hardware platform. An FPGA and microprocessor (i.e., hardware and software) co-design is developed to classify three types of nonlinearities (including linear, hardening and softening) of a single-degree-of-freedom (SDOF) system subjected to free vibration. This significantly advances the team's previous work on using FPGAs for wireless structural health monitoring. The classification is achieved by embedding two important algorithms - empirical mode decomposition (EMD) and backbone curve analysis. Design considerations to embed EMD in FPGA and microprocessor are discussed. In particular, the implementation of cubic spline fitting and the challenges encountered using both hardware and software environments are discussed. The backbone curve technique is fully implemented within the FPGA hardware and used to extract instantaneous characteristics from the uniformly distributed data sets produced by the EMD algorithm as presented in a previous SPIE conference by the team. An off-the-shelf high-level abstraction tool along with the MATLAB/Simulink environment is utilized to manage the overall FPGA and microprocessor co-design. Given the limited computational resources of an embedded system, we strive for a balance between the maximization of computational efficiency and minimization of resource utilization. The value of this study lies well beyond merely programming existing algorithms in hardware and software. Among others, extensive and intensive judgment is exercised involving experiences and insights with these algorithms, which renders processed instantaneous characteristics of the signals that are well-suited for wireless transmission.

  20. A kinetic model-based algorithm to classify NGS short reads by their allele origin.

    PubMed

    Marinoni, Andrea; Rizzo, Ettore; Limongelli, Ivan; Gamba, Paolo; Bellazzi, Riccardo

    2015-02-01

    Genotyping Next Generation Sequencing (NGS) data of a diploid genome aims to assign the zygosity of identified variants through comparison with a reference genome. Current methods typically employ probabilistic models that rely on the pileup of bases at each locus and on a priori knowledge. We present a new algorithm, called Kimimila (KInetic Modeling based on InforMation theory to Infer Labels of Alleles), which is able to assign reads to alleles by using a distance geometry approach and to infer the variant genotypes accurately, without any kind of assumption. The performance of the model has been assessed on simulated and real data of the 1000 Genomes Project and the results have been compared with several commonly used genotyping methods, i.e., GATK, Samtools, VarScan, FreeBayes and Atlas2. Although our algorithm does not make use of a priori knowledge, the percentage of correctly genotyped variants is comparable to that of these algorithms. Furthermore, our method allows the user to split the read pool depending on the inferred allele origin.

  1. Achieving Algorithmic Resilience for Temporal Integration through Spectral Deferred Corrections

    SciTech Connect

    Grout, R. W.; Kolla, H.; Minion, M. L.; Bell, J. B.

    2015-04-06

    Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual on the first correction iteration and changes slowly between successive iterations. We demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.

  2. Looking for Childhood-Onset Schizophrenia: Diagnostic Algorithms for Classifying Children and Adolescents with Psychosis

    PubMed Central

    Kataria, Rachna; Gochman, Peter; Dasgupta, Abhijit; Malley, James D.; Rapoport, Judith; Gogtay, Nitin

    2014-01-01

    Objective: Among children <13 years of age with persistent psychosis and contemporaneous decline in functioning, it is often difficult to determine if the diagnosis of childhood-onset schizophrenia (COS) is warranted. Despite decades of experience, we have up to a 44% false positive screening diagnosis rate among patients identified as having probable or possible COS; final diagnoses are made following inpatient hospitalization and medication washout. Because our lengthy medication-free observation is not feasible in clinical practice, we constructed diagnostic classifiers using screening data to assist clinicians practicing in the community or academic centers. Methods: We used cross-validation, logistic regression, receiver operating characteristic (ROC) analysis, and random forest to determine the best algorithm for classifying COS (n=85) versus histories of psychosis and impaired functioning in children and adolescents who, at screening, were considered likely to have COS, but who did not meet diagnostic criteria for schizophrenia after medication washout and inpatient observation (n=53). We used demographics, clinical history measures, intelligence quotient (IQ) and screening rating scales, and number of typical and atypical antipsychotic medications as our predictors. Results: Logistic regression models using nine, four, and two predictors performed well with positive predictive values >90%, overall accuracy >77%, and areas under the curve (AUCs) >86%. Conclusions: COS can be distinguished from alternate disorders with psychosis in children and adolescents; greater levels of positive and negative symptoms and lower levels of depression combine to make COS more likely. We include a worksheet so that clinicians in the community and academic centers can predict the probability that a young patient may be schizophrenic, using only two ratings. PMID:25019955
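    For readers who want to reproduce the general workflow (not the authors' exact models or predictors), a minimal cross-validated logistic-regression sketch with ROC analysis might look as follows; the feature matrix and labels here are synthetic placeholders.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import confusion_matrix, roc_auc_score
    from sklearn.model_selection import StratifiedKFold, cross_val_predict

    rng = np.random.default_rng(0)
    X = rng.normal(size=(138, 2))        # e.g., two screening rating-scale scores (synthetic)
    y = rng.integers(0, 2, size=138)     # 1 = COS, 0 = alternate psychotic disorder (synthetic)

    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    prob = cross_val_predict(LogisticRegression(), X, y, cv=cv, method="predict_proba")[:, 1]
    print("AUC:", roc_auc_score(y, prob))
    tn, fp, fn, tp = confusion_matrix(y, prob > 0.5).ravel()
    print("PPV:", tp / (tp + fp), "accuracy:", (tp + tn) / len(y))
    ```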

  3. Comparative analysis of instance selection algorithms for instance-based classifiers in the context of medical decision support

    NASA Astrophysics Data System (ADS)

    Mazurowski, Maciej A.; Malof, Jordan M.; Tourassi, Georgia D.

    2011-01-01

    When constructing a pattern classifier, it is important to make best use of the instances (a.k.a. cases, examples, patterns or prototypes) available for its development. In this paper we present an extensive comparative analysis of algorithms that, given a pool of previously acquired instances, attempt to select those that will be the most effective to construct an instance-based classifier in terms of classification performance, time efficiency and storage requirements. We evaluate seven previously proposed instance selection algorithms and compare their performance to simple random selection of instances. We perform the evaluation using k-nearest neighbor classifier and three classification problems: one with simulated Gaussian data and two based on clinical databases for breast cancer detection and diagnosis, respectively. Finally, we evaluate the impact of the number of instances available for selection on the performance of the selection algorithms and conduct initial analysis of the selected instances. The experiments show that for all investigated classification problems, it was possible to reduce the size of the original development dataset to less than 3% of its initial size while maintaining or improving the classification performance. Random mutation hill climbing emerges as the superior selection algorithm. Furthermore, we show that some previously proposed algorithms perform worse than random selection. Regarding the impact of the number of instances available for the classifier development on the performance of the selection algorithms, we confirm that the selection algorithms are generally more effective as the pool of available instances increases. In conclusion, instance selection is generally beneficial for instance-based classifiers as it can improve their performance, reduce their storage requirements and improve their response time. However, choosing the right selection algorithm is crucial.
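    A hedged sketch of the random mutation hill climbing (RMHC) idea for instance selection with a 1-nearest-neighbor classifier is given below; the mutation operator, subset size, and acceptance rule are illustrative choices, not the exact algorithm evaluated in the paper.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def rmhc_select(X_train, y_train, X_val, y_val, n_keep=50, n_iter=1000, seed=0):
        rng = np.random.default_rng(seed)
        selected = rng.choice(len(X_train), size=n_keep, replace=False)

        def score(idx):
            knn = KNeighborsClassifier(n_neighbors=1).fit(X_train[idx], y_train[idx])
            return knn.score(X_val, y_val)

        best = score(selected)
        for _ in range(n_iter):
            candidate = selected.copy()
            # Mutate: swap one selected instance for one currently unselected instance.
            out_pos = rng.integers(n_keep)
            pool = np.setdiff1d(np.arange(len(X_train)), candidate)
            candidate[out_pos] = rng.choice(pool)
            s = score(candidate)
            if s >= best:                # accept non-worsening mutations
                selected, best = candidate, s
        return selected, best
    ```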

  4. Scattering correction through a space-variant blind deconvolution algorithm

    NASA Astrophysics Data System (ADS)

    Benno, Koberstein-Schwarz; Lars, Omlor; Tobias, Schmitt-Manderbach; Timo, Mappes; Vasilis, Ntziachristos

    2016-09-01

    Scattering within biological samples limits the imaging depth and the resolution in microscopy. We present a prior and regularization approach for blind deconvolution algorithms to correct for the influence of scattering and increase the imaging depth and resolution. The effect of the prior is demonstrated on a three-dimensional image stack of a zebrafish embryo captured with a selective plane illumination microscope. Blind deconvolution algorithms model the recorded image as a convolution between the distribution of fluorophores and a point spread function (PSF). Our prior uses image information from adjacent z-planes to estimate the unknown blur in tissue. The increased size of the PSF due to the cascading effect of scattering in deeper tissue is accounted for by a depth-adaptive regularizer model. In a zebrafish sample, we were able to extend by around 30 μm the depth at which scattering begins to have a significant effect on image quality.

  5. Aerosol Retrieval and Atmospheric Correction Algorithms for EPIC

    NASA Technical Reports Server (NTRS)

    Wang, Yujie; Lyapustin, Alexei; Marshak, Alexander; Korkin, Sergey; Herman, Jay

    2011-01-01

    EPIC is a multi-spectral imager onboard planned Deep Space Climate ObserVatoRy (DSCOVR) designed for observations of the full illuminated disk of the Earth with high temporal and coarse spatial resolution (10 km) from Lagrangian L1 point. During the course of the day, EPIC will view the same Earth surface area in the full range of solar and view zenith angles at equator with fixed scattering angle near the backscattering direction. This talk will describe a new aerosol retrieval/atmospheric correction algorithm developed for EPIC and tested with EPIC Simulator data. This algorithm uses the time series approach and consists of two stages: the first stage is designed to periodically re-initialize the surface spectral bidirectional reflectance (BRF) on stable low AOD days. Such days can be selected based on the same measured reflectance between the morning and afternoon reciprocal view geometries of EPIC. On the second stage, the algorithm will monitor the diurnal cycle of aerosol optical depth and fine mode fraction based on the known spectral surface BRF. Testing of the developed algorithm with simulated EPIC data over continental USA showed a good accuracy of AOD retrievals (10-20%) except over very bright surfaces.

  6. Combining support vector machine with genetic algorithm to classify ultrasound breast tumor images.

    PubMed

    Wu, Wen-Jie; Lin, Shih-Wei; Moon, Woo Kyung

    2012-12-01

    To improve the classification accuracy and decrease the time needed for extracting features and finding a (near) optimal classification model of an ultrasound breast tumor image computer-aided diagnosis (CAD) system, we propose an approach which simultaneously combines feature selection and parameter setting in this study. In our approach, ultrasound breast tumors were segmented automatically by a level set method. The auto-covariance texture features and morphologic features were first extracted, and a genetic algorithm was then used to detect significant features and determine the near-optimal parameters for the support vector machine (SVM) that identifies the tumor as benign or malignant. The proposed CAD system can differentiate benign from malignant breast tumors with high accuracy and short feature extraction time. According to the experimental results, the accuracy of the proposed CAD system for classifying breast tumors is 95.24% and the computing time of the proposed system for calculating features of all breast tumor images is only 8% of that of a system without feature selection. Furthermore, the time needed to find a (near) optimal classification model is significantly shorter than that of grid search. It is therefore clinically useful in reducing the number of biopsies of benign lesions and offers a second reading to assist inexperienced physicians in avoiding misdiagnosis.

  7. The Construction of Support Vector Machine Classifier Using the Firefly Algorithm

    PubMed Central

    Chao, Chih-Feng; Horng, Ming-Huwi

    2015-01-01

    The setting of parameters in support vector machines (SVMs) is very important with regard to their accuracy and efficiency. In this paper, we employ the firefly algorithm to train all parameters of the SVM simultaneously, including the penalty parameter, smoothness parameter, and Lagrangian multiplier. The proposed method is called the firefly-based SVM (firefly-SVM). This tool does not consider feature selection, because SVM combined with feature selection is not suitable for multiclass classification, especially for the one-against-all multiclass SVM. In the experiments, binary and multiclass classifications are explored. In the experiments on binary classification, ten of the benchmark data sets of the University of California, Irvine (UCI), machine learning repository are used; additionally, the firefly-SVM is applied to the multiclass diagnosis of ultrasonic supraspinatus images. The classification performance of firefly-SVM is also compared to the original LIBSVM method associated with the grid search method and the particle swarm optimization based SVM (PSO-SVM). The experimental results support the use of firefly-SVM for pattern classification tasks requiring maximum accuracy. PMID:25802511

  8. An environment-adaptive management algorithm for hearing-support devices incorporating listening situation and noise type classifiers.

    PubMed

    Yook, Sunhyun; Nam, Kyoung Won; Kim, Heepyung; Hong, Sung Hwa; Jang, Dong Pyo; Kim, In Young

    2015-04-01

    In order to provide more consistent sound intelligibility for the hearing-impaired person, regardless of environment, it is necessary to adjust the setting of the hearing-support (HS) device to accommodate various environmental circumstances. In this study, a fully automatic HS device management algorithm that can adapt to various environmental situations is proposed; it is composed of a listening-situation classifier, a noise-type classifier, an adaptive noise-reduction algorithm, and a management algorithm that can selectively turn on/off one or more of the three basic algorithms-beamforming, noise-reduction, and feedback cancellation-and can also adjust internal gains and parameters of the wide-dynamic-range compression (WDRC) and noise-reduction (NR) algorithms in accordance with variations in environmental situations. Experimental results demonstrated that the implemented algorithms can classify both listening situation and ambient noise type situations with high accuracies (92.8-96.4% and 90.9-99.4%, respectively), and the gains and parameters of the WDRC and NR algorithms were successfully adjusted according to variations in environmental situation. The average values of signal-to-noise ratio (SNR), frequency-weighted segmental SNR, Perceptual Evaluation of Speech Quality, and mean opinion test scores of 10 normal-hearing volunteers of the adaptive multiband spectral subtraction (MBSS) algorithm were improved by 1.74 dB, 2.11 dB, 0.49, and 0.68, respectively, compared to the conventional fixed-parameter MBSS algorithm. These results indicate that the proposed environment-adaptive management algorithm can be applied to HS devices to improve sound intelligibility for hearing-impaired individuals in various acoustic environments.

  9. Coastal Zone Color Scanner atmospheric correction algorithm: multiple scattering effects.

    PubMed

    Gordon, H R; Castaño, D J

    1987-06-01

    An analysis of the errors due to multiple scattering which are expected to be encountered in application of the current Coastal Zone Color Scanner (CZCS) atmospheric correction algorithm is presented in detail. This was prompted by the observations of others that significant errors would be encountered if the present algorithm were applied to a hypothetical instrument possessing higher radiometric sensitivity than the present CZCS. This study provides CZCS users with sufficient information with which to judge the efficacy of the current algorithm with the current sensor and enables them to estimate the impact of the algorithm-induced errors on their applications in a variety of situations. The greatest source of error is the assumption that the molecular and aerosol contributions to the total radiance observed at the sensor can be computed separately. This leads to the requirement that a value epsilon'(lambda,lambda(0)) for the atmospheric correction parameter, which bears little resemblance to its theoretically meaningful counterpart, must usually be employed in the algorithm to obtain an accurate atmospheric correction. The behavior of epsilon'(lambda,lambda(0)) with the aerosol optical thickness and aerosol phase function is thoroughly investigated through realistic modeling of radiative transfer in a stratified atmosphere over a Fresnel reflecting ocean. A unique feature of the analysis is that it is carried out in scan coordinates rather than typical earth-sun coordinates, allowing elucidation of the errors along typical CZCS scan lines; this is important since, in the normal application of the algorithm, it is assumed that the same value of epsilon' can be used for an entire CZCS scene or at least for a reasonably large subscene. Two types of variation of epsilon' are found in models for which it would be constant in the single scattering approximation: (1) variation with scan angle in scenes in which a relatively large portion of the aerosol scattering phase function would be examined

  10. Development of Topological Correction Algorithms for ADCP Multibeam Bathymetry Measurements

    NASA Astrophysics Data System (ADS)

    Yang, Sung-Kee; Kim, Dong-Su; Kim, Soo-Jeong; Jung, Woo-Yul

    2013-04-01

    Acoustic Doppler Current Profilers (ADCPs) are increasingly popular in the river research and management communities, being primarily used for estimating stream flows. ADCP capabilities, however, entail additional features that are not fully explored, such as morphologic representation of river or reservoir beds based upon multi-beam depth measurements. In addition to flow velocity, ADCP measurements include river bathymetry information through the depth measurements acquired in individual 4 or 5 beams with a given oblique angle. Such sounding capability indicates that multi-beam ADCPs can be utilized as efficient depth sounders, more capable than conventional single-beam echo sounders. The paper introduces the post-processing algorithms required to deal with raw ADCP bathymetry measurements, including the following aspects: a) correcting the individual beam depths for tilt (pitch and roll); b) filtering outliers using SMART filters; c) transforming the corrected depths into geographical coordinates by UTM conversion; d) tagging the beam detection locations with the concurrent GPS information; and e) representing the results spatially in a GIS package. The developed algorithms are applied to the ADCP bathymetric dataset acquired from Han-Cheon on Jeju Island to validate their applicability.
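    Step a) above, correcting individual beam depths for pitch and roll, can be sketched as a small rotation of the beam unit vectors; the Janus beam geometry, azimuth convention, and rotation order below are assumptions for illustration only.

    ```python
    import numpy as np

    def corrected_depths(slant_ranges, pitch_deg, roll_deg, beam_angle_deg=20.0):
        """slant_ranges: along-beam distances for the 4 beams of a Janus configuration."""
        p, r, b = np.radians([pitch_deg, roll_deg, beam_angle_deg])
        # Beam unit vectors in instrument coordinates (x forward, y port, z down).
        az = np.radians([0.0, 180.0, 90.0, 270.0])          # assumed beam azimuths
        beams = np.column_stack([np.sin(b) * np.cos(az),
                                 np.sin(b) * np.sin(az),
                                 np.cos(b) * np.ones(4)])
        # Rotation from instrument to earth coordinates (roll about x, then pitch about y).
        Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
        Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
        beams_earth = beams @ (Ry @ Rx).T
        return np.asarray(slant_ranges) * beams_earth[:, 2]  # vertical depth per beam
    ```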

  11. Goldindec: A Novel Algorithm for Raman Spectrum Baseline Correction

    PubMed Central

    Liu, Juntao; Sun, Jianyang; Huang, Xiuzhen; Li, Guojun; Liu, Binqiang

    2016-01-01

    Raman spectra have been widely used in biology, physics, and chemistry and have become an essential tool for the study of macromolecules. Nevertheless, the raw Raman signal is often obscured by a broad background curve (or baseline) due to the intrinsic fluorescence of the organic molecules, which leads to unpredictable negative effects in quantitative analysis of Raman spectra. Therefore, it is essential to correct this baseline before analyzing raw Raman spectra. Polynomial fitting has proven to be the simplest and most convenient method, and it has high accuracy. In polynomial fitting, the cost function used and its parameters are crucial. This article proposes a novel iterative algorithm named Goldindec, freely available for noncommercial use as noted in text, with a new cost function that not only suppresses the influence of large peaks but also solves the problem of low correction accuracy when the peak number is high. Goldindec automatically generates parameters from the raw data rather than by empirical choice, as in previous methods. Comparisons with other algorithms on the benchmark data show that Goldindec has a higher accuracy and computational efficiency, and is hardly affected by large peaks, peak number, and wavenumber. PMID:26037638
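    As background for the polynomial-fitting family that Goldindec improves upon, a minimal iterative polynomial baseline fit (peak clipping) can be sketched as follows; this is not the Goldindec cost function, and the order and iteration count are arbitrary.

    ```python
    import numpy as np

    def poly_baseline(wavenumber, intensity, order=4, n_iter=100):
        """Iteratively fit a polynomial baseline, clipping points above the fit."""
        y = np.asarray(intensity, dtype=float).copy()
        baseline = y
        for _ in range(n_iter):
            coeffs = np.polyfit(wavenumber, y, order)
            baseline = np.polyval(coeffs, wavenumber)
            y = np.minimum(y, baseline)      # peaks stop pulling the polynomial upward
        return baseline

    # corrected = intensity - poly_baseline(wavenumber, intensity)
    ```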

  12. Quantum Error Correction Protects Quantum Search Algorithms Against Decoherence.

    PubMed

    Botsinis, Panagiotis; Babar, Zunaira; Alanis, Dimitrios; Chandra, Daryus; Nguyen, Hung; Ng, Soon Xin; Hanzo, Lajos

    2016-12-07

    When quantum computing becomes a wide-spread commercial reality, Quantum Search Algorithms (QSA) and especially Grover's QSA will inevitably be one of their main applications, constituting their cornerstone. Most of the literature assumes that the quantum circuits are free from decoherence. Practically, decoherence will remain unavoidable as is the Gaussian noise of classic circuits imposed by the Brownian motion of electrons, hence it may have to be mitigated. In this contribution, we investigate the effect of quantum noise on the performance of QSAs, in terms of their success probability as a function of the database size to be searched, when decoherence is modelled by depolarizing channels' deleterious effects imposed on the quantum gates. Moreover, we employ quantum error correction codes for limiting the effects of quantum noise and for correcting quantum flips. More specifically, we demonstrate that, when we search for a single solution in a database having 4096 entries using Grover's QSA at an aggressive depolarizing probability of 10(-3), the success probability of the search is 0.22 when no quantum coding is used, which is improved to 0.96 when Steane's quantum error correction code is employed. Finally, apart from Steane's code, the employment of Quantum Bose-Chaudhuri-Hocquenghem (QBCH) codes is also considered.

  13. Quantum Error Correction Protects Quantum Search Algorithms Against Decoherence

    NASA Astrophysics Data System (ADS)

    Botsinis, Panagiotis; Babar, Zunaira; Alanis, Dimitrios; Chandra, Daryus; Nguyen, Hung; Ng, Soon Xin; Hanzo, Lajos

    2016-12-01

    When quantum computing becomes a wide-spread commercial reality, Quantum Search Algorithms (QSA) and especially Grover’s QSA will inevitably be one of their main applications, constituting their cornerstone. Most of the literature assumes that the quantum circuits are free from decoherence. Practically, decoherence will remain unavoidable as is the Gaussian noise of classic circuits imposed by the Brownian motion of electrons, hence it may have to be mitigated. In this contribution, we investigate the effect of quantum noise on the performance of QSAs, in terms of their success probability as a function of the database size to be searched, when decoherence is modelled by depolarizing channels’ deleterious effects imposed on the quantum gates. Moreover, we employ quantum error correction codes for limiting the effects of quantum noise and for correcting quantum flips. More specifically, we demonstrate that, when we search for a single solution in a database having 4096 entries using Grover’s QSA at an aggressive depolarizing probability of 10‑3, the success probability of the search is 0.22 when no quantum coding is used, which is improved to 0.96 when Steane’s quantum error correction code is employed. Finally, apart from Steane’s code, the employment of Quantum Bose-Chaudhuri-Hocquenghem (QBCH) codes is also considered.

  14. Quantum Error Correction Protects Quantum Search Algorithms Against Decoherence

    PubMed Central

    Botsinis, Panagiotis; Babar, Zunaira; Alanis, Dimitrios; Chandra, Daryus; Nguyen, Hung; Ng, Soon Xin; Hanzo, Lajos

    2016-01-01

    When quantum computing becomes a wide-spread commercial reality, Quantum Search Algorithms (QSA) and especially Grover’s QSA will inevitably be one of their main applications, constituting their cornerstone. Most of the literature assumes that the quantum circuits are free from decoherence. Practically, decoherence will remain unavoidable as is the Gaussian noise of classic circuits imposed by the Brownian motion of electrons, hence it may have to be mitigated. In this contribution, we investigate the effect of quantum noise on the performance of QSAs, in terms of their success probability as a function of the database size to be searched, when decoherence is modelled by depolarizing channels’ deleterious effects imposed on the quantum gates. Moreover, we employ quantum error correction codes for limiting the effects of quantum noise and for correcting quantum flips. More specifically, we demonstrate that, when we search for a single solution in a database having 4096 entries using Grover’s QSA at an aggressive depolarizing probability of 10−3, the success probability of the search is 0.22 when no quantum coding is used, which is improved to 0.96 when Steane’s quantum error correction code is employed. Finally, apart from Steane’s code, the employment of Quantum Bose-Chaudhuri-Hocquenghem (QBCH) codes is also considered. PMID:27924865

  15. The Algorithm Theoretical Basis Document for Tidal Corrections

    NASA Technical Reports Server (NTRS)

    Fricker, Helen A.; Ridgway, Jeff R.; Minster, Jean-Bernard; Yi, Donghui; Bentley, Charles R.

    2012-01-01

    This Algorithm Theoretical Basis Document deals with the tidal corrections that need to be applied to range measurements made by the Geoscience Laser Altimeter System (GLAS). These corrections result from the action of ocean tides and Earth tides which lead to deviations from an equilibrium surface. Since the effect of tides is dependent on the time of measurement, it is necessary to remove the instantaneous tide components when processing altimeter data, so that all measurements are made to the equilibrium surface. The three main tide components to consider are the ocean tide, the solid-earth tide and the ocean loading tide. There are also long period ocean tides and the pole tide. The approximate magnitudes of these components are illustrated in Table 1, together with estimates of their uncertainties (i.e. the residual error after correction). All of these components are important for GLAS measurements over the ice sheets since centimeter-level accuracy for surface elevation change detection is required. The effect of each tidal component is to be removed by approximating its magnitude using tidal prediction models. Conversely, assimilation of GLAS measurements into tidal models will help to improve them, especially at high latitudes.

  16. Atmospheric Correction Algorithm for Hyperspectral Remote Sensing of Ocean Color from Space

    DTIC Science & Technology

    2000-02-20

    Existing atmospheric correction algorithms for multichannel remote sensing of ocean color from space were designed for retrieving water-leaving...atmospheric correction algorithm for hyperspectral remote sensing of ocean color with the near-future Coastal Ocean Imaging Spectrometer. The algorithm uses

  17. Supervised Machine Learning Algorithms Can Classify Open-Text Feedback of Doctor Performance With Human-Level Accuracy

    PubMed Central

    2017-01-01

    Background Machine learning techniques may be an effective and efficient way to classify open-text reports on doctor’s activity for the purposes of quality assurance, safety, and continuing professional development. Objective The objective of the study was to evaluate the accuracy of machine learning algorithms trained to classify open-text reports of doctor performance and to assess the potential for classifications to identify significant differences in doctors’ professional performance in the United Kingdom. Methods We used 1636 open-text comments (34,283 words) relating to the performance of 548 doctors collected from a survey of clinicians’ colleagues using the General Medical Council Colleague Questionnaire (GMC-CQ). We coded 77.75% (1272/1636) of the comments into 5 global themes (innovation, interpersonal skills, popularity, professionalism, and respect) using a qualitative framework. We trained 8 machine learning algorithms to classify comments and assessed their performance using several training samples. We evaluated doctor performance using the GMC-CQ and compared scores between doctors with different classifications using t tests. Results Individual algorithm performance was high (range F score=.68 to .83). Interrater agreement between the algorithms and the human coder was highest for codes relating to “popular” (recall=.97), “innovator” (recall=.98), and “respected” (recall=.87) codes and was lower for the “interpersonal” (recall=.80) and “professional” (recall=.82) codes. A 10-fold cross-validation demonstrated similar performance in each analysis. When combined together into an ensemble of multiple algorithms, mean human-computer interrater agreement was .88. Comments that were classified as “respected,” “professional,” and “interpersonal” related to higher doctor scores on the GMC-CQ compared with comments that were not classified (P<.05). Scores did not vary between doctors who were rated as popular or
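    A hedged sketch of the overall pipeline (bag-of-words features plus several off-the-shelf classifiers scored by cross-validated F score) is shown below; the comments, theme labels, and model choices are toy placeholders, not the study's data or tuned models.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    comments = [  # toy stand-ins for free-text colleague comments
        "always courteous and respected by colleagues",
        "introduces new techniques to the department",
        "communication with patients could improve",
        "well liked and popular with the whole team",
        "keeps meticulous records and acts professionally",
        "rarely engages with the wider clinical team",
    ]
    labels = [1, 0, 0, 1, 1, 0]   # 1 = "respected" theme present (toy labels)

    for model in (LogisticRegression(), LinearSVC(), MultinomialNB()):
        pipe = make_pipeline(TfidfVectorizer(), model)
        scores = cross_val_score(pipe, comments, labels, cv=3, scoring="f1")
        print(type(model).__name__, round(scores.mean(), 2))
    ```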

  18. Structure and weights optimisation of a modified Elman network emotion classifier using hybrid computational intelligence algorithms: a comparative study

    NASA Astrophysics Data System (ADS)

    Sheikhan, Mansour; Abbasnezhad Arabi, Mahdi; Gharavian, Davood

    2015-10-01

    Artificial neural networks are efficient models in pattern recognition applications, but their performance is dependent on employing suitable structure and connection weights. This study used a hybrid method for obtaining the optimal weight set and architecture of a recurrent neural emotion classifier based on gravitational search algorithm (GSA) and its binary version (BGSA), respectively. By considering the features of speech signal that were related to prosody, voice quality, and spectrum, a rich feature set was constructed. To select more efficient features, a fast feature selection method was employed. The performance of the proposed hybrid GSA-BGSA method was compared with similar hybrid methods based on particle swarm optimisation (PSO) algorithm and its binary version, PSO and discrete firefly algorithm, and hybrid of error back-propagation and genetic algorithm that were used for optimisation. Experimental tests on Berlin emotional database demonstrated the superior performance of the proposed method using a lighter network structure.

  19. Practical algorithms for algebraic and logical correction in precedent-based recognition problems

    NASA Astrophysics Data System (ADS)

    Ablameyko, S. V.; Biryukov, A. S.; Dokukin, A. A.; D'yakonov, A. G.; Zhuravlev, Yu. I.; Krasnoproshin, V. V.; Obraztsov, V. A.; Romanov, M. Yu.; Ryazanov, V. V.

    2014-12-01

    Practical precedent-based recognition algorithms relying on logical or algebraic correction of various heuristic recognition algorithms are described. The recognition problem is solved in two stages. First, an arbitrary object is recognized independently by algorithms from a group. Then a final collective solution is produced by a suitable corrector. The general concepts of the algebraic approach are presented, practical algorithms for logical and algebraic correction are described, and results of their comparison are given.

  20. Classifying performance impairment in response to sleep loss using pattern recognition algorithms on single session testing

    PubMed Central

    St. Hilaire, Melissa A.; Sullivan, Jason P.; Anderson, Clare; Cohen, Daniel A.; Barger, Laura K.; Lockley, Steven W.; Klerman, Elizabeth B.

    2012-01-01

    There is currently no “gold standard” marker of cognitive performance impairment resulting from sleep loss. We utilized pattern recognition algorithms to determine which features of data collected under controlled laboratory conditions could most reliably identify cognitive performance impairment in response to sleep loss using data from only one testing session, such as would occur in the “real world” or field conditions. A training set for testing the pattern recognition algorithms was developed using objective Psychomotor Vigilance Task (PVT) and subjective Karolinska Sleepiness Scale (KSS) data collected from laboratory studies during which subjects were sleep deprived for 26 – 52 hours. The algorithm was then tested in data from both laboratory and field experiments. The pattern recognition algorithm was able to identify performance impairment with a single testing session in individuals studied under laboratory conditions using PVT, KSS, length of time awake and time of day information with sensitivity and specificity as high as 82%. When this algorithm was tested on data collected under real-world conditions from individuals whose data were not in the training set, accuracy of predictions for individuals categorized with low performance impairment were as high as 98%. Predictions for medium and severe performance impairment were less accurate. We conclude that pattern recognition algorithms may be a promising method for identifying performance impairment in individuals using only current information about the individual’s behavior. Single testing features (e.g., number of PVT lapses) with high correlation with performance impairment in the laboratory setting may not be the best indicators of performance impairment under real-world conditions. Pattern recognition algorithms should be further tested for their ability to be used in conjunction with other assessments of sleepiness in real-world conditions to quantify performance impairment in

  1. An algorithm for the treatment of schizophrenia in the correctional setting: the Forensic Algorithm Project.

    PubMed

    Buscema, C A; Abbasi, Q A; Barry, D J; Lauve, T H

    2000-10-01

    The Forensic Algorithm Project (FAP) was born of the need for a holistic approach in the treatment of the inmate with schizophrenia. Schizophrenia was chosen as the first entity to be addressed by the algorithm because of its refractory nature and high rate of recidivism in the correctional setting. Schizophrenia is regarded as a spectrum disorder, with symptom clusters and behaviors ranging from positive to negative symptoms to neurocognitive dysfunction and affective instability. Furthermore, the clinical picture is clouded by Axis II symptomatology (particularly prominent in the inmate population), comorbid Axis I disorders, and organicity. Four subgroups of schizophrenia were created to coincide with common clinical presentations in the forensic inpatient facility and also to parallel 4 tracks of intervention, consisting of pharmacologic management and programming recommendations. The algorithm begins with any antipsychotic medication and proceeds to atypical neuroleptic usage, augmentation with other psychotropic agents, and, finally, the use of clozapine as the common pathway for refractory schizophrenia. Outcome measurement of pharmacologic intervention is assessed every 6 weeks through the use of a 4-item subscale, specific for each forensic subgroup. A "floating threshold" of 40% symptom severity reduction on Positive and Negative Syndrome Scale and Brief Psychiatric Rating Scale items over a 6-week period is considered an indication for neuroleptic continuation. The forensic algorithm differs from other clinical practice guidelines in that specific programming in certain prison environments is stipulated. Finally, a social commentary on the importance of state-of-the-art psychiatric treatment for all members of society is woven into the clinical tapestry of this article.

  2. Atmospheric Correction Prototype Algorithm for High Spatial Resolution Multispectral Earth Observing Imaging Systems

    NASA Technical Reports Server (NTRS)

    Pagnutti, Mary

    2006-01-01

    This viewgraph presentation reviews the creation of a prototype algorithm for atmospheric correction using high spatial resolution earth observing imaging systems. The objective of the work was to evaluate accuracy of a prototype algorithm that uses satellite-derived atmospheric products to generate scene reflectance maps for high spatial resolution (HSR) systems. This presentation focused on preliminary results of only the satellite-based atmospheric correction algorithm.

  3. A fuzzy hill-climbing algorithm for the development of a compact associative classifier

    NASA Astrophysics Data System (ADS)

    Mitra, Soumyaroop; Lam, Sarah S.

    2012-02-01

    Classification, a data mining technique, has widespread applications including medical diagnosis, targeted marketing, and others. Knowledge discovery from databases in the form of association rules is one of the important data mining tasks. An integrated approach, classification based on association rules, has drawn the attention of the data mining community over the last decade. While attention has been mainly focused on increasing classifier accuracies, not much effort has been devoted to building interpretable and less complex models. This paper discusses the development of a compact associative classification model using a hill-climbing approach and fuzzy sets. The proposed methodology builds the rule-base by selecting rules which contribute towards increasing training accuracy, thus balancing classification accuracy with the number of classification association rules. The results indicated that the proposed associative classification model can achieve competitive accuracies on benchmark datasets with continuous attributes and lend better interpretability when compared with other rule-based systems.

  4. Memory based active contour algorithm using pixel-level classified images for colon crypt segmentation.

    PubMed

    Cohen, Assaf; Rivlin, Ehud; Shimshoni, Ilan; Sabo, Edmond

    2015-07-01

    In this paper, we introduce a novel method for detection and segmentation of crypts in colon biopsies. Most of the approaches proposed in the literature try to segment the crypts using only the biopsy image without understanding the meaning of each pixel. The proposed method differs in that we segment the crypts using an automatically generated pixel-level classification image of the original biopsy image and handle the artifacts due to the sectioning process and variance in color, shape and size of the crypts. The biopsy image pixels are classified to nuclei, immune system, lumen, cytoplasm, stroma and goblet cells. The crypts are then segmented using a novel active contour approach, where the external force is determined by the semantics of each pixel and the model of the crypt. The active contour is applied for every lumen candidate detected using the pixel-level classification. Finally, a false positive crypt elimination process is performed to remove segmentation errors. This is done by measuring their adherence to the crypt model using the pixel level classification results. The method was tested on 54 biopsy images containing 4944 healthy and 2236 cancerous crypts, resulting in 87% detection of the crypts with 9% of false positive segments (segments that do not represent a crypt). The segmentation accuracy of the true positive segments is 96%.

  5. A Comparison Study of Classifier Algorithms for Cross-Person Physical Activity Recognition.

    PubMed

    Saez, Yago; Baldominos, Alejandro; Isasi, Pedro

    2016-12-30

    Physical activity is widely known to be one of the key elements of a healthy life. The many benefits of physical activity described in the medical literature include weight loss and reductions in the risk factors for chronic diseases. With the recent advances in wearable devices, such as smartwatches or physical activity wristbands, motion tracking sensors are becoming pervasive, which has led to an impressive growth in the amount of physical activity data available and an increasing interest in recognizing which specific activity a user is performing. Moreover, big data and machine learning are now cross-fertilizing each other in an approach called "deep learning", which consists of massive artificial neural networks able to detect complicated patterns from enormous amounts of input data to learn classification models. This work compares various state-of-the-art classification techniques for automatic cross-person activity recognition under different scenarios that vary widely in how much information is available for analysis. We have incorporated deep learning by using Google's TensorFlow framework. The data used in this study were acquired from PAMAP2 (Physical Activity Monitoring in the Ageing Population), a publicly available dataset containing physical activity data. To perform cross-person prediction, we used the leave-one-subject-out (LOSO) cross-validation technique. When working with large training sets, the best classifiers obtain very high average accuracies (e.g., 96% using extra randomized trees). However, when the data volume is drastically reduced (where available data are only 0.001% of the continuous data), deep neural networks performed the best, achieving 60% in overall prediction accuracy. We found that even when working with only approximately 22.67% of the full dataset, we can statistically obtain the same results as when working with the full dataset. This finding enables the design of more energy-efficient devices and facilitates cold
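    The leave-one-subject-out protocol used above can be expressed directly with scikit-learn's LeaveOneGroupOut; the data, feature dimensionality, and classifier below are synthetic stand-ins for the PAMAP2 setup.

    ```python
    import numpy as np
    from sklearn.ensemble import ExtraTreesClassifier
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(900, 20))            # e.g., windowed motion-sensor features (synthetic)
    y = rng.integers(0, 6, size=900)          # activity labels (synthetic)
    subject = np.repeat(np.arange(9), 100)    # 9 subjects, 100 windows each

    clf = ExtraTreesClassifier(n_estimators=100, random_state=0)
    scores = cross_val_score(clf, X, y, groups=subject, cv=LeaveOneGroupOut())
    print("per-subject accuracies:", scores, "mean:", scores.mean())
    ```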

  6. A Comparison Study of Classifier Algorithms for Cross-Person Physical Activity Recognition

    PubMed Central

    Saez, Yago; Baldominos, Alejandro; Isasi, Pedro

    2016-01-01

    Physical activity is widely known to be one of the key elements of a healthy life. The many benefits of physical activity described in the medical literature include weight loss and reductions in the risk factors for chronic diseases. With the recent advances in wearable devices, such as smartwatches or physical activity wristbands, motion tracking sensors are becoming pervasive, which has led to an impressive growth in the amount of physical activity data available and an increasing interest in recognizing which specific activity a user is performing. Moreover, big data and machine learning are now cross-fertilizing each other in an approach called “deep learning”, which consists of massive artificial neural networks able to detect complicated patterns from enormous amounts of input data to learn classification models. This work compares various state-of-the-art classification techniques for automatic cross-person activity recognition under different scenarios that vary widely in how much information is available for analysis. We have incorporated deep learning by using Google’s TensorFlow framework. The data used in this study were acquired from PAMAP2 (Physical Activity Monitoring in the Ageing Population), a publicly available dataset containing physical activity data. To perform cross-person prediction, we used the leave-one-subject-out (LOSO) cross-validation technique. When working with large training sets, the best classifiers obtain very high average accuracies (e.g., 96% using extra randomized trees). However, when the data volume is drastically reduced (where available data are only 0.001% of the continuous data), deep neural networks performed the best, achieving 60% in overall prediction accuracy. We found that even when working with only approximately 22.67% of the full dataset, we can statistically obtain the same results as when working with the full dataset. This finding enables the design of more energy-efficient devices and facilitates cold

  7. A correction to a highly accurate Voigt function algorithm

    NASA Technical Reports Server (NTRS)

    Shippony, Z.; Read, W. G.

    2002-01-01

    An algorithm for rapidly computing the complex Voigt function was published by Shippony and Read. Its claimed accuracy was 1 part in 10^8. It was brought to our attention by Wells that the algorithm of Shippony and Read was not meeting its claimed accuracy for extremely small but non-zero y values. Although true, the fix to the code is so trivial that this note suffices for those who use this algorithm.

  8. Evaluation of algorithms for automated phase correction of NMR spectra.

    PubMed

    de Brouwer, Hans

    2009-12-01

    In our attempt to fully automate the data acquisition and processing of NMR analysis of dissolved synthetic polymers, phase correction was found to be the most challenging aspect. Several approaches in the literature were evaluated but none of these was found to be capable of phasing NMR spectra with sufficient robustness and high enough accuracy to fully eliminate intervention by a human operator. Step by step, aspects from the process of manual/visual phase correction were translated into mathematical concepts and evaluated. This included area minimization, peak height maximization, negative peak minimization and baseline correction. It was found that not one single approach would lead to acceptable results but that a combination of aspects was required, in line again with the process of manual phase correction. The combination of baseline correction, area minimization and negative area penalization was found to give the desired results. The robustness was found to be 100%, which means that the correct zeroth order and first order phasing parameters are returned independent of the position of the starting point of the search in this parameter space. When applied to high signal-to-noise proton spectra, the accuracy was such that the returned phasing parameters were within a distance of 0.1-0.4 degrees in the two-dimensional parameter space, which resulted in an average error of 0.1% in calculated properties such as copolymer composition and end groups.
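    A hedged sketch of the combined criterion described above (baseline offset removal, area minimization, and negative-area penalization over the zeroth- and first-order phase parameters) is given below; the exact cost function and optimizer in the paper differ in detail.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def apply_phase(spectrum, phi0, phi1):
        """Zeroth-order (phi0) and first-order (phi1) phase correction, both in radians."""
        x = np.linspace(-0.5, 0.5, len(spectrum))      # normalized frequency axis
        return spectrum * np.exp(1j * (phi0 + phi1 * x))

    def cost(params, spectrum, penalty=100.0):
        real = np.real(apply_phase(spectrum, *params))
        real = real - np.median(real)                  # crude baseline (offset) correction
        area = np.sum(np.abs(real))                    # area-minimization term
        negative = np.sum(np.abs(real[real < 0]))      # negative-peak penalization term
        return area + penalty * negative

    def auto_phase(spectrum):
        """spectrum: complex NMR spectrum; returns the phased spectrum and (phi0, phi1)."""
        res = minimize(cost, x0=[0.0, 0.0], args=(spectrum,), method="Nelder-Mead")
        return apply_phase(spectrum, *res.x), res.x
    ```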

  9. Feature selection for outcome prediction in oesophageal cancer using genetic algorithm and random forest classifier.

    PubMed

    Paul, Desbordes; Su, Ruan; Romain, Modzelewski; Sébastien, Vauclin; Pierre, Vera; Isabelle, Gardin

    2016-12-28

    The outcome prediction of patients can greatly help to personalize cancer treatment. A large number of quantitative features (clinical exams, imaging, …) are potentially useful for assessing patient outcome. The challenge is to choose the most predictive subset of features. In this paper, we propose a new feature selection strategy called GARF (genetic algorithm based on random forest), applied to features extracted from positron emission tomography (PET) images and clinical data. The most relevant features, predictive of the therapeutic response or prognostic of patient survival 3 years after the end of treatment, were selected using GARF on a cohort of 65 patients with locally advanced oesophageal cancer eligible for chemo-radiation therapy. The most relevant predictive results were obtained with a subset of 9 features, leading to a random forest misclassification rate of 18±4% and an area under the receiver operating characteristic (ROC) curve (AUC) of 0.823±0.032. The most relevant prognostic results were obtained with 8 features, leading to an error rate of 20±7% and an AUC of 0.750±0.108. Both predictive and prognostic results show better performance using GARF than using the 4 other studied methods.
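    A compact sketch of a GARF-style wrapper, a simple genetic algorithm over binary feature masks scored by a random forest, is shown below; the population size, operators, and scoring are illustrative choices rather than the authors' implementation.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def ga_feature_selection(X, y, pop_size=12, n_gen=10, mut_rate=0.05, seed=0):
        rng = np.random.default_rng(seed)
        n_feat = X.shape[1]
        pop = rng.integers(0, 2, size=(pop_size, n_feat), dtype=bool)

        def fitness(mask):
            if not mask.any():
                return 0.0
            rf = RandomForestClassifier(n_estimators=50, random_state=0)
            return cross_val_score(rf, X[:, mask], y, cv=3).mean()

        for _ in range(n_gen):
            scores = np.array([fitness(ind) for ind in pop])
            parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]      # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = parents[rng.integers(len(parents), size=2)]
                cut = rng.integers(1, n_feat)                              # one-point crossover
                child = np.concatenate([a[:cut], b[cut:]])
                children.append(child ^ (rng.random(n_feat) < mut_rate))  # bit-flip mutation
            pop = np.vstack([parents, children])
        scores = np.array([fitness(ind) for ind in pop])
        return pop[np.argmax(scores)]     # boolean mask of the selected feature subset
    ```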

  10. [Baseline Correction Algorithm for Raman Spectroscopy Based on Non-Uniform B-Spline].

    PubMed

    Fan, Xian-guang; Wang, Hai-tao; Wang, Xin; Xu, Ying-jie; Wang, Xiu-fen; Que, Jing

    2016-03-01

    As one of the necessary steps in Raman spectroscopy data processing, baseline correction is commonly used to eliminate the interference of fluorescence spectra. The traditional baseline correction algorithm based on polynomial fitting is simple and easy to implement, but its flexibility is poor due to the uncertain fitting order. In this paper, instead of using polynomial fitting, a non-uniform B-spline is proposed to overcome the shortcomings of the traditional method. Building on the advantages of the traditional algorithm, the node vector of the non-uniform B-spline is fixed adaptively using the peak positions of the original Raman spectrum, and the baseline is then fitted with a fixed order. To verify the algorithm, the Raman spectra of parathion-methyl and colza oil are measured and their baselines are corrected using this algorithm; the results are compared with two other baseline correction algorithms. The experimental results show that the effect of baseline correction is improved by using this algorithm with a fixed fitting order and fewer parameters, and there is no over- or under-fitting. Therefore, the non-uniform B-spline is shown to be an effective baseline correction algorithm for Raman spectroscopy.

  11. An efficient algorithm coupled with synthetic minority over-sampling technique to classify imbalanced PubChem BioAssay data.

    PubMed

    Hao, Ming; Wang, Yanli; Bryant, Stephen H

    2014-01-02

    Imbalanced datasets are commonly generated from high-throughput screening (HTS). For a given dataset, if the imbalanced nature is not taken into account, most classification methods tend to produce high predictive accuracy for the majority class but significantly poorer performance for the minority class. In this work, an efficient algorithm, GLMBoost, coupled with the Synthetic Minority Over-sampling TEchnique (SMOTE), is developed and utilized to overcome the problem for several imbalanced datasets from PubChem BioAssay. With the proposed combinatorial method, the rare samples (active compounds), for which poor results are usually obtained, can be detected with high balanced accuracy (Gmean). As a comparison with GLMBoost, Random Forest (RF) combined with SMOTE is also adopted to classify the same datasets. Our results show that the former (GLMBoost+SMOTE) not only exhibits higher performance as measured by the percentage of correct classification for the rare samples (Sensitivity) and Gmean, but also demonstrates greater computational efficiency than the latter (RF+SMOTE). Therefore, we hope that the proposed combinatorial algorithm based on GLMBoost and SMOTE could be extensively used to tackle the imbalanced classification problem.
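    The SMOTE-plus-boosting idea can be sketched with imbalanced-learn and scikit-learn; since GLMBoost is an R/mboost component, a gradient boosting classifier stands in for it here, and the dataset is synthetic.

    ```python
    import numpy as np
    from imblearn.over_sampling import SMOTE
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import recall_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=5000, weights=[0.97, 0.03], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # oversample the minority class
    clf = GradientBoostingClassifier(random_state=0).fit(X_res, y_res)

    pred = clf.predict(X_te)
    sens = recall_score(y_te, pred)                 # sensitivity on the rare (active) class
    spec = recall_score(y_te, pred, pos_label=0)    # specificity on the majority class
    print("sensitivity:", sens, "G-mean:", np.sqrt(sens * spec))
    ```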

  12. Weighted SVD algorithm for close-orbit correction and 10 Hz feedback in RHIC

    SciTech Connect

    Liu C.; Hulsart, R.; Marusic, A.; Michnoff, R.; Minty, M.; Ptitsyn, V.

    2012-05-20

    Measurements of the beam position along an accelerator are typically treated equally in standard SVD-based orbit correction algorithms, so the residual errors, modulo the local beta function, are distributed equally among the measurement locations. However, sometimes a more stable orbit at select locations is desirable. In this paper, we introduce an algorithm for weighting the beam position measurements to achieve a more stable local orbit. The results of its application to close-orbit correction and 10 Hz orbit feedback are presented.
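    A minimal sketch of the weighted SVD idea, weighting selected beam position monitors more heavily before forming the pseudo-inverse of the orbit response matrix, is shown below; the sign convention and truncation rule are assumptions for illustration.

    ```python
    import numpy as np

    def weighted_svd_correction(R, orbit, weights, n_singular=None):
        """R: orbit response matrix (BPMs x correctors); orbit: measured orbit at the BPMs."""
        w = np.sqrt(np.asarray(weights, dtype=float))
        Rw = w[:, None] * R                        # weight each BPM row
        U, s, Vt = np.linalg.svd(Rw, full_matrices=False)
        if n_singular is not None:                 # optional truncation of small singular values
            s, U, Vt = s[:n_singular], U[:, :n_singular], Vt[:n_singular]
        # Corrector kicks that minimize the weighted residual orbit.
        return -Vt.T @ ((U.T @ (w * orbit)) / s)
    ```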

  13. Algorithm-supported visual error correction (AVEC) of heart rate measurements in dogs, Canis lupus familiaris.

    PubMed

    Schöberl, Iris; Kortekaas, Kim; Schöberl, Franz F; Kotrschal, Kurt

    2015-12-01

    Dog heart rate (HR) is characterized by a respiratory sinus arrhythmia, and therefore makes an automatic algorithm for error correction of HR measurements hard to apply. Here, we present a new method of error correction for HR data collected with the Polar system, including (1) visual inspection of the data, (2) a standardized way to decide with the aid of an algorithm whether or not a value is an outlier (i.e., "error"), and (3) the subsequent removal of this error from the data set. We applied our new error correction method to the HR data of 24 dogs and compared the uncorrected and corrected data, as well as the algorithm-supported visual error correction (AVEC) with the Polar error correction. The results showed that fewer values were identified as errors after AVEC than after the Polar error correction (p < .001). After AVEC, the HR standard deviation and variability (HRV; i.e., RMSSD, pNN50, and SDNN) were significantly greater than after correction by the Polar tool (all p < .001). Furthermore, the HR data strings with deleted values seemed to be closer to the original data than were those with inserted means. We concluded that our method of error correction is more suitable for dog HR and HR variability than is the customized Polar error correction, especially because AVEC decreases the likelihood of Type I errors, preserves the natural variability in HR, and does not lead to a time shift in the data.

  14. A computational algorithm for classifying step and spin turns using pelvic center of mass trajectory and foot position.

    PubMed

    Golyski, Pawel R; Hendershot, Brad D

    2017-01-30

    Transient changes in direction during ambulation are typically performed using a step (outside) or spin (inside) turning strategy, often identified through subjective and time-consuming visual rating. Here, we present a computational, marker-based classification method utilizing pelvic center of mass (pCOM) trajectory and time-distance parameters to quantitatively identify turning strategy. Relative to visual evaluation by three independent raters, sensitivity, specificity, and overall accuracy of the pCOM-based classification method were evaluated for 90-degree turns performed by 3 separate populations (5 uninjured controls, 5 persons with transtibial amputation, and 5 persons with transfemoral amputation); each completed turns using two distinct cueing paradigms (i.e., laser-guided "freeform" and verbally-guided "forced" turns). Secondarily, we compared the pCOM-based turn classification method to adapted versions of two existing computational turn classifiers which utilize trunk and shank angular velocities (AV). Among 366 (of 486 total) turns with unanimous intra- and inter-rater agreement, the pCOM-based classification algorithm was 94.5% accurate, with 96.6% sensitivity (accuracy of spin turn classification), and 93.5% specificity (accuracy of step turn classification). The pCOM-based algorithm (vs. both AV-based methods) was more accurate (94.5% vs. 81.1-80.6%; P<0.001) overall, as well as specifically in freeform (92.9 vs. 80.4-76.8%; P<0.003) and forced (96.0 vs. 83.8-81.8%; P<0.001) cueing, and among individuals with (92.4 vs. 80.2-78.8%; P<0.001) and without (99.1 vs. 86.2-80.8%; P<0.001) amputation. The pCOM-based algorithm provides an efficient and objective method to accurately classify 90-degree turning strategies using optical motion capture in a laboratory setting, and may be extended to various cueing paradigms and/or populations with altered gait.

  15. Self-Correcting HVAC Controls: Algorithms for Sensors and Dampers in Air-Handling Units

    SciTech Connect

    Fernandez, Nicholas; Brambley, Michael R.; Katipamula, Srinivas

    2009-12-31

    This report documents the self-correction algorithms developed in the Self-Correcting Heating, Ventilating and Air-Conditioning (HVAC) Controls project funded jointly by the Bonneville Power Administration and the Building Technologies Program of the U.S. Department of Energy. The algorithms address faults for temperature sensors, humidity sensors, and dampers in air-handling units and correction of persistent manual overrides of automated control systems. All faults considered create energy waste when left uncorrected as is frequently the case in actual systems.

  16. Improved near-infrared ocean reflectance correction algorithm for satellite ocean color data processing.

    PubMed

    Jiang, Lide; Wang, Menghua

    2014-09-08

    A new approach for the near-infrared (NIR) ocean reflectance correction in atmospheric correction for satellite ocean color data processing in coastal and inland waters is proposed, which combines the advantages of three existing NIR ocean reflectance correction algorithms, i.e., that of Bailey et al. (2010) [Opt. Express 18, 7521 (2010)] and those of [Appl. Opt. 39, 897 (2000)] and [Opt. Express 20, 741 (2012)], and is named BMW. The normalized water-leaving radiance spectra nLw(λ) obtained from this new NIR-based atmospheric correction approach are evaluated against those obtained from the shortwave infrared (SWIR)-based atmospheric correction algorithm, as well as those from some existing NIR atmospheric correction algorithms, based on several case studies. The scenes selected for case studies are obtained from two different satellite ocean color sensors, i.e., the Moderate Resolution Imaging Spectroradiometer (MODIS) on the satellite Aqua and the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (SNPP), with an emphasis on several turbid water regions in the world. The new approach has been shown to produce nLw(λ) spectra most consistent with the SWIR results among all NIR algorithms. Furthermore, validations against the in situ measurements also show that in less turbid water regions the new approach produces reasonable results, comparable to those of the current operational algorithm. In addition, by combining the new NIR atmospheric correction with the SWIR-based approach, the new NIR-SWIR atmospheric correction can produce further improved ocean color products. The new NIR atmospheric correction can be implemented in a global operational satellite ocean color data processing system.

  17. A scene based nonuniformity correction algorithm for line scanning infrared image

    NASA Astrophysics Data System (ADS)

    Fan, Fan; Ma, Yong; Zhou, Bo; Fang, Yu; Han, Jinhui; Liu, Zhe

    2014-11-01

    In this paper, a fast scene-based nonuniformity correction algorithm using Landweber iteration is proposed for line scanning infrared imaging systems (LSIR). The method introduces a novel framework of nonuniformity correction for LSIR by optimization. More specifically, a "desired" image is first obtained from the corrected image with a 1D Gaussian filter; then a weighted mean square error optimization function is established in each line to minimize the mean square error between the corrected values and the "desired" image. The correction parameters are updated adaptively by Landweber iteration, and the desired image is then updated. A stopping rule for the framework is also proposed. The quantitative comparisons with other state-of-the-art methods demonstrate that the proposed algorithm has low complexity and is much more robust for fixed-pattern noise reduction in static scenes.
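    For reference, the Landweber update used by the correction framework is, in its generic linear-inverse-problem form, a simple gradient iteration; the sketch below shows that generic form, not the paper's per-line gain/offset parameterization.

    ```python
    import numpy as np

    def landweber(A, b, n_iter=200, tau=None):
        """Landweber iteration for the least-squares solution of A x = b."""
        if tau is None:
            tau = 1.0 / np.linalg.norm(A, 2) ** 2   # step size below 2 / ||A||^2 ensures convergence
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = x + tau * (A.T @ (b - A @ x))       # gradient step on the squared residual
        return x
    ```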

  18. Robustness properties of hill-climbing algorithm based on Zernike modes for laser beam correction.

    PubMed

    Liu, Ying; Ma, Jianqiang; Chen, Junjie; Li, Baoqing; Chu, Jiaru

    2014-04-01

    A modified hill-climbing algorithm based on Zernike modes is used for laser beam correction. The algorithm adopts the Zernike mode coefficients, instead of the deformable mirror actuators' voltages used in a traditional hill-climbing algorithm, as the adjustable variables for optimizing the objective function. The effect of mismatches between the laser beam and the deformable mirror, in both aperture size and center position, was analyzed numerically and experimentally to test the robustness of the algorithm. Both simulation and experimental results show that the mismatches have almost no influence on the laser beam correction unless the laser beam exceeds the effective aperture of the deformable mirror, which indicates the good robustness of the algorithm.
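
    As a rough illustration of the idea only (not the experimental control loop itself), the sketch below hill-climbs over Zernike-mode coefficients against a generic merit function; the number of modes, step size, and the toy merit function are assumptions for demonstration.

      import numpy as np

      def hill_climb_zernike(merit, n_modes=10, step=0.05, n_iter=200, seed=0):
          # Perturb one Zernike coefficient at a time; keep the change only if
          # the merit function (e.g., focal-spot quality) improves.
          rng = np.random.default_rng(seed)
          coeffs = np.zeros(n_modes)
          best = merit(coeffs)
          for _ in range(n_iter):
              k = rng.integers(n_modes)                 # pick one mode to perturb
              trial = coeffs.copy()
              trial[k] += step * rng.choice([-1.0, 1.0])
              value = merit(trial)
              if value > best:                          # keep only improving moves
                  coeffs, best = trial, value
          return coeffs, best

      # Toy merit: maximal when the coefficients cancel an unknown target aberration.
      target = np.linspace(0.3, -0.2, 10)
      merit = lambda c: -np.sum((c - target) ** 2)
      c_opt, m_opt = hill_climb_zernike(merit)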

  19. Approximate string matching algorithms for limited-vocabulary OCR output correction

    NASA Astrophysics Data System (ADS)

    Lasko, Thomas A.; Hauser, Susan E.

    2000-12-01

    Five methods for matching words mistranslated by optical character recognition to their most likely match in a reference dictionary were tested on data from the archives of the National Library of Medicine. The methods, including an adaptation of the cross correlation algorithm, the generic edit distance algorithm, the edit distance algorithm with a probabilistic substitution matrix, Bayesian analysis, and Bayesian analysis on an actively thinned reference dictionary were implemented and their accuracy rates compared. Of the five, the Bayesian algorithm produced the most correct matches (87%), and had the advantage of producing scores that have a useful and practical interpretation.
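
    For reference, a minimal sketch of the generic edit-distance variant of dictionary matching; the toy dictionary and OCR word are invented examples, and the probabilistic substitution-matrix and Bayesian variants studied in the paper are not reproduced here.

      def edit_distance(a, b):
          # Classic dynamic-programming Levenshtein distance.
          prev = list(range(len(b) + 1))
          for i, ca in enumerate(a, 1):
              curr = [i]
              for j, cb in enumerate(b, 1):
                  curr.append(min(prev[j] + 1,                 # deletion
                                  curr[j - 1] + 1,             # insertion
                                  prev[j - 1] + (ca != cb)))   # substitution
              prev = curr
          return prev[-1]

      def best_match(word, dictionary):
          # Return the dictionary entry with the smallest edit distance to the OCR word.
          return min(dictionary, key=lambda w: edit_distance(word, w))

      print(best_match("rnedicine", ["medicine", "medline", "median"]))   # -> "medicine"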

  20. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the 'Extreme Learning Machine' Algorithm.

    PubMed

    McDonnell, Mark D; Tissera, Migel D; Vladusich, Tony; van Schaik, André; Tapson, Jonathan

    2015-01-01

    Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the 'Extreme Learning Machine' (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden unit operates only on a randomly sized and positioned patch of each image. This form of random 'receptive field' sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems.
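
    A compact sketch of the basic ELM step with random 'receptive field' masking follows; the hidden-layer size, patch size, ridge parameter, and random stand-in data are illustrative placeholders rather than the settings used in the paper.

      import numpy as np

      def train_elm(X, Y, n_hidden=500, patch=10, ridge=1e-3, img_side=28, seed=0):
          # ELM: random sparse input weights (each hidden unit sees one random image
          # patch), a fixed nonlinearity, then one least-squares solve for output weights.
          rng = np.random.default_rng(seed)
          d = X.shape[1]
          W = np.zeros((d, n_hidden))
          for j in range(n_hidden):
              r0 = rng.integers(0, img_side - patch)
              c0 = rng.integers(0, img_side - patch)
              mask = np.zeros((img_side, img_side))
              mask[r0:r0 + patch, c0:c0 + patch] = 1.0        # random receptive field
              W[:, j] = mask.ravel() * rng.standard_normal(d)
          H = np.tanh(X @ W)                                   # hidden-layer activations
          B = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ Y)
          return W, B

      def predict_elm(X, W, B):
          return np.tanh(X @ W) @ B

      # Toy usage: random data standing in for 28x28 images with one-hot labels.
      X = np.random.rand(200, 784)
      Y = np.eye(10)[np.random.randint(0, 10, 200)]
      W, B = train_elm(X, Y)
      pred = predict_elm(X, W, B).argmax(axis=1)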

  1. An Algorithm to Atmospherically Correct Visible and Thermal Airborne Imagery

    NASA Technical Reports Server (NTRS)

    Rickman, Doug L.; Luvall, Jeffrey C.; Schiller, Stephen; Arnold, James E. (Technical Monitor)

    2000-01-01

    The program Watts implements a system of physically based models developed by the authors, described elsewhere, for the removal of atmospheric effects in multispectral imagery. The band range we treat covers the visible, near IR and thermal IR. Input to the program begins with atmospheric models specifying transmittance and path radiance. The system also requires the sensor's spectral response curves and knowledge of the scanner's geometric definition. Radiometric characterization of the sensor during data acquisition is also necessary. While the authors contend that active calibration is critical for serious analytical efforts, we recognize that most remote sensing systems, either airborne or space borne, do not as yet attain that minimal level of sophistication. Therefore, Watts will also use semi-active calibration where necessary and available. All of the input is then reduced to common physical units. From this it is then practical to convert raw sensor readings into geophysically meaningful units. There are a large number of intricate details necessary to bring an algorithm of this type to fruition and even to use the program. Further, at this stage of development the authors are uncertain as to the optimal presentation or the minimal analytical techniques that users of this type of software must have. Therefore, Watts permits users to break out and analyze the input in various ways. Implemented in REXX under OS/2, the program is designed with attention to the probability that it will be ported to other systems and other languages. Further, as it is in REXX, it is relatively simple for anyone literate in any computer language to open the code and modify it to meet their needs. The authors have employed Watts in their research addressing precision agriculture and urban heat islands.

  2. A curvature filter and PDE based non-uniformity correction algorithm

    NASA Astrophysics Data System (ADS)

    Cheng, Kuanhong; Zhou, Huixin; Qin, Hanlin; Zhao, Dong; Qian, Kun; Rong, Shenghui; Yin, Shimin

    2016-10-01

    In this paper, a curvature filter and PDE based non-uniformity correction algorithm is proposed; the key point of this algorithm is the way the fixed-pattern noise (FPN) is estimated. We use anisotropic diffusion to smooth noise and a Gaussian curvature filter to extract the details of the original image. These two parts are then combined by a guided image filter, and the result is subtracted from the original image to obtain a crude approximation of the FPN. After that, a Temporal Low Pass Filter (TLPF) is utilized to filter out random noise and obtain an accurate FPN estimate. Finally, the FPN is subtracted from the original image to achieve non-uniformity correction. The performance of this algorithm is tested on two infrared image sequences, and the experimental results show that the proposed method achieves better non-uniformity correction performance.
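
    A greatly simplified sketch of the pipeline's structure is given below: a Gaussian filter stands in for the anisotropic-diffusion, curvature-filter and guided-filtering stages the paper actually uses, and a running exponential average plays the role of the temporal low-pass filter; all parameters and data are illustrative.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def correct_sequence(frames, spatial_sigma=3.0, alpha=0.1):
          # Per frame: estimate a crude FPN as (frame - spatially smoothed frame),
          # temporally low-pass that estimate, and subtract it from the frame.
          fpn = np.zeros_like(frames[0], dtype=float)
          corrected = []
          for f in frames:
              smooth = gaussian_filter(f.astype(float), spatial_sigma)   # scene estimate
              crude_fpn = f - smooth                                     # crude FPN guess
              fpn = (1 - alpha) * fpn + alpha * crude_fpn                # temporal low-pass
              corrected.append(f - fpn)
          return np.stack(corrected), fpn

      # Toy usage: a slowly moving ramp scene plus a fixed random pattern.
      t, h, w = 50, 64, 64
      scene = np.stack([np.tile(np.linspace(0, 1, w) + 0.01 * k, (h, 1)) for k in range(t)])
      pattern = 0.2 * np.random.default_rng(1).standard_normal((h, w))
      out, fpn_est = correct_sequence(scene + pattern)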

  3. Executive functioning correctly classified diagnoses in patients with first-episode psychosis: evidence from a 2-year longitudinal study.

    PubMed

    Peña, Javier; Ojeda, Natalia; Segarra, Rafael; Eguiluz, Jose Ignacio; García, Jon; Gutiérrez, Miguel

    2011-03-01

    Few studies have analysed factors that predict the ultimate clinical diagnosis in first-episode psychosis (FEP), and none has included cognitive factors. Eighty-six FEP patients and 34 healthy controls were recruited and followed up for two years. Positive and negative symptoms, depression, mania, duration of untreated psychosis (DUP), premorbid functioning, functional outcome and neurocognition were assessed over 2 years. Logistic regression models revealed that Wisconsin Card Sorting Test correctly distinguished the patients ultimately diagnosed with schizophrenia (87%) from those with bipolar disorder (80%) and those with other psychoses (85%), for an overall correct-diagnosis rate of 84.4%. The prediction was stable despite the inclusion of clinical and affective symptoms, DUP, clinical impression, and functional outcome scores. Results highlight the importance of reconsidering neurocognition as a diagnostic criterion for psychosis and schizophrenia.

  4. Atmospheric correction algorithm for hyperspectral remote sensing of ocean color from space.

    PubMed

    Gao, B C; Montes, M J; Ahmad, Z; Davis, C O

    2000-02-20

    Existing atmospheric correction algorithms for multichannel remote sensing of ocean color from space were designed for retrieving water-leaving radiances in the visible over clear deep ocean areas and cannot easily be modified for retrievals over turbid coastal waters. We have developed an atmospheric correction algorithm for hyperspectral remote sensing of ocean color with the near-future Coastal Ocean Imaging Spectrometer. The algorithm uses look-up tables generated with a vector radiative transfer code. Aerosol parameters are determined by a spectrum-matching technique that uses channels located at wavelengths longer than 0.86 μm. The aerosol information is extrapolated back to the visible, based on aerosol models, during the retrieval of water-leaving radiances. Quite reasonable water-leaving radiances have been obtained when our algorithm was applied to process hyperspectral imaging data acquired with an airborne imaging spectrometer.

  5. Improving the efficacy of ERP-based BCIs using different modalities of covert visuospatial attention and a genetic algorithm-based classifier.

    PubMed

    Marchetti, Mauro; Onorati, Francesco; Matteucci, Matteo; Mainardi, Luca; Piccione, Francesco; Silvoni, Stefano; Priftis, Konstantinos

    2013-01-01

    We investigated whether the covert orienting of visuospatial attention can be effectively used in a brain-computer interface guided by event-related potentials. Three visual interfaces were tested: one interface that activated voluntary orienting of visuospatial attention and two interfaces that elicited automatic orienting of visuospatial attention. We used two epoch classification procedures. The online epoch classification was performed via Independent Component Analysis, and then it was followed by fixed features extraction and support vector machines classification. The offline epoch classification was performed by means of a genetic algorithm that permitted us to retrieve the relevant features of the signal, and then to categorise the features with a logistic classifier. The offline classification, but not the online one, allowed us to differentiate between the performances of the interface that required voluntary orienting of visuospatial attention and those that required automatic orienting of visuospatial attention. The offline classification revealed an advantage of the participants in using the "voluntary" interface. This advantage was further supported, for the first time, by neurophysiological data. Moreover, epoch analysis was performed better with the "genetic algorithm classifier" than with the "independent component analysis classifier". We suggest that the combined use of voluntary orienting of visuospatial attention and of a classifier that permits feature extraction ad personam (i.e., genetic algorithm classifier) can lead to a more efficient control of visual BCIs.

  6. Assessment, Validation, and Refinement of the Atmospheric Correction Algorithm for the Ocean Color Sensors. Chapter 19

    NASA Technical Reports Server (NTRS)

    Wang, Menghua

    2003-01-01

    The primary focus of this proposed research is atmospheric correction algorithm evaluation and development, and satellite sensor calibration and characterization. It is well known that the atmospheric correction, which removes more than 90% of the sensor-measured signal contributed by the atmosphere in the visible, is the key procedure in ocean color remote sensing (Gordon and Wang, 1994). The accuracy and effectiveness of the atmospheric correction directly affect the remotely retrieved ocean bio-optical products. On the other hand, for ocean color remote sensing, in order to obtain the required accuracy in the water-leaving signals derived from satellite measurements, an on-orbit vicarious calibration of the whole system, i.e., sensor and algorithms, is necessary. In addition, it is important to address issues of (i) cross-calibration of two or more sensors and (ii) in-orbit vicarious calibration of the sensor-atmosphere system. The goal of these research efforts is to develop methods for meaningful comparison and possible merging of data products from multiple ocean color missions. In the past year, much effort has been devoted to (a) understanding and correcting the artifacts that appeared in the SeaWiFS-derived ocean and atmospheric products; (b) developing an efficient method for generating the SeaWiFS aerosol lookup tables; (c) evaluating the effects of calibration error in the near-infrared (NIR) band on the atmospheric correction of ocean color remote sensors; (d) comparing the aerosol correction algorithm using the single-scattering epsilon (the current SeaWiFS algorithm) vs. the multiple-scattering epsilon method; and (e) continuing activities for the International Ocean-Color Coordinating Group (IOCCG) atmospheric correction working group. In this report, I briefly present and discuss these and some other research activities.

  7. Simplified Decoding of Trellis-Based Error-Correcting Modulation Codes Using the M-Algorithm for Holographic Data Storage

    NASA Astrophysics Data System (ADS)

    Kim, Jinyoung; Lee, Jaejin

    2012-08-01

    In this paper, we investigate a simplified decoding method for the trellis-based error-correcting modulation codes using the M-algorithm for holographic data storage. The M-algorithm, which sacrifices the bit error rate performance, can reduce the Viterbi algorithm's complexity. When the M-algorithm is used in the trellis-based error-correcting modulation codes, common delay and complexity problems can be reduced.
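
    A generic sketch of the M-algorithm pruning step on a toy two-state trellis follows (the trellis and branch metric are invented for illustration and are not the holographic modulation code from the paper): at each stage only the M best partial paths are extended, instead of all trellis states as in the Viterbi algorithm.

      def m_algorithm_decode(received, branch_metric, next_state, M=4):
          # Each survivor is (accumulated metric, current state, decoded bits so far).
          survivors = [(0.0, 0, [])]
          for r in received:
              candidates = []
              for metric, state, bits in survivors:
                  for bit in (0, 1):
                      candidates.append((metric + branch_metric(r, state, bit),
                                         next_state(state, bit), bits + [bit]))
              # Prune: keep only the M lowest-metric partial paths (the M-algorithm step).
              survivors = sorted(candidates, key=lambda c: c[0])[:M]
          return min(survivors, key=lambda s: s[0])[2]

      # Toy 2-state trellis: the transmitted symbol is the input bit XOR the state.
      next_state = lambda state, bit: bit
      branch_metric = lambda r, state, bit: (r - (bit ^ state)) ** 2
      decoded = m_algorithm_decode([0.9, 0.1, 1.2, -0.1], branch_metric, next_state, M=2)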

  8. A FORTRAN algorithm for correcting normal resistivity logs for borehole diameter and mud resistivity

    USGS Publications Warehouse

    Scott, James Henry

    1978-01-01

    The FORTRAN algorithm described in this report was developed for applying corrections to normal resistivity logs of any electrode spacing for the effects of drilling mud of known resistivity in boreholes of variable diameter. The corrections are based on Schlumberger departure curves that are applicable to normal logs made with a standard Schlumberger electric logging probe with an electrode diameter of 8.5 cm (3.35 in). The FORTRAN algorithm has been generalized to accommodate logs made with other probes with different electrode diameters. Two simplifying assumptions used by Schlumberger in developing the departure curves also apply to the algorithm: (1) bed thickness is assumed to be infinite (at least 10 times larger than the electrode spacing), and (2) invasion of drilling mud into the formation is assumed to be negligible. * The use of a trade name does not necessarily constitute endorsement by the U.S. Geological Survey.

  9. Cardamine occulta, the correct species name for invasive Asian plants previously classified as C. flexuosa, and its occurrence in Europe

    PubMed Central

    Marhold, Karol; Šlenker, Marek; Kudoh, Hiroshi; Zozomová-Lihová, Judita

    2016-01-01

    Abstract The nomenclature of Eastern Asian populations traditionally assigned to Cardamine flexuosa has remained unresolved since 2006, when they were found to be distinct from the European species Cardamine flexuosa. Apart from the informal designation “Asian Cardamine flexuosa”, this taxon has also been reported under the names Cardamine flexuosa subsp. debilis or Cardamine hamiltonii. Here we determine its correct species name to be Cardamine occulta and present a nomenclatural survey of all relevant species names. A lectotype and epitype for Cardamine occulta and a neotype for the illegitimate name Cardamine debilis (replaced by Cardamine flexuosa subsp. debilis and Cardamine hamiltonii) are designated here. Cardamine occulta is a polyploid weed that most likely originated in Eastern Asia, but it has also been introduced to other continents, including Europe. Here data is presented on the first records of this invasive species in European countries. The first known record for Europe was made in Spain in 1993, and since then its occurrence has been reported from a number of European countries and regions as growing in irrigated anthropogenic habitats, such as paddy fields or flower beds, and exceptionally also in natural communities such as lake shores. PMID:27212882

  10. Evaluation of two Vaisala RS92 radiosonde solar radiative dry bias correction algorithms

    DOE PAGES

    Dzambo, Andrew M.; Turner, David D.; Mlawer, Eli J.

    2016-04-12

    Solar heating of the relative humidity (RH) probe on Vaisala RS92 radiosondes results in a large dry bias in the upper troposphere. Two different algorithms (Miloshevich et al., 2009, MILO hereafter; and Wang et al., 2013, WANG hereafter) have been designed to account for this solar radiative dry bias (SRDB). These corrections are markedly different, with MILO adding up to 40 % more moisture to the original radiosonde profile than WANG; however, the impact of the two algorithms varies with height. The accuracy of these two algorithms is evaluated using three different approaches: a comparison of precipitable water vapor (PWV), downwelling radiative closure with a surface-based microwave radiometer at a high-altitude site (5.3 km m.s.l.), and upwelling radiative closure with the space-based Atmospheric Infrared Sounder (AIRS). The PWV computed from the uncorrected and corrected RH data is compared against PWV retrieved from ground-based microwave radiometers at tropical, midlatitude, and arctic sites. Although MILO generally adds more moisture to the original radiosonde profile in the upper troposphere compared to WANG, both corrections yield similar changes to the PWV, and the corrected data agree well with the ground-based retrievals. The two closure activities – done for clear-sky scenes – use the radiative transfer models MonoRTM and LBLRTM to compute radiance from the radiosonde profiles to compare against spectral observations. Both WANG- and MILO-corrected RHs are statistically better than the original RH in all cases except for the driest 30 % of cases in the downwelling experiment, where both algorithms add too much water vapor to the original profile. In the upwelling experiment, the RH correction applied by the WANG vs. MILO algorithm is statistically different above 10 km for the driest 30 % of cases and above 8 km for the moistest 30 % of cases, suggesting that the MILO correction performs better than the WANG correction in clear-sky scenes.

  11. A survey of the baseline correction algorithms for real-time spectroscopy processing

    NASA Astrophysics Data System (ADS)

    Liu, Yuanjie; Yu, Yude

    2016-11-01

    In spectroscopy data analysis, such as Raman spectroscopy, X-ray diffraction, and fluorescence, baseline drift is a ubiquitous issue. In high-speed testing, which generates huge amounts of data, an automatic baseline correction method is very important for efficient data processing. We survey the algorithms from the classical Shirley background to state-of-the-art methods to present a summary of this specific field. Both the advantages and defects of each algorithm are scrutinized. To compare the algorithms with each other, experiments are also carried out under an SVM gap gain criterion to show the performance quantitatively. Finally, a ranking table of these methods is built and suggestions for the practical choice of adequate algorithms are provided in this paper.

  12. Respiratory motion correction in 3-D PET data with advanced optical flow algorithms.

    PubMed

    Dawood, Mohammad; Buther, Florian; Jiang, Xiaoyi; Schafers, Klaus P

    2008-08-01

    The problem of motion is well known in positron emission tomography (PET) studies. PET images are formed over an extended period of time. As patients cannot hold their breath during the PET acquisition, spatial blurring and motion artifacts are the natural result. These may lead to incorrect quantification of the radioactive uptake. We present a solution to this problem by respiratory-gating the PET data and correcting the PET images for motion with optical flow algorithms. The algorithm is based on the combined local and global optical flow algorithm with modifications to allow for discontinuity preservation across organ boundaries and for application to 3-D volume sets. The superiority of the algorithm over previous work is demonstrated on software phantom and real patient data.

  13. Development of a decision tree to classify the most accurate tissue-specific tissue to plasma partition coefficient algorithm for a given compound.

    PubMed

    Yun, Yejin Esther; Cotton, Cecilia A; Edginton, Andrea N

    2014-02-01

    Physiologically based pharmacokinetic (PBPK) modeling is a tool used in drug discovery and human health risk assessment. PBPK models are mathematical representations of the anatomy, physiology and biochemistry of an organism and are used to predict a drug's pharmacokinetics in various situations. Tissue to plasma partition coefficients (Kp), key PBPK model parameters, define the steady-state concentration differential between tissue and plasma and are used to predict the volume of distribution. The experimental determination of these parameters once limited the development of PBPK models; however, in silico prediction methods were introduced to overcome this issue. The developed algorithms vary in input parameters and prediction accuracy, and none are considered standard, warranting further research. In this study, a novel decision-tree-based Kp prediction method was developed using six previously published algorithms. The aim of the developed classifier was to identify the most accurate tissue-specific Kp prediction algorithm for a new drug. A dataset consisting of 122 drugs was used to train the classifier and identify the most accurate Kp prediction algorithm for a certain physicochemical space. Three versions of tissue-specific classifiers were developed and were dependent on the necessary inputs. The use of the classifier resulted in a better prediction accuracy than that of any single Kp prediction algorithm for all tissues, the current mode of use in PBPK model building. Because built-in estimation equations for those input parameters are not necessarily available, this Kp prediction tool will provide Kp prediction when only limited input parameters are available. The presented innovative method will improve tissue distribution prediction accuracy, thus enhancing the confidence in PBPK modeling outputs.
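
    Schematically, the classifier's role can be sketched as below; the descriptors, labels, and candidate Kp functions are invented placeholders, not the study's data or the six published algorithms.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      # Placeholder physicochemical descriptors (e.g., logP, fraction unbound, pKa).
      rng = np.random.default_rng(0)
      X_train = rng.random((122, 3))
      # Placeholder label: index of the Kp algorithm that was most accurate for each drug.
      y_train = rng.integers(0, 6, 122)

      tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)

      def predict_kp(new_drug_descriptors, kp_algorithms):
          # Pick the Kp algorithm the tree deems most accurate for this drug, then apply it.
          best = int(tree.predict(np.atleast_2d(new_drug_descriptors))[0])
          return kp_algorithms[best](new_drug_descriptors)

      # Toy candidate algorithms (stand-ins for the six published prediction methods).
      kp_algorithms = {i: (lambda d, i=i: 0.5 + 0.1 * i) for i in range(6)}
      kp_value = predict_kp([0.2, 0.5, 0.8], kp_algorithms)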

  14. Application and assessment of a robust elastic motion correction algorithm to dynamic MRI.

    PubMed

    Herrmann, K-H; Wurdinger, S; Fischer, D R; Krumbein, I; Schmitt, M; Hermosillo, G; Chaudhuri, K; Krishnan, A; Salganicoff, M; Kaiser, W A; Reichenbach, J R

    2007-01-01

    The purpose of this study was to assess the performance of a new motion correction algorithm. Twenty-five dynamic MR mammography (MRM) data sets and 25 contrast-enhanced three-dimensional peripheral MR angiographic (MRA) data sets affected by patient motion of varying severity were selected retrospectively from routine examinations. Anonymized data were registered by a new experimental elastic motion correction algorithm. The algorithm works by computing a similarity measure for the two volumes that takes into account expected signal changes due to the presence of a contrast agent while penalizing other signal changes caused by patient motion. A conjugate gradient method is used to find the set of motion parameters that maximizes the similarity measure across the entire volume. Images before and after correction were visually evaluated and scored by experienced radiologists with respect to reduction of motion, improvement of image quality, disappearance of existing lesions, and creation of artifactual lesions. It was found that the correction improves image quality (76% for MRM and 96% for MRA) and diagnosability (60% for MRM and 96% for MRA).

  15. Regression algorithm correcting for partial volume effects in arterial spin labeling MRI.

    PubMed

    Asllani, Iris; Borogovac, Ajna; Brown, Truman R

    2008-12-01

    Partial volume effects (PVE) are a consequence of limited spatial resolution in brain imaging. In arterial spin labeling (ASL) MRI, the problem is exacerbated by the nonlinear dependency of the ASL signal on magnetization contributions from each tissue within an imaged voxel. We have developed an algorithm that corrects for PVE in ASL imaging. The algorithm is based on a model that represents the voxel intensity as a weighted sum of pure tissue contributions, where the weighting coefficients are the tissues' fractional volumes in the voxel. Using this algorithm, we were able to estimate cerebral blood flow (CBF) for gray matter (GM) and white matter (WM) independently. The average voxelwise ratio of GM to WM CBF was approximately 3.2, in good agreement with reports in the literature. As proof of concept, data from the PVE-corrected method were compared with those from the conventional, PVE-uncorrected method. As hypothesized, the two yielded similar CBF values for voxels containing >95% GM and differed in proportion to the voxels' heterogeneity. More importantly, the GM CBF assessed with the PVE-corrected method was independent of the voxels' heterogeneity, implying that the estimation of flow was unaffected by PVE. An example of the application of this algorithm to motor-activation data is also given.
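
    A minimal sketch of the underlying linear model follows: each voxel's ASL signal is treated as the GM and WM fractional volumes times the pure-tissue signals, and the pure-tissue contributions are recovered by least squares over a set of voxels. The variable names and toy data are illustrative, not the study's regression kernel.

      import numpy as np

      def pve_corrected_signals(signal, gm_frac, wm_frac):
          # Solve signal_i ≈ gm_frac_i * S_gm + wm_frac_i * S_wm for the
          # pure-tissue signals S_gm, S_wm by ordinary least squares.
          A = np.column_stack([gm_frac, wm_frac])        # fractional-volume weights
          coeffs, *_ = np.linalg.lstsq(A, signal, rcond=None)
          s_gm, s_wm = coeffs
          return s_gm, s_wm

      # Toy neighborhood of voxels: true GM signal 60, WM signal 20 (arbitrary units).
      rng = np.random.default_rng(0)
      gm = rng.random(50)
      wm = 1.0 - gm
      sig = 60.0 * gm + 20.0 * wm + rng.standard_normal(50)
      print(pve_corrected_signals(sig, gm, wm))   # approximately (60, 20)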

  16. A Comparative Dosimetric Analysis of the Effect of Heterogeneity Corrections Used in Three Treatment Planning Algorithms

    NASA Astrophysics Data System (ADS)

    Herrick, Andrea Celeste

    Successful treatment in radiation oncology relies on the evaluation of a plan for each individual patient based on delivering the maximum dose to the tumor while sparing the surrounding normal tissue (organs at risk) in the patient. Organs at risk (OAR) typically considered include the heart, the spinal cord, healthy lung tissue, and any other organ in the vicinity of the target that is not affected by the disease being treated. Depending on the location of the tumor and its proximity to these OARs, several plans may be created and evaluated in order to assess which "solution" most closely meets all of the specified criteria. In order to successfully review a treatment plan and take the correct course of action, a physician needs to rely on the computer model (treatment planning algorithm) of the dose distribution reconstructed from CT scan data to proceed with the plan that best achieves all of the goals. There are many available treatment planning systems from which a Radiation Oncology center can choose. While the radiation interactions considered are identical among clinics, the way the chosen algorithm handles these interactions can vary immensely. The goal of this study was to provide a comparison between two commonly used treatment planning systems (Pinnacle and Eclipse) and their associated dose calculation algorithms. In order to do this, heterogeneity correction models were evaluated via test plans, and the effects of going from a heterogeneity-uncorrected patient representation to a heterogeneity-corrected representation were studied. The results of this study indicate that the actual dose delivered to the patient varies greatly between treatment planning algorithms in areas of low-density tissue such as the lungs. Although treatment planning algorithms are attempting to arrive at the same result with heterogeneity corrections, the reality is that the results depend strongly on the algorithm used in the situations studied. While the Anisotropic Analytic Method

  17. Image nonlinearity and non-uniformity corrections using Papoulis - Gerchberg algorithm in gamma imaging systems

    NASA Astrophysics Data System (ADS)

    Shemer, A.; Schwarz, A.; Gur, E.; Cohen, E.; Zalevsky, Z.

    2015-04-01

    In this paper, the authors describe a novel technique for image nonlinearity and non-uniformity corrections in imaging systems based on gamma detectors. Limitations of the gamma detector prevent the production of high-quality images of the radionuclide distribution and cause nonlinearity and non-uniformity distortions in the obtained image. Many techniques have been developed to correct or compensate for these image artifacts using complex calibration processes. The presented method is based on the Papoulis-Gerchberg (PG) iterative algorithm and works without the need for detector calibration, a tuning process, or any special test phantom.
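
    For orientation, a generic 1D sketch of the Papoulis-Gerchberg iteration the method builds on (the band limit, signal, and sampling mask are illustrative assumptions): known samples are re-imposed in the signal domain and a band limit is enforced in the Fourier domain, alternately, to fill in missing or distorted samples.

      import numpy as np

      def papoulis_gerchberg(samples, known_mask, band, n_iter=200):
          # Alternate between enforcing the band limit and the known samples.
          x = np.where(known_mask, samples, 0.0)
          for _ in range(n_iter):
              X = np.fft.fft(x)
              X[band:-band] = 0.0                     # enforce band limit (low-pass)
              x = np.fft.ifft(X).real
              x[known_mask] = samples[known_mask]     # re-impose known samples
          return x

      # Toy usage: recover a band-limited signal with roughly 40% of samples missing.
      n = 256
      t = np.arange(n)
      true = np.cos(2 * np.pi * 3 * t / n) + 0.5 * np.sin(2 * np.pi * 5 * t / n)
      mask = np.random.default_rng(0).random(n) > 0.4
      recovered = papoulis_gerchberg(np.where(mask, true, 0.0), mask, band=8)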

  18. A residual range cell migration correction algorithm for bistatic forward-looking SAR

    NASA Astrophysics Data System (ADS)

    Pu, Wei; Huang, Yulin; Wu, Junjie; Yang, Jianyu; Li, Wenchao

    2016-12-01

    For bistatic forward-looking synthetic aperture radar (BFSAR), images are often blurred by uncompensated radar motion errors. To get refocused images, autofocus is a useful postprocessing technique. However, a severe drawback of the autofocus algorithms is that they are only capable of removing one-dimensional azimuth phase errors. In BFSAR, motion errors and approximations of imaging algorithms introduce residual range cell migration (RCM) on BFSAR data as well. When residual RCM is within a range resolution cell, it can be neglected. However, the residual migration, which exceeds a range cell, is increasingly encountered as resolution becomes finer and finer. A novel residual RCM correction method is proposed in this paper. By fitting the low-frequency phase difference of adjacent azimuth cells, residual RCM of each azimuth cell can be corrected precisely and effectively. Simulations and real data experiments are carried out to validate the effectiveness of the proposed method.

  19. A fast beam hardening correction method incorporated in a filtered back-projection based MAP algorithm

    NASA Astrophysics Data System (ADS)

    Luo, Shouhua; Wu, Huazhen; Sun, Yi; Li, Jing; Li, Guang; Gu, Ning

    2017-03-01

    The beam hardening effect can induce strong artifacts in CT images, which result in severely deteriorated image quality with incorrect intensities (CT numbers). This paper develops an effective and efficient beam hardening correction algorithm incorporated in a filtered back-projection based maximum a posteriori (BHC-FMAP). In the proposed algorithm, the beam hardening effect is modeled and incorporated into the forward-projection of the MAP to suppress beam hardening induced artifacts, and the image update process is performed by Feldkamp–Davis–Kress method based back-projection to speed up the convergence. The proposed BHC-FMAP approach does not require information about the beam spectrum or the material properties, or any additional segmentation operation. The proposed method was qualitatively and quantitatively evaluated using both phantom and animal projection data. The experimental results demonstrate that the BHC-FMAP method can efficiently provide a good correction of beam hardening induced artefacts.

  20. A fast beam hardening correction method incorporated in a filtered back-projection based MAP algorithm.

    PubMed

    Luo, Shouhua; Wu, Huazhen; Sun, Yi; Li, Jing; Li, Guang; Gu, Ning

    2017-03-07

    The beam hardening effect can induce strong artifacts in CT images, which result in severely deteriorated image quality with incorrect intensities (CT numbers). This paper develops an effective and efficient beam hardening correction algorithm incorporated in a filtered back-projection based maximum a posteriori (BHC-FMAP). In the proposed algorithm, the beam hardening effect is modeled and incorporated into the forward-projection of the MAP to suppress beam hardening induced artifacts, and the image update process is performed by Feldkamp-Davis-Kress method based back-projection to speed up the convergence. The proposed BHC-FMAP approach does not require information about the beam spectrum or the material properties, or any additional segmentation operation. The proposed method was qualitatively and quantitatively evaluated using both phantom and animal projection data. The experimental results demonstrate that the BHC-FMAP method can efficiently provide a good correction of beam hardening induced artefacts.

  1. Closed Loop, DM Diversity-based, Wavefront Correction Algorithm for High Contrast Imaging Systems

    NASA Technical Reports Server (NTRS)

    Give'on, Amir; Belikov, Ruslan; Shaklan, Stuart; Kasdin, Jeremy

    2007-01-01

    High contrast imaging from space relies on coronagraphs to limit diffraction and a wavefront control system to compensate for imperfections in both the telescope optics and the coronagraph. The extreme contrast required (up to 10(exp -10) for terrestrial planets) puts severe requirements on the wavefront control system, as the achievable contrast is limited by the quality of the wavefront. This paper presents a general closed-loop correction algorithm for high contrast imaging coronagraphs that minimizes the energy in a predefined region of the image where terrestrial planets could be found. The estimation part of the algorithm reconstructs the complex field in the image plane using phase diversity caused by the deformable mirror. This method has been shown to achieve faster and better correction than classical speckle nulling.

  2. Performance evaluation of operational atmospheric correction algorithms over the East China Seas

    NASA Astrophysics Data System (ADS)

    He, Shuangyan; He, Mingxia; Fischer, Jürgen

    2017-01-01

    To acquire high-quality operational data products for Chinese in-orbit and scheduled ocean color sensors, the performances of two operational atmospheric correction (AC) algorithms (ESA MEGS 7.4.1 and NASA SeaDAS 6.1) were evaluated over the East China Seas (ECS) using MERIS data. The spectral remote sensing reflectance Rrs(λ), aerosol optical thickness (AOT), and Ångström exponent (α) retrieved using the two algorithms were validated against in situ measurements obtained between May 2002 and October 2009. Match-ups of Rrs, AOT, and α between the in situ and MERIS data were obtained through strict exclusion criteria. Statistical analysis of Rrs(λ) showed a mean percentage difference (MPD) of 9%-13% in the 490-560 nm spectral range, and significant overestimation was observed at 413 nm (MPD>72%). The AOTs were overestimated (MPD>32%), and although the ESA algorithm outperformed the NASA algorithm in the blue-green bands, the situation was reversed in the red and near-infrared bands. The value of α was clearly underestimated by the ESA algorithm (MPD=41%), less so by the NASA algorithm (MPD=35%). To clarify why the NASA algorithm performed better in the retrieval of α, density scatter plots of α versus single-scattering albedo (SSA) were prepared. These α-SSA density scatter plots showed that the aerosol models used by the NASA algorithm are more applicable over the ECS than those used by the ESA algorithm, although neither aerosol model is suitable for the ECS region. The results of this study provide a reference to both data users and data agencies regarding the use of operational data products and the improvement of current AC schemes over the ECS.

  3. Comparison and evaluation of atmospheric correction algorithms of QUAC, DOS, and FLAASH for HICO hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Shi, Liangliang; Mao, Zhihua; Chen, Peng; Han, Sha'ou; Gong, Fang; Zhu, Qiankun

    2016-10-01

    In order to obtain the spectral information of objects and to improve the accuracy of quantitative parameters retrieved from remotely sensed data on land or over water bodies, atmospheric correction is a vital step; it is also a prerequisite for hyperspectral imagery data analysis. On the basis of previous studies, atmospheric correction algorithms can be divided into two categories: image-based empirical methods and model-based correction methods. The Quick Atmospheric Correction (QUAC) and Dark Object Subtraction (DOS) methods belong to the empirical or semiempirical category, whereas the Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) method was developed from a radiative transfer model. In this paper, we evaluated the performance of QUAC, DOS, and MODTRAN as integrated in FLAASH on a Hyperspectral Imager for the Coastal Ocean (HICO) scene of 16 Nov 2013, and compared the results of these correction methods with in situ data. The results indicate that the FLAASH model performs much better than DOS and QUAC in atmospheric correction of HICO hyperspectral imagery, although the DOS and QUAC methods are easier to apply and do not require complex input parameters.
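
    As a point of reference for the simplest of the three methods, a bare-bones dark object subtraction sketch follows; the percentile used to define the "dark object" and the toy data cube are assumptions, not the HICO processing chain.

      import numpy as np

      def dark_object_subtraction(radiance_cube, dark_percentile=0.1):
          # Per band: estimate the path radiance as the radiance of the darkest pixels
          # and subtract it from the whole band (simplest image-based correction).
          bands = radiance_cube.shape[0]
          corrected = np.empty_like(radiance_cube, dtype=float)
          for b in range(bands):
              dark = np.percentile(radiance_cube[b], dark_percentile)   # "dark object" level
              corrected[b] = np.clip(radiance_cube[b] - dark, 0, None)
          return corrected

      # Toy usage on a random (bands, rows, cols) cube standing in for a scene;
      # the +0.2 mimics an additive path-radiance offset.
      cube = np.random.rand(10, 64, 64) + 0.2
      surface = dark_object_subtraction(cube)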

  4. The Design of Flux-Corrected Transport (FCT) Algorithms for Structured Grids

    NASA Astrophysics Data System (ADS)

    Zalesak, Steven T.

    A given flux-corrected transport (FCT) algorithm consists of three components: (1) a high order algorithm to which it reduces in smooth parts of the flow; (2) a low order algorithm to which it reduces in parts of the flow devoid of smoothness; and (3) a flux limiter which calculates the weights assigned to the high and low order fluxes in various regions of the flow field. One way of optimizing an FCT algorithm is to optimize each of these three components individually. We present some of the ideas that have been developed over the past 30 years toward this end. These include the use of very high order spatial operators in the design of the high order fluxes, non-clipping flux limiters, the appropriate choice of constraint variables in the critical flux-limiting step, and the implementation of a "failsafe" flux-limiting strategy. This chapter confines itself to the design of FCT algorithms for structured grids, using a finite volume formalism, for this is the area with which the present author is most familiar. The reader will find excellent material on the design of FCT algorithms for unstructured grids, using both finite volume and finite element formalisms, in the chapters by Professors Löhner, Baum, Kuzmin, Turek, and Möller in the present volume.

  5. Reconstruction algorithm for polychromatic CT imaging: application to beam hardening correction

    NASA Technical Reports Server (NTRS)

    Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.

    2000-01-01

    This paper presents a new reconstruction algorithm for both single- and dual-energy computed tomography (CT) imaging. By incorporating the polychromatic characteristics of the X-ray beam into the reconstruction process, the algorithm is capable of eliminating beam hardening artifacts. The single energy version of the algorithm assumes that each voxel in the scan field can be expressed as a mixture of two known substances, for example, a mixture of trabecular bone and marrow, or a mixture of fat and flesh. These assumptions are easily satisfied in a quantitative computed tomography (QCT) setting. We have compared our algorithm to three commonly used single-energy correction techniques. Experimental results show that our algorithm is much more robust and accurate. We have also shown that QCT measurements obtained using our algorithm are five times more accurate than that from current QCT systems (using calibration). The dual-energy mode does not require any prior knowledge of the object in the scan field, and can be used to estimate the attenuation coefficient function of unknown materials. We have tested the dual-energy setup to obtain an accurate estimate for the attenuation coefficient function of K2 HPO4 solution.

  6. Correcting encoder interpolation error on the Green Bank Telescope using an iterative model based identification algorithm

    NASA Astrophysics Data System (ADS)

    Franke, Timothy; Weadon, Tim; Ford, John; Garcia-Sanz, Mario

    2015-10-01

    Various forms of measurement error limit telescope tracking performance in practice. A new method for identifying the correction coefficients for encoder interpolation error is developed. The algorithm corrects the encoder measurement by identifying a harmonic model of the system and using that model to compute the necessary correction parameters. The approach improves upon others by explicitly modeling the unknown dynamics of the structure and controller and by not requiring a separate system identification to be performed. Experience gained from pin-pointing the source of encoder error on the Green Bank Radio Telescope (GBT) is presented. Several tell-tale indicators of encoder error are discussed. Experimental data from the telescope, tested with two different encoders, are presented. Demonstration of the identification methodology on the GBT as well as details of its implementation are discussed. The root mean square tracking error was reduced from 0.68 arc sec to 0.21 arc sec by changing encoders, and further to 0.10 arc sec with the calibration algorithm. In particular, the ubiquity of this error source is shown, and it is demonstrated how, by careful correction, it is possible to go beyond the advertised accuracy of an encoder.
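
    A hedged sketch of the harmonic-model idea is given below: the interpolation error is modeled as a sum of sinusoids of the encoder reading, the coefficients are found by linear least squares, and the fitted model is subtracted from subsequent measurements. The number of harmonics, the line period, and the synthetic error are illustrative assumptions, not the paper's identified model.

      import numpy as np

      def fit_harmonic_error(angle, error, n_harmonics=3, period=2 * np.pi / 1024):
          # Least-squares fit of error(angle) ~ sum_k a_k sin(k*w*angle) + b_k cos(k*w*angle).
          omega = 2 * np.pi / period
          cols = []
          for k in range(1, n_harmonics + 1):
              cols += [np.sin(k * omega * angle), np.cos(k * omega * angle)]
          A = np.column_stack(cols)
          coeffs, *_ = np.linalg.lstsq(A, error, rcond=None)

          def correction(theta):
              c = [f(k * omega * theta) for k in range(1, n_harmonics + 1)
                   for f in (np.sin, np.cos)]
              return np.column_stack(c) @ coeffs
          return correction

      # Toy usage: synthesize an interpolation error with two harmonics and remove it.
      theta = np.linspace(0, 0.1, 2000)
      err = 1e-4 * np.sin(1024 * theta) + 5e-5 * np.cos(2 * 1024 * theta)
      corr = fit_harmonic_error(theta, err)
      residual = err - corr(theta)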

  7. A baseline correction algorithm for Raman spectroscopy by adaptive knots B-spline

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Fan, Xian-guang; Xu, Ying-jie; Wang, Xiu-fen; He, Hao; Zuo, Yong

    2015-11-01

    The Raman spectroscopy technique is a powerful and non-invasive method for molecular fingerprint detection, which has been widely used in many areas, such as food safety, drug safety, and environmental testing. However, Raman signals can easily be corrupted by a fluorescent background; therefore, we present a baseline correction algorithm to suppress the fluorescent background in this paper. In this algorithm, the background of the Raman signal is suppressed by fitting a curve called a baseline using a cyclic approximation method. Instead of the traditional polynomial fitting, we use the B-spline as the fitting algorithm due to its advantages of low order and smoothness, which can avoid under-fitting and over-fitting effectively. In addition, we also present an automatic adaptive knot generation method to replace traditional uniform knots. This algorithm achieves the desired performance for most Raman spectra with varying baselines without any user input or preprocessing step. In the simulation, three kinds of fluorescent background lines were introduced to test the effectiveness of the proposed method. We show that two real Raman spectra (parathion-methyl and colza oil) can be detected and their baselines corrected by the proposed method.
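
    A simplified sketch of cyclic baseline fitting with a low-order spline follows; uniform interior knots are used here, so the paper's adaptive knot placement is not reproduced, and the toy spectrum is invented. Points above the current fit are clipped to it at each pass so the spline settles onto the slowly varying background beneath the peaks.

      import numpy as np
      from scipy.interpolate import LSQUnivariateSpline

      def spline_baseline(x, y, n_knots=8, n_iter=10):
          # Cyclic approximation: fit a cubic B-spline, clip the signal to the fit,
          # and refit, so peaks are progressively excluded from the baseline.
          knots = np.linspace(x[1], x[-2], n_knots)     # interior knots (uniform here)
          work = y.copy()
          for _ in range(n_iter):
              spline = LSQUnivariateSpline(x, work, knots, k=3)
              baseline = spline(x)
              work = np.minimum(work, baseline)          # suppress points above the fit
          return baseline

      # Toy Raman-like spectrum: broad fluorescent background plus two narrow peaks.
      x = np.linspace(0, 1, 500)
      background = 2 + 3 * x - 2 * x ** 2
      peaks = np.exp(-((x - 0.3) / 0.01) ** 2) + 0.8 * np.exp(-((x - 0.7) / 0.01) ** 2)
      y = background + peaks
      corrected = y - spline_baseline(x, y)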

  8. Adaptation of a Hyperspectral Atmospheric Correction Algorithm for Multi-spectral Ocean Color Data in Coastal Waters. Chapter 3

    NASA Technical Reports Server (NTRS)

    Gao, Bo-Cai; Montes, Marcos J.; Davis, Curtiss O.

    2003-01-01

    This SIMBIOS contract supports several activities over its three-year time-span. These include certain computational aspects of atmospheric correction, including the modification of our hyperspectral atmospheric correction algorithm Tafkaa for various multi-spectral instruments, such as SeaWiFS, MODIS, and GLI. Additionally, since absorbing aerosols are becoming common in many coastal areas, we are making the model calculations to incorporate various absorbing aerosol models into tables used by our Tafkaa atmospheric correction algorithm. Finally, we have developed the algorithms to use MODIS data to characterize thin cirrus effects on aerosol retrieval.

  9. Quantitative (177)Lu SPECT imaging using advanced correction algorithms in non-reference geometry.

    PubMed

    D'Arienzo, M; Cozzella, M L; Fazio, A; De Felice, P; Iaccarino, G; D'Andrea, M; Ungania, S; Cazzato, M; Schmidt, K; Kimiaei, S; Strigari, L

    2016-12-01

    Peptide receptor therapy with (177)Lu-labelled somatostatin analogues is a promising tool in the management of patients with inoperable or metastasized neuroendocrine tumours. The aim of this work was to perform accurate activity quantification of (177)Lu in complex anthropomorphic geometry using advanced correction algorithms. Acquisitions were performed on the higher (177)Lu photopeak (208 keV) using a Philips IRIX gamma camera provided with medium-energy collimators. System calibration was performed using a 16 mL Jaszczak sphere surrounded by non-radioactive water. Attenuation correction was performed using μ-maps derived from CT data, while scatter and septal penetration corrections were performed using the transmission-dependent convolution-subtraction method. SPECT acquisitions were finally corrected for dead time and partial volume effects. Image analysis was performed using the commercial QSPECT software. The quantitative SPECT approach was validated on an anthropomorphic phantom provided with a home-made insert simulating a hepatic lesion. Quantitative accuracy was studied using three tumour-to-background activity concentration ratios (6:1, 9:1, 14:1). For all acquisitions, the recovered total activity was within 12% of the calibrated activity both in the background region and in the tumour. Using a 6:1 tumour-to-background ratio the recovered total activity was within 2% in the tumour and within 5% in the background. Partial volume effects, if not properly accounted for, can lead to significant activity underestimations in clinical conditions. In conclusion, accurate activity quantification of (177)Lu can be obtained if activity measurements are performed with equipment traceable to primary standards, advanced correction algorithms are used and acquisitions are performed at the 208 keV photopeak using medium-energy collimators.

  10. Beam-centric algorithm for pretreatment patient position correction in external beam radiation therapy

    SciTech Connect

    Bose, Supratik; Shukla, Himanshu; Maltz, Jonathan

    2010-05-15

    Purpose: In current image guided pretreatment patient position adjustment methods, image registration is used to determine alignment parameters. Since most positioning hardware lacks the full six degrees of freedom (DOF), accuracy is compromised. The authors show that such compromises are often unnecessary when one models the planned treatment beams as part of the adjustment calculation process. The authors present a flexible algorithm for determining optimal realizable adjustments for both step-and-shoot and arc delivery methods. Methods: The beam shape model is based on the polygonal intersection of each beam segment with the plane in pretreatment image volume that passes through machine isocenter perpendicular to the central axis of the beam. Under a virtual six-DOF correction, ideal positions of these polygon vertices are computed. The proposed method determines the couch, gantry, and collimator adjustments that minimize the total mismatch of all vertices over all segments with respect to their ideal positions. Using this geometric error metric as a function of the number of available DOF, the user may select the most desirable correction regime. Results: For a simulated treatment plan consisting of three equally weighted coplanar fixed beams, the authors achieve a 7% residual geometric error (with respect to the ideal correction, considered 0% error) by applying gantry rotation as well as translation and isocentric rotation of the couch. For a clinical head-and-neck intensity modulated radiotherapy plan with seven beams and five segments per beam, the corresponding error is 6%. Correction involving only couch translation (typical clinical practice) leads to a much larger 18% mismatch. Clinically significant consequences of more accurate adjustment are apparent in the dose volume histograms of target and critical structures. Conclusions: The algorithm achieves improvements in delivery accuracy using standard delivery hardware without significantly increasing

  11. ECHO: a reference-free short-read error correction algorithm.

    PubMed

    Kao, Wei-Chun; Chan, Andrew H; Song, Yun S

    2011-07-01

    Developing accurate, scalable algorithms to improve data quality is an important computational challenge associated with recent advances in high-throughput sequencing technology. In this study, a novel error-correction algorithm, called ECHO, is introduced for correcting base-call errors in short-reads, without the need of a reference genome. Unlike most previous methods, ECHO does not require the user to specify parameters of which optimal values are typically unknown a priori. ECHO automatically sets the parameters in the assumed model and estimates error characteristics specific to each sequencing run, while maintaining a running time that is within the range of practical use. ECHO is based on a probabilistic model and is able to assign a quality score to each corrected base. Furthermore, it explicitly models heterozygosity in diploid genomes and provides a reference-free method for detecting bases that originated from heterozygous sites. On both real and simulated data, ECHO is able to improve the accuracy of previous error-correction methods by several folds to an order of magnitude, depending on the sequence coverage depth and the position in the read. The improvement is most pronounced toward the end of the read, where previous methods become noticeably less effective. Using a whole-genome yeast data set, it is demonstrated here that ECHO is capable of coping with nonuniform coverage. Also, it is shown that using ECHO to perform error correction as a preprocessing step considerably facilitates de novo assembly, particularly in the case of low-to-moderate sequence coverage depth.

  12. Artificial-neural-network-based atmospheric correction algorithm: application to MERIS data

    NASA Astrophysics Data System (ADS)

    Schroeder, Thomas; Fischer, Juergen; Schaale, Michael; Fell, Frank

    2003-05-01

    After the successful launch of the Medium Resolution Imaging Spectrometer (MERIS) on board the European Space Agency (ESA) Environmental Satellite (ENVISAT) on 1 March 2002, the first MERIS data are available for validation purposes. The primary goal of the MERIS mission is to measure the color of the sea with respect to oceanic biology and marine water quality. We present an atmospheric correction algorithm for case-I waters based on the inverse modeling of radiative transfer calculations by artificial neural networks. The proposed correction scheme accounts for multiple scattering and high concentrations of absorbing aerosols (e.g. desert dust). Above case-I waters, the measured near-infrared path radiance at the Top-Of-Atmosphere (TOA) is assumed to originate from atmospheric processes only and is used to determine the aerosol properties with the help of an additional classification test in the visible spectral region. A synthetic data set is generated from radiative transfer simulations and is subsequently used to train different Multi-Layer Perceptrons (MLP). The atmospheric correction scheme consists of two steps. First, a set of MLPs is used to derive the aerosol optical thickness (AOT) and the aerosol type for each pixel. Second, these quantities are fed into a further MLP, trained with simulated data for various chlorophyll concentrations, to perform the radiative transfer inversion and to obtain the water-leaving radiance. In this work we apply the inversion algorithm to a MERIS Level 1b data track covering the Indian Ocean along the west coast of Madagascar.

  13. Line end shortening and application of novel correction algorithms in e-beam direct write

    NASA Astrophysics Data System (ADS)

    Freitag, Martin; Choi, Kang-Hoon; Gutsch, Manuela; Hohle, Christoph

    2011-03-01

    For the manufacturing of semiconductor technologies following the ITRS roadmap, we will face the nodes well below 32nm half pitch in the next 2~3 years. Despite being able to achieve the required resolution, which is now possible with electron beam direct write variable shaped beam (EBDW VSB) equipment and resists, it becomes critical to precisely reproduce dense line space patterns onto a wafer. This exposed pattern must meet the targets from the layout in both dimensions (horizontally and vertically). For instance, the end of a line must be printed in its entire length to allow a later placed contact to be able to land on it. Up to now, the control of printed patterns such as line ends is achieved by a proximity effect correction (PEC) which is mostly based on a dose modulation. This investigation of the line end shortening (LES) includes multiple novel approaches, also containing an additional geometrical correction, to push the limits of the available data preparation algorithms and the measurement. The designed LES test patterns, which aim to characterize the status of LES in a quick and easy way, were exposed and measured at Fraunhofer Center Nanoelectronic Technologies (CNT) using its state of the art electron beam direct writer and CD-SEM. Simulation and exposure results with the novel LES correction algorithms applied to the test pattern and a large production like pattern in the range of our target CDs in dense line space features smaller than 40nm will be shown.

  14. Pile-up correction by Genetic Algorithm and Artificial Neural Network

    NASA Astrophysics Data System (ADS)

    Kafaee, M.; Saramad, S.

    2009-08-01

    Pile-up distortion is a common problem in high-counting-rate radiation spectroscopy in many fields such as industrial, nuclear and medical applications. It is possible to reduce pulse pile-up using hardware-based pile-up rejection. However, this phenomenon may not be eliminated completely by this approach, and the spectrum distortion caused by pile-up rejection can be increased as well. In addition, inaccurate correction or rejection of pile-up artifacts in applications such as energy dispersive X-ray (EDX) spectrometers can lead to losses of counts, poor quantitative results and even false element identification. Therefore, it is highly desirable to use software-based models to predict and correct any recognized pile-up signals in data acquisition systems. The present paper describes two new intelligent approaches for pile-up correction: the Genetic Algorithm (GA) and Artificial Neural Networks (ANNs). The validation and testing results of these new methods have been compared and show excellent agreement with data measured with a 60Co source and a NaI detector. The Monte Carlo simulation of these new intelligent algorithms also shows their advantages over hardware-based pulse pile-up rejection methods.

  15. Aberration correction in an adaptive free-space optical interconnect with an error diffusion algorithm

    NASA Astrophysics Data System (ADS)

    Gil-Leyva, Diego; Robertson, Brian; Wilkinson, Timothy D.; Henderson, Charley J.

    2006-06-01

    Aberration correction within a free-space optical interconnect based on a spatial light modulator for beam steering and holographic wavefront correction is presented. The wavefront sensing technique is based on an extension of a modal wavefront sensor described by Neil et al. [J. Opt. Soc. Am. A 17, 1098 (2000)], which uses a diffractive element. In this analysis such a wavefront sensor is adapted with an error diffusion algorithm that yields a low reconstruction error and fast reconfigurability. Improvement of the beam propagation quality (Strehl ratio) for different channels across the input plane is achieved. However, due to the space invariance of the system, a trade-off in beam propagation quality among channels is obtained. Experimental results are presented and discussed.

  16. EM-IntraSPECT algorithm with ordered subsets (OSEMIS) for nonuniform attenuation correction in cardiac imaging

    NASA Astrophysics Data System (ADS)

    Krol, Andrzej; Echeruo, Ifeanyi; Solgado, Roberto B.; Hardikar, Amol S.; Bowsher, James E.; Feiglin, David H.; Thomas, Frank D.; Lipson, Edward; Coman, Ioana L.

    2002-05-01

    Performance of the EM-IntraSPECT (EMIS) algorithm with ordered subsets (OSEMIS) for non-uniform attenuation correction in the chest was assessed. EMIS is a maximum-likelihood expectation maximization (MLEM) algorithm for simultaneously estimating SPECT emission and attenuation parameters from emission data alone. EMIS uses the activity within the patient as transmission tomography sources, with which attenuation coefficients can be estimated. However, the reconstruction time is long. The new algorithm, OSEMIS, is a modified EMIS algorithm based on ordered subsets. Emission Tc-99m SPECT data were acquired over 360 degrees in a non-circular orbit from a physical chest phantom using a clinical protocol. Both a normal and a defect heart were considered. OSEMIS was evaluated in comparison to EMIS and a conventional MLEM with a fixed uniform attenuation map. A wide range of image measures was evaluated, including noise, log-likelihood, and region quantification. Uniformity was assessed from bull's-eye plots of the reconstructed images. For the appropriate subset size, OSEMIS yielded essentially the same images as EMIS and better images than MLEM, but required only one-tenth as many iterations. Consequently, adequate images were available in about fifteen iterations.

  17. Algorithm for Atmospheric and Glint Corrections of Satellite Measurements of Ocean Pigment

    NASA Technical Reports Server (NTRS)

    Fraser, Robert S.; Mattoo, Shana; Yeh, Eueng-Nan; McClain, C. R.

    1997-01-01

    An algorithm is developed to correct satellite measurements of ocean color for atmospheric and surface reflection effects. The algorithm depends on taking the difference between measured and tabulated radiances to derive water-leaving radiances. The tabulated radiances are related to the measured radiance at a wavelength where the water-leaving radiance is negligible (670 nm). The tabulated radiances are calculated for rough-surface reflection, polarization of the scattered light, and multiple scattering. The accuracy of the tables is discussed. The method is validated by simulating the effect of wind speeds different from that for which the lookup table is calculated, and of aerosol models different from the maritime model for which the table is computed. The derived water-leaving radiances are accurate enough to compute the pigment concentration with an error of less than 15% for wind speeds of 6 and 10 m/s and an urban atmosphere with aerosol optical thickness of 0.20 at λ = 443 nm decreasing to 0.10 at λ = 670 nm. The pigment accuracy is lower for wind speeds less than 6 m/s and is about 30% for a model with aeolian dust. On the other hand, in a preliminary comparison with coastal zone color scanner (CZCS) measurements, this algorithm and the CZCS operational algorithm produced values of pigment concentration in one image that agreed closely.

  18. Optimized Seizure Detection Algorithm: A Fast Approach for Onset of Epileptic in EEG Signals Using GT Discriminant Analysis and K-NN Classifier

    PubMed Central

    Rezaee, Kh.; Azizi, E.; Haddadnia, J.

    2016-01-01

    Background Epilepsy is a severe disorder of the central nervous system that predisposes the person to recurrent seizures. Fifty million people worldwide suffer from epilepsy; after Alzheimer's disease and stroke, it is the third most widespread nervous disorder. Objective In this paper, an algorithm to detect the onset of epileptic seizures based on the analysis of brain electrical signals (EEG) is proposed. 844 hours of EEG were recorded consecutively from 23 pediatric patients, with 163 seizure occurrences. Signals were collected at Children's Hospital Boston with a sampling frequency of 256 Hz through 18 channels in order to assess epilepsy surgery. By selecting effective features from seizure and non-seizure signals of each individual and putting them into two categories, the proposed algorithm detects the onset of seizures quickly and with high sensitivity. Method In this algorithm, L-second epochs of the signal are represented as a third-order tensor in spatial, spectral and temporal spaces by applying the wavelet transform. Then, after applying general tensor discriminant analysis (GTDA) to the tensors and calculating the mapping matrix, feature vectors are extracted. GTDA increases the sensitivity of the algorithm by retaining data rather than deleting them. Finally, K-nearest neighbors (KNN) is used to classify the selected features. Results Simulation of the algorithm on the standard dataset shows that it is capable of detecting 98 percent of seizures with an average delay of 4.7 seconds and an average false-detection rate of three errors per 24 hours. Conclusion Today, the lack of an automated system to detect or predict seizure onset is strongly felt. PMID:27672628
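
    The feature-classification step described above can be illustrated with a brief sketch. The snippet below is a minimal, hypothetical example of the final K-nearest-neighbors stage using scikit-learn; the feature matrix, labels, and choice of k are synthetic placeholders, and the wavelet/GTDA feature-extraction stage of the paper is not reproduced.

```python
# Minimal sketch of a KNN classification stage for seizure vs. non-seizure epochs.
# Assumes features have already been extracted (e.g., via wavelet/GTDA);
# X, y, and n_neighbors below are illustrative placeholders, not the paper's data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                        # 500 epochs x 20 features (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)       # 1 = seizure epoch (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("sensitivity:", recall_score(y_te, clf.predict(X_te)))
```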

  19. Parallel algorithms of relative radiometric correction for images of TH-1 satellite

    NASA Astrophysics Data System (ADS)

    Wang, Xiang; Zhang, Tingtao; Cheng, Jiasheng; Yang, Tao

    2014-05-01

    The TH-1 satellite, the first generation of Chinese transmission-type stereo mapping satellites, acquires three-line-array stereo images with a resolution of 5 meters, multispectral images of 10 meters, and panchromatic high-resolution images of 2 meters. The processing step between level 0 and level 1A of the high-resolution images is known as relative radiometric correction (RRC). Processing these high-resolution images, with their large data volumes, is complicated and time consuming, so parallel processing techniques based on CPUs or GPUs are commonly applied in industry to increase processing speed. This article first introduces the whole process and each step of the RRC algorithm applied to level-0 high-resolution images. Second, the theory and characteristics of the MPI (Message Passing Interface) and OpenMP (Open Multi-Processing) parallel programming techniques are briefly described, along with the advantages of parallel techniques in image processing. Third, for each step of the applied algorithm and based on an MPI+OpenMP hybrid paradigm, the parallelizability and parallelization strategies of three processing steps are discussed in depth: Radiometric Correction, Splicing Pieces of TDICCD (Time Delay Integration Charge-Coupled Device), and Gray Level Adjustment among pieces of TDICCD. The theoretical acceleration rates of each step and of the whole procedure are then deduced according to the processing styles and the independence of the calculations; for the Splicing Pieces of TDICCD step, two different parallelization strategies are proposed, to be chosen with consideration of hardware capabilities. Finally, a series of experiments is carried out to verify the parallel algorithms using 2-meter panchromatic high-resolution images of the TH-1 satellite, and the experimental results are analyzed. Strictly on the basis of the former parallel algorithms, the programs in the experiments
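
    As a rough illustration of the data-parallel strategy discussed above (and not the authors' MPI+OpenMP implementation), the hypothetical Python sketch below distributes per-piece radiometric correction of TDICCD segments across worker processes; the piece data, gains, and offsets are synthetic placeholders.

```python
# Illustrative sketch of per-piece data parallelism for relative radiometric
# correction: each TDICCD piece is corrected independently (DN' = gain * DN + offset),
# so pieces can be mapped onto a pool of workers. Gains and offsets are synthetic.
import numpy as np
from multiprocessing import Pool

def correct_piece(args):
    piece, gain, offset = args
    return gain * piece + offset        # per-detector relative radiometric correction

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pieces = [rng.integers(0, 4096, size=(1024, 512)).astype(np.float32) for _ in range(8)]
    gains = [rng.uniform(0.95, 1.05, size=512).astype(np.float32) for _ in range(8)]
    offsets = [rng.uniform(-5.0, 5.0, size=512).astype(np.float32) for _ in range(8)]

    with Pool(processes=4) as pool:
        corrected = pool.map(correct_piece, list(zip(pieces, gains, offsets)))
    print(len(corrected), corrected[0].shape)
```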

  20. Scattering correction algorithm for neutron radiography and tomography tested at facilities with different beam characteristics

    NASA Astrophysics Data System (ADS)

    Hassanein, René; de Beer, Frikkie; Kardjilov, Nikolay; Lehmann, Eberhard

    2006-11-01

    A precise quantitative analysis with the neutron radiography technique of materials with a high neutron-scattering cross section, imaged at small distances from the detector, is impossible if the scattering contribution from the investigated material onto the detector is not eliminated in the right way. Samples with a high neutron-scattering cross section, e.g. hydrogenous materials such as water, cause a significant scattering component in their radiographs. Background scattering, spectral effects and detector characteristics are identified as additional causes of disturbances. A scattering correction algorithm based on Monte Carlo simulations has been developed and implemented to take these effects into account. The corrected radiographs can be used for a subsequent tomographic reconstruction. From the results one can obtain quantitative information in order to detect, e.g., inhomogeneity patterns within materials, or to measure differences in the mass thickness of these materials. Within an IAEA-CRP collaboration the algorithms have been tested for applicability on results obtained at the South African SANRAD facility at Necsa, the Swiss NEUTRA facility at PSI, and the German CONRAD facility at HMI, all with different initial neutron spectra. Results of a set of dedicated neutron radiography experiments are reported.

  1. A background correction algorithm for Van Allen Probes MagEIS electron flux measurements

    SciTech Connect

    Claudepierre, S. G.; O'Brien, T. P.; Blake, J. B.; Fennell, J. F.; Roeder, J. L.; Clemmons, J. H.; Looper, M. D.; Mazur, J. E.; Mulligan, T. M.; Spence, H. E.; Reeves, G. D.; Friedel, R. H. W.; Henderson, M. G.; Larsen, B. A.

    2015-07-14

    We describe an automated computer algorithm designed to remove background contamination from the Van Allen Probes Magnetic Electron Ion Spectrometer (MagEIS) electron flux measurements. We provide a detailed description of the algorithm with illustrative examples from on-orbit data. We find two primary sources of background contamination in the MagEIS electron data: inner zone protons and bremsstrahlung X-rays generated by energetic electrons interacting with the spacecraft material. Bremsstrahlung X-rays primarily produce contamination in the lower energy MagEIS electron channels (~30–500 keV) and in regions of geospace where multi-MeV electrons are present. Inner zone protons produce contamination in all MagEIS energy channels at roughly L < 2.5. The background-corrected MagEIS electron data produce a more accurate measurement of the electron radiation belts, as most earlier measurements suffer from unquantifiable and uncorrectable contamination in this harsh region of the near-Earth space environment. These background-corrected data will also be useful for spacecraft engineering purposes, providing ground truth for the near-Earth electron environment and informing the next generation of spacecraft design models (e.g., AE9).

  2. A background correction algorithm for Van Allen Probes MagEIS electron flux measurements

    DOE PAGES

    Claudepierre, S. G.; O'Brien, T. P.; Blake, J. B.; ...

    2015-07-14

    We describe an automated computer algorithm designed to remove background contamination from the Van Allen Probes Magnetic Electron Ion Spectrometer (MagEIS) electron flux measurements. We provide a detailed description of the algorithm with illustrative examples from on-orbit data. We find two primary sources of background contamination in the MagEIS electron data: inner zone protons and bremsstrahlung X-rays generated by energetic electrons interacting with the spacecraft material. Bremsstrahlung X-rays primarily produce contamination in the lower energy MagEIS electron channels (~30–500 keV) and in regions of geospace where multi-MeV electrons are present. Inner zone protons produce contamination in all MagEIS energy channels at roughly L < 2.5. The background-corrected MagEIS electron data produce a more accurate measurement of the electron radiation belts, as most earlier measurements suffer from unquantifiable and uncorrectable contamination in this harsh region of the near-Earth space environment. These background-corrected data will also be useful for spacecraft engineering purposes, providing ground truth for the near-Earth electron environment and informing the next generation of spacecraft design models (e.g., AE9).

  3. Direct cone-beam cardiac reconstruction algorithm with cardiac banding artifact correction

    SciTech Connect

    Taguchi, Katsuyuki; Chiang, Beshan S.; Hein, Ilmar A.

    2006-02-15

    Multislice helical computed tomography (CT) is a promising noninvasive technique for coronary artery imaging. Various factors can cause inconsistencies in cardiac CT data, which can result in degraded image quality. These inconsistencies may be the result of the patient physiology (e.g., heart rate variations), the nature of the data (e.g., cone-angle), or the reconstruction algorithm itself. An algorithm which provides the best temporal resolution for each slice, for example, often provides suboptimal image quality for the entire volume since the cardiac temporal resolution (TRc) changes from slice to slice. Such variations in TRc can generate strong banding artifacts in multi-planar reconstruction images or three-dimensional images. Discontinuous heart walls and coronary arteries may compromise the accuracy of the diagnosis. A β-blocker is often used to reduce and stabilize patients' heart rate but cannot eliminate the variation. In order to obtain robust and optimal image quality, a software solution that increases the temporal resolution and decreases the effect of heart rate is highly desirable. This paper proposes an ECG-correlated direct cone-beam reconstruction algorithm (TCOT-EGR) with cardiac banding artifact correction (CBC) and a disconnected projections redundancy compensation technique (DIRECT). First the theory and analytical model of the cardiac temporal resolution is outlined. Next, the performance of the proposed algorithms is evaluated by using computer simulations as well as patient data. It will be shown that the proposed algorithms enhance the robustness of the image quality against inconsistencies by guaranteeing smooth transition of heart cycles used in reconstruction.

  4. Prediction of Endocrine System Affectation in Fisher 344 Rats by Food Intake Exposed with Malathion, Applying Naïve Bayes Classifier and Genetic Algorithms

    PubMed Central

    Mora, Juan David Sandino; Hurtado, Darío Amaya; Sandoval, Olga Lucía Ramos

    2016-01-01

    Background: Reported cases of uncontrolled pesticide use and the effects produced by direct or indirect exposure represent a high risk to human health. This paper therefore presents the results of developing and running an algorithm that predicts the possible effects on the endocrine system of Fisher 344 (F344) rats caused by ingestion of malathion. Methods: The ToxRefDB database, in which different case studies of F344 rats exposed to malathion are collected, was consulted. The experimental data were processed using a Naïve Bayes (NB) machine-learning classifier, which was subsequently optimized using genetic algorithms (GAs). The model was run in an application with a graphical user interface programmed in C#. Results: There was a tendency toward larger alterations, with increasing levels in the parathyroid gland at dosages between 4 and 5 mg/kg/day, in contrast to the thyroid gland at doses between 739 and 868 mg/kg/day. Females showed greater resistance to effects on the endocrine system from ingestion of malathion, but were more susceptible to alterations in the pituitary gland at exposure times between 3 and 6 months. Conclusions: The prediction model based on NB classifiers allowed analysis of all possible combinations of the studied variables, and its accuracy was improved using GAs. Except for the pituitary gland, females demonstrated better resistance to effects at increasing levels in the rest of the endocrine system glands. PMID:27833725
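
    The modeling step can be sketched briefly. The snippet below is a minimal, hypothetical Gaussian Naïve Bayes example using scikit-learn; the dose, exposure-time, and sex features, the labels, and the train/test split are synthetic placeholders, and the genetic-algorithm optimization of the classifier is not reproduced.

```python
# Minimal Gaussian Naive Bayes sketch of the kind of prediction model described
# above; all data below are synthetic stand-ins for the ToxRefDB case studies.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
# Columns: dose (mg/kg/day), exposure time (months), sex (0 = male, 1 = female)
X = np.column_stack([rng.uniform(0, 900, 300),
                     rng.uniform(1, 24, 300),
                     rng.integers(0, 2, 300)])
y = (X[:, 0] > 400).astype(int)          # synthetic "endocrine effect observed" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GaussianNB().fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```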

  5. Algorithms for calculating mass-velocity and Darwin relativistic corrections with n-electron explicitly correlated Gaussians with shifted centers

    NASA Astrophysics Data System (ADS)

    Stanke, Monika; Palikot, Ewa; Adamowicz, Ludwik

    2016-05-01

    Algorithms for calculating the leading mass-velocity (MV) and Darwin (D) relativistic corrections are derived for electronic wave functions expanded in terms of n-electron explicitly correlated Gaussian functions with shifted centers and without pre-exponential angular factors. The algorithms are implemented and tested in calculations of MV and D corrections for several points on the ground-state potential energy curves of the H2 and LiH molecules. The algorithms are general and can be applied in calculations of systems with an arbitrary number of electrons.

  6. Algorithms for calculating mass-velocity and Darwin relativistic corrections with n-electron explicitly correlated Gaussians with shifted centers.

    PubMed

    Stanke, Monika; Palikot, Ewa; Adamowicz, Ludwik

    2016-05-07

    Algorithms for calculating the leading mass-velocity (MV) and Darwin (D) relativistic corrections are derived for electronic wave functions expanded in terms of n-electron explicitly correlated Gaussian functions with shifted centers and without pre-exponential angular factors. The algorithms are implemented and tested in calculations of MV and D corrections for several points on the ground-state potential energy curves of the H2 and LiH molecules. The algorithms are general and can be applied in calculations of systems with an arbitrary number of electrons.

  7. Smooth particle hydrodynamics: importance of correction terms in adaptive resolution algorithms

    NASA Astrophysics Data System (ADS)

    Alimi, J.-M.; Serna, A.; Pastor, C.; Bernabeu, G.

    2003-11-01

    We describe TREEASPH, a new code to evolve self-gravitating fluids, both with and without a collisionless component. In TREEASPH, gravitational forces are computed with a hierarchical tree algorithm (TREEcode), while hydrodynamic properties are computed using an SPH method that includes the ∇h correction terms that appear when the spatial resolution h(t,r) is not constant. Another important feature, which considerably increases the code's efficiency on sequential and vector computers, is that time-stepping is performed with a PEC (Predict-Evaluate-Correct) scheme modified to allow for individual timesteps. Some authors have previously noted that the ∇h correction terms are needed to avoid introducing a non-physical entropy into simulations. Using TREEASPH we show here that, in cosmological simulations, this non-physical entropy has a negative sign. As a consequence, when the ∇h terms are neglected, the density peaks associated with shock fronts are overestimated. This in turn results in an overestimated efficiency of star-formation processes.

  8. Correcting the optimal resampling-based error rate by estimating the error rate of wrapper algorithms.

    PubMed

    Bernau, Christoph; Augustin, Thomas; Boulesteix, Anne-Laure

    2013-09-01

    High-dimensional binary classification tasks, for example, the classification of microarray samples into normal and cancer tissues, usually involve a tuning parameter. By reporting the performance of the best tuning parameter value only, over-optimistic prediction errors are obtained. For correcting this tuning bias, we develop a new method which is based on a decomposition of the unconditional error rate involving the tuning procedure, that is, we estimate the error rate of wrapper algorithms as introduced in the context of internal cross-validation (ICV) by Varma and Simon (2006, BMC Bioinformatics 7, 91). Our subsampling-based estimator can be written as a weighted mean of the errors obtained using the different tuning parameter values, and thus can be interpreted as a smooth version of ICV, which is the standard approach for avoiding tuning bias. In contrast to ICV, our method guarantees intuitive bounds for the corrected error. Additionally, we also suggest using bias correction methods to address the conceptually similar method selection bias that results from the optimal choice of the classification method itself when evaluating several methods successively. We demonstrate the performance of our method on microarray and simulated data and compare it to ICV. This study suggests that our approach yields competitive estimates at a much lower computational cost.
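
    The decomposition behind the corrected estimate can be illustrated with a toy sketch: the mean subsampling error of each tuning-parameter value is weighted by how often that value is selected as optimal across subsampling iterations, which is one way to read the weighted-mean formulation above. The error matrix below is synthetic, and the sketch illustrates the idea rather than the authors' exact estimator.

```python
# Toy illustration of a weighted-mean corrected error versus the optimistic
# "best tuning value only" error; the error matrix is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_iter, n_tuning = 50, 6
errors = rng.uniform(0.1, 0.4, size=(n_iter, n_tuning))   # errors[b, k]: error of value k on iteration b

best_per_iter = errors.argmin(axis=1)                      # value chosen on each iteration
weights = np.bincount(best_per_iter, minlength=n_tuning) / n_iter
mean_error_per_value = errors.mean(axis=0)

optimistic = errors.min(axis=1).mean()                     # report the best value only (biased)
corrected = np.dot(weights, mean_error_per_value)          # weighted-mean corrected estimate
print(f"optimistic: {optimistic:.3f}  corrected: {corrected:.3f}")
```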

  9. Algorithms based on CWT and classifiers to control cardiac alterations and stress using an ECG and a SCR.

    PubMed

    Villarejo, María Viqueira; Zapirain, Begoña García; Zorrilla, Amaia Méndez

    2013-05-10

    This paper presents the results of using a commercial pulsimeter as an electrocardiogram (ECG) for wireless detection of cardiac alterations and stress levels for home control. For these purposes, signal processing techniques (the Continuous Wavelet Transform (CWT) and the J48 classifier) have been used, respectively. The designed algorithm analyses the ECG signal and is able to detect the heart rate (99.42%), arrhythmia (93.48%) and extrasystoles (99.29%). The detection of stress level is complemented with the Skin Conductance Response (SCR), with a success rate of 94.02%. Heart rate variability did not add value to stress detection in this case. With this pulsimeter, it is possible to prevent and detect anomalies in a non-intrusive way as part of a telemedicine system. It can also be used during physical activity, because the CWT minimizes motion artifacts.

  10. Algorithms Based on CWT and Classifiers to Control Cardiac Alterations and Stress Using an ECG and a SCR

    PubMed Central

    Villarejo, María Viqueira; Zapirain, Begoña García; Zorrilla, Amaia Méndez

    2013-01-01

    This paper presents the results of using a commercial pulsimeter as an electrocardiogram (ECG) for wireless detection of cardiac alterations and stress levels for home control. For these purposes, signal processing techniques (the Continuous Wavelet Transform (CWT) and the J48 classifier) have been used, respectively. The designed algorithm analyses the ECG signal and is able to detect the heart rate (99.42%), arrhythmia (93.48%) and extrasystoles (99.29%). The detection of stress level is complemented with the Skin Conductance Response (SCR), with a success rate of 94.02%. Heart rate variability did not add value to stress detection in this case. With this pulsimeter, it is possible to prevent and detect anomalies in a non-intrusive way as part of a telemedicine system. It can also be used during physical activity, because the CWT minimizes motion artifacts. PMID:23666135

  11. Validation of The Standard Aerosol Models Used In The Atmospheric Correction Algorithms For Satellite Ocean Observation

    NASA Astrophysics Data System (ADS)

    Martiny, N.; Santer, R.

    Over the ocean, the total radiance measured by satellite sensors at the top of the atmosphere is mainly atmospheric. In order to access the water-leaving radiance, which is directly related to the concentrations of the different components of the water, we need to correct the satellite measurements for the important atmospheric contribution. In the atmosphere, the light emitted by the sun is scattered by molecules, absorbed by gases, and both scattered and absorbed in unknown proportions by aerosols, particles confined to the first layer of the atmosphere owing to their large size. Remote sensing of the aerosols therefore represents a complex step in the atmospheric correction scheme. Over the ocean, the principle of aerosol remote sensing relies on the assumption that the water is absorbent in the red and the near-infrared. The aerosol model is then deduced from these spectral bands and used to extrapolate the aerosol optical properties to the visible wavelengths. For ocean color sensors such as CZCS, OCTS, POLDER, SeaWiFS or MODIS, the atmospheric correction algorithms use standard aerosol models defined by Shettle & Fenn for their look-up tables. Over coastal waters, are these models still suitable? The goal of this work is to validate the standard aerosol models used in the atmospheric correction algorithms over coastal zones. For this work, we use ground-based in-situ measurements from the CIMEL sunphotometer instrument. Using the extinction measurements, we can deduce the aerosol spectral dependency, which falls between the spectral dependencies of two standard Shettle & Fenn aerosol models. After interpolation of the aerosol model, we can use it to extrapolate to the visible the optical parameters needed for the atmospheric correction scheme: Latm, the atmospheric radiance, and T, the atmospheric transmittance. The simulations are done using a radiative transfer code based on the successive orders of scattering. Latm and T are then used for

  12. A procedure for testing the quality of LANDSAT atmospheric correction algorithms

    NASA Technical Reports Server (NTRS)

    Dias, L. A. V. (Principal Investigator); Vijaykumar, N. L.; Neto, G. C.

    1982-01-01

    There are two basic methods for testing the quality of an algorithm that minimizes atmospheric effects on LANDSAT imagery: (1) test the results a posteriori, using ground truth or control points; (2) use a method based on image data plus estimation of additional ground and/or atmospheric parameters. A procedure based on the second method is described. In order to select the parameters, the image contrast is first examined for a series of parameter combinations; the contrast improves for better corrections. In addition, the correlation coefficient between two subimages of the same scene, taken at different times, is used for parameter selection. The regions to be correlated should not have changed considerably over time. A few examples using the proposed procedure are presented.
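
    Both quality indicators mentioned above are simple to compute. The sketch below uses one plausible formulation (contrast as the standard deviation relative to the mean, and the Pearson correlation between two co-registered sub-images of a temporally stable region); the images are synthetic placeholders.

```python
# Sketch of the two image-based measures used for parameter selection:
# contrast of a corrected image and the correlation between two dates.
import numpy as np

rng = np.random.default_rng(3)
img_t1 = rng.uniform(20, 200, size=(256, 256))
img_t2 = img_t1 + rng.normal(0, 5, size=(256, 256))   # same (unchanged) region, later date

def contrast(img):
    return img.std() / img.mean()                     # one common contrast definition

def correlation(a, b):
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

print("contrast:", round(contrast(img_t1), 3))
print("inter-date correlation:", round(correlation(img_t1, img_t2), 3))
```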

  13. A Local Corrections Algorithm for Solving Poisson's Equation in Three Dimensions

    SciTech Connect

    McCorquodale, Peter; Colella, Phillip; Balls, Gregory T.; Baden, Scott B.

    2006-10-30

    We present a second-order accurate algorithm for solving the free-space Poisson's equation on a locally refined nested grid hierarchy in three dimensions. Our approach is based on linear superposition of local convolutions of localized charge distributions, with the nonlocal coupling represented on coarser grids. The representation of the nonlocal coupling on the local solutions is based on Anderson's Method of Local Corrections and does not require iteration between different resolutions. A distributed-memory parallel implementation of this method is observed to have a computational cost per grid point less than three times that of a standard FFT-based method on a uniform grid of the same resolution, and scales well up to 1024 processors.

  14. Algorithms for computing the time-corrected instantaneous frequency (reassigned) spectrogram, with applications.

    PubMed

    Fulop, Sean A; Fitz, Kelly

    2006-01-01

    A modification of the spectrogram (log magnitude of the short-time Fourier transform) to more accurately show the instantaneous frequencies of signal components was first proposed in 1976 [Kodera et al., Phys. Earth Planet. Inter. 12, 142-150 (1976)], and has been considered or reinvented a few times since but never widely adopted. This paper presents a unified theoretical picture of this time-frequency analysis method, the time-corrected instantaneous frequency spectrogram, together with detailed implementable algorithms comparing three published techniques for its computation. The new representation is evaluated against the conventional spectrogram for its superior ability to track signal components. The lack of a uniform framework for either mathematics or implementation details which has characterized the disparate literature on the schemes has been remedied here. Fruitful application of the method is shown in the realms of speech phonation analysis, whale song pitch tracking, and additive sound modeling.

  15. Error correction algorithm for high accuracy bio-impedance measurement in wearable healthcare applications.

    PubMed

    Kubendran, Rajkumar; Lee, Seulki; Mitra, Srinjoy; Yazicioglu, Refet Firat

    2014-04-01

    Implantable and ambulatory measurement of physiological signals such as Bio-impedance using miniature biomedical devices needs careful tradeoff between limited power budget, measurement accuracy and complexity of implementation. This paper addresses this tradeoff through an extensive analysis of different stimulation and demodulation techniques for accurate Bio-impedance measurement. Three cases are considered for rigorous analysis of a generic impedance model, with multiple poles, which is stimulated using a square/sinusoidal current and demodulated using square/sinusoidal clock. For each case, the error in determining pole parameters (resistance and capacitance) is derived and compared. An error correction algorithm is proposed for square wave demodulation which reduces the peak estimation error from 9.3% to 1.3% for a simple tissue model. Simulation results in Matlab using ideal RC values show an average accuracy of for single pole and for two pole RC networks. Measurements using ideal components for a single pole model gives an overall and readings from saline phantom solution (primarily resistive) gives an . A Figure of Merit is derived based on ability to accurately resolve multiple poles in unknown impedance with minimal measurement points per decade, for given frequency range and supply current budget. This analysis is used to arrive at an optimal tradeoff between accuracy and power. Results indicate that the algorithm is generic and can be used for any application that involves resolving poles of an unknown impedance. It can be implemented as a post-processing technique for error correction or even incorporated into wearable signal monitoring ICs.

  16. A method of generalized projections (MGP) ghost correction algorithm for interleaved EPI.

    PubMed

    Lee, K J; Papadakis, N G; Barber, D C; Wilkinson, I D; Griffiths, P D; Paley, M N J

    2004-07-01

    Investigations into the method of generalized projections (MGP) as a ghost correction method for interleaved EPI are described. The technique is image-based and does not require additional reference scans. The algorithm was found to be more effective if a priori knowledge was incorporated to reduce the degrees of freedom, by modeling the ghosting as arising from a small number of phase offsets. In simulations with phase variation between consecutive shots for n-interleaved echo planar imaging (EPI), ghost reduction was achieved for n = 2 only. With no phase variation between shots, ghost reduction was obtained with n up to 16. Incorporating a relaxation parameter was found to improve convergence. Dependence of convergence on the region of support was also investigated. A fully automatic version of the method was developed, using results from the simulations. When tested on in vivo 2-, 16-, and 32-interleaved spin-echo EPI data, the method achieved deghosting and image restoration close to that obtained by both reference scan and odd/even filter correction, although some residual artifacts remained.

  17. Intensity Inhomogeneity Correction of Structural MR Images: A Data-Driven Approach to Define Input Algorithm Parameters

    PubMed Central

    Ganzetti, Marco; Wenderoth, Nicole; Mantini, Dante

    2016-01-01

    Intensity non-uniformity (INU) in magnetic resonance (MR) imaging is a major issue when conducting analyses of brain structural properties. An inaccurate INU correction may result in qualitative and quantitative misinterpretations. Several INU correction methods exist, whose performance largely depends on the specific parameter settings that need to be chosen by the user. Here we addressed the question of how to select the best input parameters for a specific INU correction algorithm. Our investigation was based on the INU correction algorithm implemented in SPM, but it can in principle be extended to any other algorithm requiring the selection of input parameters. We conducted a comprehensive comparison of indirect metrics for the assessment of INU correction performance, namely the coefficient of variation of white matter (CVWM), the coefficient of variation of gray matter (CVGM), and the coefficient of joint variation between white matter and gray matter (CJV). Using simulated MR data, we observed the CJV to be more accurate than CVWM and CVGM, provided that the noise level in the INU-corrected image was controlled by means of spatial smoothing. Based on the CJV, we developed a data-driven approach for selecting INU correction parameters, which could effectively work on actual MR images. To this end, we implemented an enhanced procedure for the definition of white and gray matter masks, based on which the CJV was calculated. Our approach was validated using actual T1-weighted images collected with 1.5 T, 3 T, and 7 T MR scanners. We found that our procedure can reliably assist the selection of valid INU correction algorithm parameters, thereby contributing to an enhanced inhomogeneity correction in MR images. PMID:27014050
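
    The indirect metrics compared above have simple, commonly used definitions, sketched below for an INU-corrected image and tissue masks: CV = σ/μ within a tissue class, and CJV = (σWM + σGM)/|μWM − μGM|. The image and masks in this snippet are synthetic placeholders, not SPM outputs.

```python
# Sketch of CVWM, CVGM, and CJV computed from an INU-corrected image and
# white/gray matter masks; all arrays below are synthetic.
import numpy as np

rng = np.random.default_rng(7)
shape = (64, 64, 64)
wm_mask = rng.random(shape) > 0.7                       # placeholder tissue masks
gm_mask = (~wm_mask) & (rng.random(shape) > 0.5)
image = rng.normal(80, 8, size=shape)                   # GM-like intensities (synthetic)
image[wm_mask] = rng.normal(120, 8, size=int(wm_mask.sum()))  # brighter WM (synthetic)

wm, gm = image[wm_mask], image[gm_mask]
cv_wm = wm.std() / wm.mean()
cv_gm = gm.std() / gm.mean()
cjv = (wm.std() + gm.std()) / abs(wm.mean() - gm.mean())
print(f"CVWM={cv_wm:.3f}  CVGM={cv_gm:.3f}  CJV={cjv:.3f}")
```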

  18. Accuracy of inhomogeneity correction algorithm in intensity-modulated radiotherapy of head-and-neck tumors

    SciTech Connect

    Yoon, Myonggeun; Lee, Doo-Hyun; Shin, Dongho; Lee, Se Byeong; Park, Sung Yong . E-mail: cool_park@ncc.re.kr; Cho, Kwan Ho

    2007-04-01

    We examined the degree of calculated-to-measured dose difference for a nasopharyngeal target volume in intensity-modulated radiotherapy (IMRT), based on the observed/expected ratio, using patient anatomy with a humanoid head-and-neck phantom. The plans were designed with a clinical treatment planning system that uses a measurement-based pencil-beam dose-calculation algorithm. Two kinds of IMRT plans, which give a direct indication of the error introduced in routine treatment planning, were categorized and evaluated. The experimental results show that when the beams pass through the oral cavity in the anthropomorphic head-and-neck phantom, the average dose difference becomes significant, revealing about a 10% difference relative to the prescribed dose at the isocenter. To investigate both the physical reasons for the dose discrepancy and the inhomogeneity effect, we performed 10 cases of IMRT quality assurance (QA) with plastic and humanoid phantoms. Our results suggest that transient electronic disequilibrium with the increased lateral electron range may cause the inaccuracy of the dose calculation algorithm, and that the effectiveness of the inhomogeneity corrections used in IMRT plans should be evaluated to ensure meaningful quality assurance and delivery.

  19. Corrections.

    PubMed

    2015-07-01

    Lai Y-S, Biedermann P, Ekpo UF, et al. Spatial distribution of schistosomiasis and treatment needs in sub-Saharan Africa: a systematic review and geostatistical analysis. Lancet Infect Dis 2015; published online May 22. http://dx.doi.org/10.1016/S1473-3099(15)00066-3—Figure 1 of this Article should have contained a box stating ‘100 references added’ with an arrow pointing inwards, rather than a box stating ‘199 records excluded’, and an asterisk should have been added after ‘1473 records extracted into GNTD’. Additionally, the positioning of the ‘§’ and ‘†’ footnotes has been corrected in table 1. These corrections have been made to the online version as of June 4, 2015.

  20. Correction.

    PubMed

    2016-02-01

    In the article by Guessous et al (Guessous I, Pruijm M, Ponte B, Ackermann D, Ehret G, Ansermot N, Vuistiner P, Staessen J, Gu Y, Paccaud F, Mohaupt M, Vogt B, Pechère-Bertschi A, Martin PY, Burnier M, Eap CB, Bochud M. Associations of ambulatory blood pressure with urinary caffeine and caffeine metabolite excretions. Hypertension. 2015;65:691–696. doi: 10.1161/HYPERTENSIONAHA.114.04512), which published online ahead of print December 8, 2014, and appeared in the March 2015 issue of the journal, a correction was needed. One of the author surnames was misspelled. Antoinette Pechère-Berstchi has been corrected to read Antoinette Pechère-Bertschi. The authors apologize for this error.

  1. Feature Selection and Effective Classifiers.

    ERIC Educational Resources Information Center

    Deogun, Jitender S.; Choubey, Suresh K.; Raghavan, Vijay V.; Sever, Hayri

    1998-01-01

    Develops and analyzes four algorithms for feature selection in the context of rough set methodology. Experimental results confirm the expected relationship between the time complexity of these algorithms and the classification accuracy of the resulting upper classifiers. When compared, results of upper classifiers perform better than lower…

  2. Algorithms for calculating the leading quantum electrodynamics P(1/r³) correction with all-electron molecular explicitly correlated Gaussians

    NASA Astrophysics Data System (ADS)

    Stanke, Monika; Jurkowski, Jacek; Adamowicz, Ludwik

    2017-03-01

    Algorithms for calculating the quantum electrodynamics Araki–Sucher correction for n-electron explicitly correlated molecular Gaussian functions with shifted centers are derived and implemented. The algorithms are tested in calculations concerning the H2 molecule and applied in ground-state calculations of LiH and H3+ molecules. The implementation will significantly increase the accuracy of the calculations of potential energy surfaces of small diatomic and triatomic molecules and their rovibrational spectra.

  3. Correction

    NASA Astrophysics Data System (ADS)

    1998-12-01

    Alleged mosasaur bite marks on Late Cretaceous ammonites are limpet (patellogastropod) home scars. Geology, v. 26, p. 947-950 (October 1998). This article had the following printing errors: p. 947, Abstract, line 11, “sepia” should be “septa”; p. 947, 1st paragraph under Introduction, line 2, “creep” should be “deep”; p. 948, column 1, 2nd paragraph, line 7, “creep” should be “deep”; p. 949, column 1, 1st paragraph, line 1, “creep” should be “deep”; p. 949, column 1, 1st paragraph, line 5, “19774” should be “1977)”; p. 949, column 1, 4th paragraph, line 7, “in particular” should be “In particular”. CORRECTION: Mammalian community response to the latest Paleocene thermal maximum: An isotaphonomic study in the northern Bighorn Basin, Wyoming. Geology, v. 26, p. 1011-1014 (November 1998). An error appeared in the References Cited. The correct reference appears below: Fricke, H. C., Clyde, W. C., O'Neil, J. R., and Gingerich, P. D., 1998, Evidence for rapid climate change in North America during the latest Paleocene thermal maximum: Oxygen isotope compositions of biogenic phosphate from the Bighorn Basin (Wyoming): Earth and Planetary Science Letters, v. 160, p. 193-208.

  4. Denoising Algorithm for the Pixel-Response Non-Uniformity Correction of a Scientific CMOS Under Low Light Conditions

    NASA Astrophysics Data System (ADS)

    Hu, Changmiao; Bai, Yang; Tang, Ping

    2016-06-01

    We present a denoising algorithm for the pixel-response non-uniformity correction of a scientific complementary metal-oxide-semiconductor (CMOS) image sensor that captures images under extremely low-light conditions. By analyzing integrating-sphere experimental data, we present a pixel-by-pixel flat-field denoising algorithm to remove this fixed-pattern noise, which occurs under low-light conditions and at high pixel-response readouts. After the denoising algorithm is applied, the response of the CMOS image sensor imaging system to a uniform radiance field shows a high level of spatial uniformity.
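
    The pixel-by-pixel idea can be illustrated as a generic flat-field correction: a per-pixel gain map is estimated from averaged uniform-illumination (integrating-sphere) frames and divided out of dark-subtracted scene frames. This is a hedged sketch of the general approach rather than the paper's specific algorithm, and all frames below are synthetic.

```python
# Generic flat-field (pixel-response non-uniformity) correction sketch.
import numpy as np

rng = np.random.default_rng(11)
gain_map = rng.normal(1.0, 0.03, size=(512, 512))                 # true per-pixel response (unknown)
dark = rng.normal(5.0, 0.5, size=(512, 512))                      # dark/offset frame
flats = gain_map * rng.poisson(200, size=(32, 512, 512)) + dark   # uniform-illumination frames

flat = flats.mean(axis=0) - dark                                  # averaged, dark-subtracted flat
gain_est = flat / flat.mean()                                     # normalized per-pixel gain estimate

raw = gain_map * rng.poisson(30, size=(512, 512)) + dark          # low-light scene frame
corrected = (raw - dark) / gain_est                               # fixed-pattern noise removed
print("residual non-uniformity:", corrected.std() / corrected.mean())
```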

  5. Classifying Microorganisms.

    ERIC Educational Resources Information Center

    Baker, William P.; Leyva, Kathryn J.; Lang, Michael; Goodmanis, Ben

    2002-01-01

    Focuses on an activity in which students sample air at school and generate ideas about how to classify the microorganisms they observe. The results are used to compare air quality among schools via the Internet. Supports the development of scientific inquiry and technology skills. (DDR)

  6. Particle classifier

    SciTech Connect

    Etkin, B.

    1987-04-14

    This patent describes a classifier for particulate material comprising a housing having an inlet to receive a classifying air flow flowing in a given direction, collection means downstream of the inlet to receive material classified by the air flow, and material introduction means intermediate the inlet and the collection means to introduce particles entrained in a secondary air stream into the housing in a direction other than the given direction. The material introduction means includes a material outlet aperture in a wall of the housing extending generally perpendicular to the given direction, conveying means to convey material and the secondary air stream to the material outlet and diverting means to divert the secondary air stream to a direction generally parallel to the classifying air flow flowing in the given direction. The diverting means includes a surface extending downstream from the outlet and adjacent thereto and being dimensioned to divert the secondary airstream by a Coanda effect generally parallel to the given direction and thereby segregate the secondary air stream from the particles and permit continued movement of the particles along predictable trajectories.

  7. An Efficient Correction Algorithm for Eliminating Image Misalignment Effects on Co-Phasing Measurement Accuracy for Segmented Active Optics Systems.

    PubMed

    Yue, Dan; Xu, Shuyan; Nie, Haitao; Wang, Zongyang

    2016-01-01

    The misalignment between recorded in-focus and out-of-focus images using the Phase Diversity (PD) algorithm leads to a dramatic decline in wavefront detection accuracy and image recovery quality for segmented active optics systems. This paper demonstrates the theoretical relationship between the image misalignment and tip-tilt terms in Zernike polynomials of the wavefront phase for the first time, and an efficient two-step alignment correction algorithm is proposed to eliminate these misalignment effects. This algorithm processes a spatial 2-D cross-correlation of the misaligned images, revising the offset to 1 or 2 pixels and narrowing the search range for alignment. Then, it eliminates the need for subpixel fine alignment to achieve adaptive correction by adding additional tip-tilt terms to the Optical Transfer Function (OTF) of the out-of-focus channel. The experimental results demonstrate the feasibility and validity of the proposed correction algorithm to improve the measurement accuracy during the co-phasing of segmented mirrors. With this alignment correction, the reconstructed wavefront is more accurate, and the recovered image is of higher quality.

  8. An Efficient Correction Algorithm for Eliminating Image Misalignment Effects on Co-Phasing Measurement Accuracy for Segmented Active Optics Systems

    PubMed Central

    Yue, Dan; Xu, Shuyan; Nie, Haitao; Wang, Zongyang

    2016-01-01

    The misalignment between recorded in-focus and out-of-focus images using the Phase Diversity (PD) algorithm leads to a dramatic decline in wavefront detection accuracy and image recovery quality for segmented active optics systems. This paper demonstrates the theoretical relationship between the image misalignment and tip-tilt terms in Zernike polynomials of the wavefront phase for the first time, and an efficient two-step alignment correction algorithm is proposed to eliminate these misalignment effects. This algorithm processes a spatial 2-D cross-correlation of the misaligned images, revising the offset to 1 or 2 pixels and narrowing the search range for alignment. Then, it eliminates the need for subpixel fine alignment to achieve adaptive correction by adding additional tip-tilt terms to the Optical Transfer Function (OTF) of the out-of-focus channel. The experimental results demonstrate the feasibility and validity of the proposed correction algorithm to improve the measurement accuracy during the co-phasing of segmented mirrors. With this alignment correction, the reconstructed wavefront is more accurate, and the recovered image is of higher quality. PMID:26934045
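
    The coarse-alignment step can be illustrated with a phase-correlation variant of the 2-D cross-correlation described above: the peak of the inverse FFT of the normalized cross-power spectrum gives the integer pixel offset between the two images. The images below are synthetic, and the subsequent tip-tilt compensation in the OTF of the out-of-focus channel is not reproduced.

```python
# Integer-pixel offset estimation between two images via phase correlation.
import numpy as np

rng = np.random.default_rng(5)
ref = rng.random((128, 128))
shifted = np.roll(ref, (3, -7), axis=(0, 1))          # misaligned copy, offset (3, -7)

cross = np.fft.fft2(shifted) * np.conj(np.fft.fft2(ref))
corr = np.fft.ifft2(cross / np.abs(cross)).real       # peak sits at the offset
peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
size = np.array(ref.shape)
offset = np.where(peak > size // 2, peak - size, peak)
print("estimated offset:", offset)                    # expected [ 3 -7]

realigned = np.roll(shifted, tuple(-offset), axis=(0, 1))
```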

  9. A Robust In-Situ Warp-Correction Algorithm For VISAR Streak Camera Data at the National Ignition Facility

    SciTech Connect

    Labaria, George R.; Warrick, Abbie L.; Celliers, Peter M.; Kalantar, Daniel H.

    2015-01-12

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high-energy-density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time, affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for the use in a production environment.

  10. A robust in-situ warp-correction algorithm for VISAR streak camera data at the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Labaria, George R.; Warrick, Abbie L.; Celliers, Peter M.; Kalantar, Daniel H.

    2015-02-01

    The National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory is a 192-beam pulsed laser system for high energy density physics experiments. Sophisticated diagnostics have been designed around key performance metrics to achieve ignition. The Velocity Interferometer System for Any Reflector (VISAR) is the primary diagnostic for measuring the timing of shocks induced into an ignition capsule. The VISAR system utilizes three streak cameras; these streak cameras are inherently nonlinear and require warp corrections to remove these nonlinear effects. A detailed calibration procedure has been developed with National Security Technologies (NSTec) and applied to the camera correction analysis in production. However, the camera nonlinearities drift over time affecting the performance of this method. An in-situ fiber array is used to inject a comb of pulses to generate a calibration correction in order to meet the timing accuracy requirements of VISAR. We develop a robust algorithm for the analysis of the comb calibration images to generate the warp correction that is then applied to the data images. Our algorithm utilizes the method of thin-plate splines (TPS) to model the complex nonlinear distortions in the streak camera data. In this paper, we focus on the theory and implementation of the TPS warp-correction algorithm for the use in a production environment.
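
    A hedged sketch of the thin-plate-spline idea follows: a smooth 2-D mapping is fitted from distorted fiducial (comb) positions to their ideal positions and then applied to image coordinates. It uses SciPy's thin-plate-spline radial basis interpolator rather than the production NIF code, and all positions are synthetic.

```python
# Fit a thin-plate-spline warp from distorted fiducial positions to ideal ones.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)
ideal = np.stack(np.meshgrid(np.linspace(0, 1000, 11),
                             np.linspace(0, 1000, 11)), axis=-1).reshape(-1, 2)
# Synthetic nonlinear camera distortion applied to the fiducial positions
distorted = ideal + 5.0 * np.sin(ideal / 150.0) + rng.normal(0, 0.2, ideal.shape)

warp = RBFInterpolator(distorted, ideal, kernel="thin_plate_spline")

pts = rng.uniform(0, 1000, size=(5, 2))   # arbitrary (distorted) image coordinates
print(warp(pts))                          # corresponding corrected coordinates
```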

  11. Algorithms and applications of aberration correction and American standard-based digital evaluation in surface defects evaluating system

    NASA Astrophysics Data System (ADS)

    Wu, Fan; Cao, Pin; Yang, Yongying; Li, Chen; Chai, Huiting; Zhang, Yihui; Xiong, Haoliang; Xu, Wenlin; Yan, Kai; Zhou, Lin; Liu, Dong; Bai, Jian; Shen, Yibing

    2016-11-01

    The inspection of surface defects is a significant part of optical surface quality evaluation. Based on microscopic scattering dark-field imaging, sub-aperture scanning and stitching, the Surface Defects Evaluating System (SDES) can acquire a full-aperture image of defects on an optical element's surface and then extract geometric size and position information of the defects with image processing such as feature recognition. However, optical distortion in the SDES badly affects the inspection precision for surface defects. In this paper, a distortion correction algorithm based on a standard lattice pattern is proposed. Feature extraction, polynomial fitting and bilinear interpolation, in combination with adjacent sub-aperture stitching, are employed to correct the optical distortion of the SDES automatically and with high accuracy. Subsequently, in order to evaluate surface defects digitally against the American military standard MIL-PRF-13830B, using the defect information obtained from the SDES, an American standard-based digital evaluation algorithm is proposed, which mainly includes a method for judging surface defect concentration. The judgment method establishes a weight region for each defect and uses the overlap of weight regions to calculate defect concentration. This algorithm takes full advantage of the convenience of matrix operations and has the merits of low complexity and fast execution, which make it well suited for high-efficiency inspection of surface defects. Finally, various experiments are conducted and the correctness of these algorithms is verified. At present, these algorithms are in use in the SDES.

  12. Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the ‘Extreme Learning Machine’ Algorithm

    PubMed Central

    McDonnell, Mark D.; Tissera, Migel D.; Vladusich, Tony; van Schaik, André; Tapson, Jonathan

    2015-01-01

    Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the ‘Extreme Learning Machine’ (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden-unit operates only on a randomly sized and positioned patch of each image. This form of random ‘receptive field’ sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden-units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems. PMID:26262687
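
    The core ELM step described above (fixed random input weights followed by a single linear solve for the output weights) can be sketched in a few lines. The data here are synthetic stand-ins for MNIST, and the paper's receptive-field sparsification of the input weights is not reproduced.

```python
# Minimal ELM sketch: random hidden layer, least-squares output weights.
import numpy as np

rng = np.random.default_rng(0)
n, d, h, classes = 2000, 784, 400, 10
X = rng.normal(size=(n, d))                          # synthetic "images"
y = rng.integers(0, classes, size=n)                 # synthetic labels
T = np.eye(classes)[y]                               # one-hot targets

W_in = rng.normal(scale=1.0 / np.sqrt(d), size=(d, h))   # random, never trained
H = np.tanh(X @ W_in)                                # hidden-layer activations
W_out, *_ = np.linalg.lstsq(H, T, rcond=None)        # single linear solve

pred = np.argmax(H @ W_out, axis=1)
print("training accuracy:", (pred == y).mean())
```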

  13. A kurtosis-based wavelet algorithm for motion artifact correction of fNIRS data

    PubMed Central

    Chiarelli, Antonio M.; Maclin, Edward L.; Fabiani, Monica; Gratton, Gabriele

    2015-01-01

    Movements are a major source of artifacts in functional Near-Infrared Spectroscopy (fNIRS). Several algorithms have been developed for motion artifact correction of fNIRS data, including Principal Component Analysis (PCA), targeted Principal Component Analysis (tPCA), Spline Interpolation (SI), and Wavelet Filtering (WF). WF is based on removing wavelets with coefficients deemed to be outliers based on their standardized scores, and it has proven to be effective on both synthesized and real data. However, when the SNR is high, it can lead to a reduction of signal amplitude. This may occur because standardized scores inherently adapt to the noise level, independently of the shape of the distribution of the wavelet coefficients. Higher-order moments of the wavelet coefficient distribution may provide a more diagnostic index of wavelet distribution abnormality than its variance. Here we introduce a new procedure that relies on eliminating wavelets that contribute to generate a large fourth-moment (i.e., kurtosis) of the coefficient distribution to define “outlier” wavelets (kurtosis-based Wavelet Filtering, kbWF). We tested kbWF by comparing it with other existing procedures, using simulated functional hemodynamic responses added to real resting-state fNIRS recordings. These simulations show that kbWF is highly effective in eliminating transient noise, yielding results with higher SNR than other existing methods over a wide range of signal and noise amplitudes. This is because: (1) the procedure is iterative; and (2) kurtosis is more diagnostic than variance in identifying outliers. However, kbWF does not eliminate slow components of artifacts whose duration is comparable to the total recording time. PMID:25747916

  14. Ground based measurements on reflectance towards validating atmospheric correction algorithms on IRS-P6 AWiFS data

    NASA Astrophysics Data System (ADS)

    Rani Sharma, Anu; Kharol, Shailesh Kumar; Kvs, Badarinath; Roy, P. S.

    In Earth observation, the atmosphere has a non-negligible influence on the visible and infrared radiation, strong enough to modify the reflected electromagnetic signal and the at-target reflectance. Scattering of solar irradiance by atmospheric molecules and aerosol generates path radiance, which increases the apparent surface reflectance over dark surfaces, while absorption by aerosols and other molecules in the atmosphere causes a loss of brightness in the scene as recorded by the satellite sensor. In order to derive precise surface reflectance from satellite image data, it is indispensable to apply an atmospheric correction that removes the effects of molecular and aerosol scattering. In the present study, we have implemented a fast atmospheric correction algorithm for IRS-P6 AWiFS satellite data which can effectively retrieve surface reflectance under different atmospheric and surface conditions. The algorithm is based on MODIS climatology products and a simplified use of the Second Simulation of Satellite Signal in the Solar Spectrum (6S) radiative transfer code, which is used to generate look-up tables (LUTs). The algorithm requires information on aerosol optical depth for correcting the satellite dataset. The proposed method is simple and easy to implement for estimating surface reflectance from the at-sensor recorded signal on a per-pixel basis. The atmospheric correction algorithm has been tested on different IRS-P6 AWiFS false color composites (FCCs) covering the ICRISAT Farm, Patancheru, Hyderabad, India, under varying atmospheric conditions. Ground measurements of surface reflectance representing different land use/land cover types, i.e., red soil, chickpea crop, groundnut crop and pigeon pea crop, were conducted to validate the algorithm, and a very good match was found between measured surface reflectance and atmospherically corrected reflectance for all spectral bands. Further, we aggregated all datasets together and compared the retrieved AWiFS reflectance with

  15. Stack emission monitoring using non-dispersive infrared with optimized nonlinear absorption cross-interference correction algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Y.-W.; Liu, C.; Chan, K.-L.; Xie, P.-H.; Liu, W.-Q.; Zeng, Y.; Wang, S.-M.; Huang, S.-H.; Chen, J.; Wang, Y.-P.; Si, F.-Q.

    2013-02-01

    In this paper, we present an optimized analysis algorithm for non-dispersive infrared (NDIR) monitoring of stack emissions. The newly developed analysis algorithm simultaneously compensates for nonlinear absorption and cross-interference between different gases. We present a mathematical derivation of the measurement error caused by variations in interference coefficients when nonlinear absorption occurs. The optimized algorithm is derived from a classical one and uses interference functions to quantify cross-interference. The interference functions vary proportionally with the nonlinear absorption; thus, interference coefficients among different gases can be modeled by the interference functions whether the gases exhibit linear or nonlinear absorption. In this study, the simultaneous analysis of two components (CO2 and CO) serves as an example for the validation of the optimized algorithm. The interference functions in this case can be obtained by least-squares fitting with third-order polynomials. Experiments show that the results of cross-interference correction are improved significantly by utilizing the fitted interference functions when nonlinear absorption occurs. The dynamic measurement ranges of CO2 and CO are improved by factors of about 1.8 and 3.5, respectively. A commercial NDIR multi-gas analyzer with high accuracy was used to validate the CO and CO2 measurements derived from the NDIR analyzer prototype in which the new cross-interference correction algorithm was embedded. The two measurements agreed well.
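
    The interference-function idea can be sketched as follows: the spurious signal that one gas induces on another gas's channel is modeled as a third-order polynomial of the interfering channel's absorbance, fitted by least squares from single-gas calibration data, and then subtracted from mixed-sample measurements. The calibration data below are synthetic, and the sketch illustrates the approach rather than the instrument's embedded algorithm.

```python
# Fit a third-order interference function f(a_CO2) and use it to correct the
# CO channel of a mixed sample; calibration data are synthetic.
import numpy as np

rng = np.random.default_rng(4)
a_co2 = np.linspace(0.0, 1.5, 30)                      # CO2-channel absorbance (CO2-only samples)
interf_on_co = (0.02 * a_co2 + 0.015 * a_co2**2 - 0.004 * a_co2**3
                + rng.normal(0, 5e-4, a_co2.size))     # spurious CO-channel signal

coeffs = np.polyfit(a_co2, interf_on_co, deg=3)        # interference function f(a_CO2)

a_co2_meas, a_co_meas = 0.9, 0.120                     # mixed-sample measurement
a_co_corrected = a_co_meas - np.polyval(coeffs, a_co2_meas)
print("corrected CO absorbance:", round(a_co_corrected, 4))
```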

  16. Experimental Validation of Advanced Dispersed Fringe Sensing (ADFS) Algorithm Using Advanced Wavefront Sensing and Correction Testbed (AWCT)

    NASA Technical Reports Server (NTRS)

    Wang, Xu; Shi, Fang; Sigrist, Norbert; Seo, Byoung-Joon; Tang, Hong; Bikkannavar, Siddarayappa; Basinger, Scott; Lay, Oliver

    2012-01-01

    Large-aperture telescopes commonly feature segmented mirrors, and a coarse phasing step is needed to bring these individual segments into the fine-phasing capture range. Dispersed Fringe Sensing (DFS) is a powerful coarse phasing technique, and a variant of it is currently being used for JWST. An Advanced Dispersed Fringe Sensing (ADFS) algorithm has recently been developed to improve the performance and robustness of previous DFS algorithms, with better accuracy and a unique solution. The first part of the paper introduces the basic ideas and the essential features of the ADFS algorithm and presents some algorithm sensitivity study results. The second part of the paper describes the full details of the algorithm validation process on the Advanced Wavefront Sensing and Correction Testbed (AWCT): first, the optimization of the DFS hardware of the AWCT to ensure data accuracy and reliability is illustrated. Then, a few carefully designed algorithm validation experiments are implemented, and the corresponding data analysis results are shown. Finally, the fiducial calibration using the Range-Gate-Metrology technique is carried out, and a <10 nm or <1% algorithm accuracy is demonstrated.

  17. Correction algorithm for online continuous flow δ13C and δ18O carbonate and cellulose stable isotope analyses

    NASA Astrophysics Data System (ADS)

    Evans, M. N.; Selmer, K. J.; Breeden, B. T.; Lopatka, A. S.; Plummer, R. E.

    2016-09-01

    We describe an algorithm to correct for scale compression, runtime drift, and amplitude effects in carbonate and cellulose oxygen and carbon isotopic analyses made on two online continuous flow isotope ratio mass spectrometry (CF-IRMS) systems using gas chromatographic (GC) separation. We validate the algorithm by correcting measurements of samples of known isotopic composition which are not used to estimate the corrections. For carbonate δ13C (δ18O) data, median precision of validation estimates for two reference materials and two calibrated working standards is 0.05‰ (0.07‰); median bias is 0.04‰ (0.02‰) over a range of 49.2‰ (24.3‰). For α-cellulose δ13C (δ18O) data, median precision of validation estimates for one reference material and five working standards is 0.11‰ (0.27‰); median bias is 0.13‰ (-0.10‰) over a range of 16.1‰ (19.1‰). These results are within the 5th-95th percentile range of subsequent routine runtime validation exercises in which one working standard is used to calibrate the other. Analysis of the relative importance of correction steps suggests that drift and scale-compression corrections are most reliable and valuable. If validation precisions are not already small, routine cross-validated precision estimates are improved by up to 50% (80%). The results suggest that correction for systematic error may enable these particular CF-IRMS systems to produce δ13C and δ18O carbonate and cellulose isotopic analyses with higher validated precision, accuracy, and throughput than is typically reported for these systems. The correction scheme may be used in support of replication-intensive research projects in paleoclimatology and other data-intensive applications within the geosciences.
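
    One of the correction steps highlighted as most valuable is runtime drift correction. The sketch below illustrates the general idea under simple assumptions: working standards interspersed through the run are used to fit a linear drift versus analysis order, which is then removed from every analysis. The data, the purely linear drift model and the variable names are assumptions for illustration, not the authors' full scheme.

      import numpy as np

      # Hypothetical run: analysis order, measured delta values (per mil), standard flags.
      order    = np.arange(10)
      measured = np.array([-10.1, -9.8, 2.3, -9.5, 1.9, -9.2, 2.6, -8.9, 2.1, -8.7])
      is_std   = np.array([1, 1, 0, 1, 0, 1, 0, 1, 0, 1], dtype=bool)
      std_true = -10.0   # accepted value of the working standard

      # Fit drift (per analysis) from the standards' deviation from their accepted value.
      slope, intercept = np.polyfit(order[is_std], measured[is_std] - std_true, deg=1)

      # Remove the fitted drift from every analysis in the run.
      corrected = measured - (slope * order + intercept)
      print(corrected)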

  18. A parallel algorithm for error correction in high-throughput short-read data on CUDA-enabled graphics hardware.

    PubMed

    Shi, Haixiang; Schmidt, Bertil; Liu, Weiguo; Müller-Wittig, Wolfgang

    2010-04-01

    Emerging DNA sequencing technologies open up exciting new opportunities for genome sequencing by generating read data with a massive throughput. However, produced reads are significantly shorter and more error-prone compared to the traditional Sanger shotgun sequencing method. This poses challenges for de novo DNA fragment assembly algorithms in terms of both accuracy (to deal with short, error-prone reads) and scalability (to deal with very large input data sets). In this article, we present a scalable parallel algorithm for correcting sequencing errors in high-throughput short-read data so that error-free reads can be available before DNA fragment assembly, which is of high importance to many graph-based short-read assembly tools. The algorithm is based on spectral alignment and uses the Compute Unified Device Architecture (CUDA) programming model. To gain efficiency we are taking advantage of the CUDA texture memory using a space-efficient Bloom filter data structure for spectrum membership queries. We have tested the runtime and accuracy of our algorithm using real and simulated Illumina data for different read lengths, error rates, input sizes, and algorithmic parameters. Using a CUDA-enabled mass-produced GPU (available for less than US$400 at any local computer outlet), this results in speedups of 12-84 times for the parallelized error correction, and speedups of 3-63 times for both sequential preprocessing and parallelized error correction compared to the publicly available Euler-SR program. Our implementation is freely available for download from http://cuda-ec.sourceforge.net .
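
    The abstract notes that spectrum membership queries are served by a space-efficient Bloom filter held in CUDA texture memory. The CPU-side sketch below shows only the data-structure idea: trusted k-mers are inserted, and a read's k-mers are tested for membership. Hash choices, sizes and the tiny k-mer set are assumptions for illustration, not the published CUDA code.

      import hashlib

      class BloomFilter:
          def __init__(self, n_bits=1 << 20, n_hashes=4):
              self.n_bits, self.n_hashes = n_bits, n_hashes
              self.bits = bytearray(n_bits // 8)

          def _positions(self, item):
              for i in range(self.n_hashes):
                  h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                  yield int.from_bytes(h[:8], "big") % self.n_bits

          def add(self, item):
              for p in self._positions(item):
                  self.bits[p // 8] |= 1 << (p % 8)

          def __contains__(self, item):
              return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

      # Build the trusted k-mer spectrum, then query k-mers from a read.
      spectrum = BloomFilter()
      for kmer in ("ACGTACGT", "CGTACGTT", "GTACGTTA"):
          spectrum.add(kmer)
      print("ACGTACGT" in spectrum, "TTTTTTTT" in spectrum)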

  19. A correction scheme for a simplified analytical random walk model algorithm of proton dose calculation in distal Bragg peak regions

    NASA Astrophysics Data System (ADS)

    Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.

    2016-10-01

    The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as the pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed primary proton fluence near heterogeneous media and propose to calculate the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all the lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water and by evaluating distributions in example patients with a head-and-neck tumor and metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm but is dependent upon patient geometry.

  20. A correction algorithm to simultaneously control dual deformable mirrors in a woofer-tweeter adaptive optics system

    PubMed Central

    Li, Chaohong; Sredar, Nripun; Ivers, Kevin M.; Queener, Hope; Porter, Jason

    2010-01-01

    We present a direct slope-based correction algorithm to simultaneously control two deformable mirrors (DMs) in a woofer-tweeter adaptive optics system. A global response matrix was derived from the response matrices of each deformable mirror and the voltages for both deformable mirrors were calculated simultaneously. This control algorithm was tested and compared with a 2-step sequential control method in five normal human eyes using an adaptive optics scanning laser ophthalmoscope. The mean residual total root-mean-square (RMS) wavefront errors across subjects after adaptive optics (AO) correction were 0.128 ± 0.025 μm and 0.107 ± 0.033 μm for simultaneous and 2-step control, respectively (7.75-mm pupil). The mean intensity of reflectance images acquired after AO convergence was slightly higher for 2-step control. Radially-averaged power spectra calculated from registered reflectance images were nearly identical for all subjects using simultaneous or 2-step control. The correction performance of our new simultaneous dual DM control algorithm is comparable to 2-step control, but is more efficient. This method can be applied to any woofer-tweeter AO system. PMID:20721058
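
    The core of the simultaneous control idea is stacking the two mirrors' slope response matrices into one global matrix and solving for both voltage vectors in a single least-squares step. A minimal sketch follows; the matrix sizes and random placeholder data are assumptions, not the instrument's calibration, and a real controller would regularize or filter the inversion.

      import numpy as np

      n_slopes, n_woofer, n_tweeter = 200, 52, 140
      rng = np.random.default_rng(0)
      R_woofer  = rng.normal(size=(n_slopes, n_woofer))    # slope response of DM 1 (placeholder)
      R_tweeter = rng.normal(size=(n_slopes, n_tweeter))   # slope response of DM 2 (placeholder)

      # Global response matrix: columns for both mirrors side by side.
      R_global = np.hstack([R_woofer, R_tweeter])

      slopes = rng.normal(size=n_slopes)                   # wavefront sensor slope measurements
      voltages, *_ = np.linalg.lstsq(R_global, slopes, rcond=None)

      v_woofer, v_tweeter = voltages[:n_woofer], voltages[n_woofer:]
      print(v_woofer.shape, v_tweeter.shape)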

  1. Implementation of the near-field signal redundancy phase-aberration correction algorithm on two-dimensional arrays.

    PubMed

    Li, Yue; Robinson, Brent

    2007-01-01

    Near-field signal-redundancy (NFSR) algorithms for phase-aberration correction have been proposed and experimentally tested for linear and phased one-dimensional arrays. In this paper the performance of an all-row-plus-two-column, two-dimensional algorithm has been analyzed and tested with simulated data sets. This algorithm applies the NFSR algorithm for one-dimensional arrays to all the rows as well as the first and last columns of the array. The results from the two column measurements are used to derive a linear term for each row measurement result. These linear terms then are incorporated into the row results to obtain a two-dimensional phase aberration profile. The ambiguity phase aberration profile, which is the difference between the true and the derived phase aberration profiles, of this algorithm is not linear. Two methods, a trial-and-error method and a diagonal-measurement method, are proposed to linearize the ambiguity profile. The performance of these algorithms is analyzed and tested with simulated data sets.

  2. Atmospheric Correction, Vicarious Calibration and Development of Algorithms for Quantifying Cyanobacteria Blooms from Oceansat-1 OCM Satellite Data

    NASA Astrophysics Data System (ADS)

    Dash, P.; Walker, N. D.; Mishra, D. R.; Hu, C.; D'Sa, E. J.; Pinckney, J. L.

    2011-12-01

    Cyanobacteria represent a major harmful algal group in fresh to brackish water environments. Lac des Allemands, a freshwater lake located southwest of New Orleans, Louisiana on the upper end of the Barataria Estuary, provides a natural laboratory for remote characterization of cyanobacteria blooms because of their seasonal occurrence. The Ocean Colour Monitor (OCM) sensor provides radiance measurements similar to SeaWiFS but with higher spatial resolution. However, OCM does not have a standard atmospheric correction procedure, and it is difficult to find a detailed description of the entire atmospheric correction procedure for ocean (or lake) in one place. Atmospheric correction of satellite data over small lakes and estuaries (Case 2 waters) is also challenging due to difficulties in estimation of aerosol scattering accurately in these areas. Therefore, an atmospheric correction procedure was written for processing OCM data, based on the extensive work done for SeaWiFS. Since OCM-retrieved radiances were abnormally low in the blue wavelength region, a vicarious calibration procedure was also developed. Empirical inversion algorithms were developed to convert the OCM remote sensing reflectance (Rrs) at bands centered at 510.6 and 556.4 nm to concentrations of phycocyanin (PC), the primary cyanobacterial pigment. A holistic approach was followed to minimize the influence of other optically active constituents on the PC algorithm. Similarly, empirical algorithms to estimate chlorophyll a (Chl a) concentrations were developed using OCM bands centered at 556.4 and 669 nm. The best PC algorithm (R2=0.7450, p<0.0001, n=72) yielded a root mean square error (RMSE) of 36.92 μg/L with a relative RMSE of 10.27% (PC from 2.75-363.50 μg/L, n=48). The best algorithm for Chl a (R2=0.7510, p<0.0001, n=72) produced an RMSE of 31.19 μg/L with a relative RMSE of 16.56% (Chl a from 9.46-212.76 μg/L, n=48). While more field data are required to further validate the long
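
    The phycocyanin retrieval described above is an empirical inversion of a band ratio. The sketch below shows the general form of such an algorithm, assuming a log-log regression of PC against the Rrs(556.4)/Rrs(510.6) ratio; the match-up values and fitted coefficients are invented for illustration and are not the published algorithm.

      import numpy as np

      # Hypothetical match-up data: Rrs at the two OCM bands and measured PC (ug/L).
      rrs_510 = np.array([0.0042, 0.0051, 0.0038, 0.0060])
      rrs_556 = np.array([0.0075, 0.0080, 0.0090, 0.0068])
      pc      = np.array([45.0, 30.0, 120.0, 12.0])

      # Fit log10(PC) as a linear function of the log10 band ratio.
      a, b = np.polyfit(np.log10(rrs_556 / rrs_510), np.log10(pc), deg=1)

      def estimate_pc(r510, r556):
          return 10 ** (a * np.log10(r556 / r510) + b)

      print(estimate_pc(0.0045, 0.0082))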

  3. Classifying Human Leg Motions with Uniaxial Piezoelectric Gyroscopes

    PubMed Central

    Tunçel, Orkun; Altun, Kerem; Barshan, Billur

    2009-01-01

    This paper provides a comparative study on the different techniques of classifying human leg motions that are performed using two low-cost uniaxial piezoelectric gyroscopes worn on the leg. A number of feature sets, extracted from the raw inertial sensor data in different ways, are used in the classification process. The classification techniques implemented and compared in this study are: Bayesian decision making (BDM), a rule-based algorithm (RBA) or decision tree, least-squares method (LSM), k-nearest neighbor algorithm (k-NN), dynamic time warping (DTW), support vector machines (SVM), and artificial neural networks (ANN). A performance comparison of these classification techniques is provided in terms of their correct differentiation rates, confusion matrices, computational cost, and training and storage requirements. Three different cross-validation techniques are employed to validate the classifiers. The results indicate that BDM, in general, results in the highest correct classification rate with relatively small computational cost. PMID:22291521

  4. Classifying human leg motions with uniaxial piezoelectric gyroscopes.

    PubMed

    Tunçel, Orkun; Altun, Kerem; Barshan, Billur

    2009-01-01

    This paper provides a comparative study on the different techniques of classifying human leg motions that are performed using two low-cost uniaxial piezoelectric gyroscopes worn on the leg. A number of feature sets, extracted from the raw inertial sensor data in different ways, are used in the classification process. The classification techniques implemented and compared in this study are: Bayesian decision making (BDM), a rule-based algorithm (RBA) or decision tree, least-squares method (LSM), k-nearest neighbor algorithm (k-NN), dynamic time warping (DTW), support vector machines (SVM), and artificial neural networks (ANN). A performance comparison of these classification techniques is provided in terms of their correct differentiation rates, confusion matrices, computational cost, and training and storage requirements. Three different cross-validation techniques are employed to validate the classifiers. The results indicate that BDM, in general, results in the highest correct classification rate with relatively small computational cost.

  5. Spectrum correction algorithm for detectors in airborne radioactivity monitoring equipment NH-UAV based on a ratio processing method

    NASA Astrophysics Data System (ADS)

    Cao, Ye; Tang, Xiao-Bin; Wang, Peng; Meng, Jia; Huang, Xi; Wen, Liang-Sheng; Chen, Da

    2015-10-01

    The unmanned aerial vehicle (UAV) radiation monitoring method plays an important role in nuclear accident emergencies. In this research, a spectrum correction algorithm for the UAV airborne radioactivity monitoring equipment NH-UAV was studied to measure radioactive nuclides within a small area in real time and at a fixed location. The simulation spectra of the high-purity germanium (HPGe) detector and the lanthanum bromide (LaBr3) detector in the equipment were obtained using the Monte Carlo technique. Spectrum correction coefficients were calculated by applying ratio processing to the net peak areas measured by the two detectors, taking the detection spectrum of the HPGe detector as the accuracy reference for the detection spectrum of the LaBr3 detector. The relationship between the spectrum correction coefficient and the size of the source term was also investigated. A good linear relation exists between the spectrum correction coefficient and the corresponding energy (R2=0.9765). The maximum relative deviation from the real condition was reduced from 1.65 to 0.035. The spectrum correction method was verified as feasible.

  6. Stack emission monitoring using non-dispersive infrared spectroscopy with an optimized nonlinear absorption cross interference correction algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Y. W.; Liu, C.; Chan, K. L.; Xie, P. H.; Liu, W. Q.; Zeng, Y.; Wang, S. M.; Huang, S. H.; Chen, J.; Wang, Y. P.; Si, F. Q.

    2013-08-01

    In this paper, we present an optimized analysis algorithm for non-dispersive infrared (NDIR) to monitor stack emissions in situ. The proposed algorithm simultaneously compensates for nonlinear absorption and cross interference among different gases. We present a mathematical derivation for the measurement error caused by variations in interference coefficients when nonlinear absorption occurs. The proposed algorithm is derived from a classical one and uses interference functions to quantify cross interference. The interference functions vary proportionally with the nonlinear absorption. Thus, interference coefficients among different gases can be modeled by the interference functions whether gases are characterized by linear or nonlinear absorption. In this study, the simultaneous analysis of two components (CO2 and CO) serves as an example for the validation of the proposed algorithm. The interference functions in this case can be obtained by least-squares fitting with third-order polynomials. Experiments show that the results of cross interference correction are improved significantly by utilizing the fitted interference functions when nonlinear absorptions occur. The dynamic measurement ranges of CO2 and CO are improved by about a factor of 1.8 and 3.5, respectively. A commercial analyzer with high accuracy was used to validate the CO and CO2 measurements derived from the NDIR analyzer prototype in which the new algorithm was embedded. The comparison of the two analyzers shows that the prototype works well in both the linear and nonlinear ranges.

  7. Proving Correctness of a Controller Algorithm for the RAID Level 5 System

    DTIC Science & Technology

    1998-03-01

    To appear in the Proceedings of the International Symposium on Fault-Tolerant Computing, 1998. ... As a first step towards building such a tool, our approach consists of studying several controller algorithms manually, to determine the key properties ... the validity of the controller algorithm obtained. However, the latter task may be

  8. Distributed processing (DP) based e-beam lithography simulation with long range correction algorithm in e-beam machine

    NASA Astrophysics Data System (ADS)

    Ki, Won-Tai; Choi, Ji-Hyeon; Kim, Byung-Gook; Woo, Sang-Gyun; Cho, Han-Ku

    2008-05-01

    As wafer-process design rules shrink below the 50 nm node, the specification of CDs on a mask becomes increasingly tight. Therefore, tighter and more accurate e-beam lithography simulation is now highly required. In most e-beam simulation cases, however, there is a trade-off between accuracy and simulation speed. Moreover, the need for full-chip simulation has been increasing in order to estimate mask CDs more accurately under real process conditions. Without considering the long-range correction algorithms of the e-beam machine, such as fogging-effect and loading-effect correction, full-chip simulation would be impossible and meaningless. In this paper, we introduce a method to overcome these obstacles of e-beam simulation. Our in-house e-beam simulator, ELIS (E-beam LIthography Simulator), has been upgraded to solve these problems. First, a distributed processing (DP) strategy was applied to improve calculation speed. Secondly, the long-range correction algorithm of the e-beam machine was also applied to compute the exposure intensity over the full chip (mask). Finally, ELIS-DP was evaluated for its ability to predict and analyze CDs on a full-chip basis.

  9. Fast characterization of line-end shortening and application of novel correction algorithms in e-beam direct write

    NASA Astrophysics Data System (ADS)

    Freitag, Martin; Choi, Kang-Hoon; Gutsch, Manuela; Hohle, Christoph; Galler, Reinhard; Krüger, Michael; Weidenmueller, Ulf

    2011-04-01

    For the manufacturing of semiconductor technologies following the ITRS roadmap, we will face nodes well below 32 nm half pitch in the next 2-3 years. Although the required resolution can now be achieved with electron beam direct write variable shaped beam (EBDW VSB) equipment and resists, it becomes critical to reproduce dense line/space patterns precisely on a wafer. The exposed pattern must meet the targets from the layout in both dimensions (horizontally and vertically). For instance, the end of a line must be printed over its entire length to allow a later-placed contact to land on it. Up to now, the control of printed patterns such as line ends has been achieved by proximity effect correction (PEC), which is mostly based on dose modulation. This investigation of line end shortening (LES) includes multiple novel approaches, among them an additional geometrical correction, to push the limits of the available data preparation algorithms and of the measurement. The designed LES test patterns, which aim to characterize the status of LES in a quick and easy way, were exposed and measured at Fraunhofer Center Nanoelectronic Technologies (CNT) using its state-of-the-art electron beam direct writer and CD-SEM. Simulation and exposure results with the novel LES correction algorithms, applied to the test pattern and to a large production-like pattern in the range of our target CDs for dense line/space features smaller than 40 nm, will be shown.

  10. Evaluation of Residual Static Corrections by Hybrid Genetic Algorithm Steepest Ascent Autostatics Inversion.Application southern Algerian fields

    NASA Astrophysics Data System (ADS)

    Eladj, Said; bansir, fateh; ouadfeul, sid Ali

    2016-04-01

    The application of a genetic algorithm starts with an initial population of chromosomes representing a "model space". Chromosome chains are preferentially reproduced based on their fitness relative to the total population, so a good chromosome has a greater chance of producing offspring than other chromosomes in the population. The advantage of the HGA/SAA combination is the use of a global search approach on a large population of local maxima, which significantly improves the performance of the method. To define the parameters of the Hybrid Genetic Algorithm Steepest Ascent Autostatics (HGA/SAA) job, we first evaluated, by testing the "Steepest Ascent" stage, the optimal parameters related to the data used. 1- The number of hill-climbing iterations is 40; this parameter defines the contribution of the SA algorithm within the hybrid approach. 2- The minimum eigenvalue for SA is 0.8; this is linked to the data quality and the S/N ratio. To assess the performance of hybrid genetic algorithms in the inversion for estimating residual static corrections, tests were performed to determine the number of generations for HGA/SAA. Using the values of residual static corrections already calculated by the SAA and CSAA approaches for learning proved very effective in building the cross-correlation table. To determine the optimal number of generations, we conducted a series of tests ranging from 10 to 200 generations. The application to real seismic data from southern Algeria allowed us to judge the performance and capacity of the inversion with this hybrid method HGA/SAA. This experience clarified the influence of the quality of the corrections estimated from SAA/CSAA and the optimal number of generations of the hybrid genetic algorithm (HGA) required for satisfactory performance. Twenty (20) generations were enough to improve the continuity and resolution of seismic horizons. This will allow

  11. ProRaman: a program to classify Raman spectra.

    PubMed

    de Paula, Alderico Rodrigues; Silveira, Landulfo; Pacheco, Marcos Tadeu Tavares

    2009-06-01

    The program ProRaman, developed for the Matlab platform, provides an interactive and flexible graphic interface to develop efficient algorithms to classify Raman spectra into two or three different classes. A set of preprocessing algorithms to decrease the variable dimensionality and to extract the main features which improve the correct classification ratio was implemented. The implemented classification algorithms were based on the Mahalanobis distance and neural network. To verify the functionality of the developed program, 72 spectra from human artery samples, 36 of which had been histopathologically diagnosed as non-diseased and 36 as having an atherosclerotic lesion, were processed using a combination of different preprocessing and classification techniques. The best result was accomplished when the variables were selected from the Raman spectrum shift range from 1200 to 1700 cm(-1), then preprocessed using wavelets for compression and principal component analysis for feature extraction and, finally, classified by a multilayer perceptron with one hidden layer with eight neurons.
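
    The best-performing pipeline reported above restricts the spectra to 1200-1700 cm(-1), extracts features, and classifies them with a one-hidden-layer perceptron of eight neurons. The sketch below reproduces that shape of pipeline with scikit-learn on random placeholder spectra; the wavelet-compression step is omitted and the number of principal components is an assumption, so this is an illustration of the approach rather than ProRaman itself.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neural_network import MLPClassifier
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(1)
      wavenumbers = np.linspace(400, 1800, 700)
      spectra = rng.normal(size=(72, 700))            # 72 placeholder Raman spectra
      labels  = np.repeat([0, 1], 36)                 # non-diseased vs. atherosclerotic

      # Keep only the 1200-1700 cm-1 fingerprint region.
      mask = (wavenumbers >= 1200) & (wavenumbers <= 1700)
      X = spectra[:, mask]

      model = make_pipeline(PCA(n_components=10),
                            MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000))
      model.fit(X, labels)
      print(model.score(X, labels))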

  12. A graphics processing unit accelerated motion correction algorithm and modular system for real-time fMRI.

    PubMed

    Scheinost, Dustin; Hampson, Michelle; Qiu, Maolin; Bhawnani, Jitendra; Constable, R Todd; Papademetris, Xenophon

    2013-07-01

    Real-time functional magnetic resonance imaging (rt-fMRI) has recently gained interest as a possible means to facilitate the learning of certain behaviors. However, rt-fMRI is limited by processing speed and available software, and continued development is needed for rt-fMRI to progress further and become feasible for clinical use. In this work, we present an open-source rt-fMRI system for biofeedback powered by a novel Graphics Processing Unit (GPU) accelerated motion correction strategy as part of the BioImage Suite project ( www.bioimagesuite.org ). Our system contributes to the development of rt-fMRI by presenting a motion correction algorithm that provides an estimate of motion with essentially no processing delay as well as a modular rt-fMRI system design. Using empirical data from rt-fMRI scans, we assessed the quality of motion correction in this new system. The present algorithm performed comparably to standard (non real-time) offline methods and outperformed other real-time methods based on zero order interpolation of motion parameters. The modular approach to the rt-fMRI system allows the system to be flexible to the experiment and feedback design, a valuable feature for many applications. We illustrate the flexibility of the system by describing several of our ongoing studies. Our hope is that continuing development of open-source rt-fMRI algorithms and software will make this new technology more accessible and adaptable, and will thereby accelerate its application in the clinical and cognitive neurosciences.

  13. The design of the control algorithm for corrective manufacturing of 5 axis machining centre

    NASA Astrophysics Data System (ADS)

    Beneš, J.; Procháska, F.; Matoušek, O.

    2016-11-01

    The work deals with the creation of correction data when generating spherical and aspherical surfaces. Generation is performed on a converted 5-axis milling machine, for which control programs must be generated. Random errors may arise in the process of generating the surfaces; hence the need to measure the workpieces and correct the errors. The work therefore addresses measurement of the generated surface on a Mitutoyo LEGEX 744 coordinate measuring machine and proposes methods of processing the data using an nth-order polynomial. The measured data are processed in Matlab, specifically the CFTool module. The method is further tested and the experiment subsequently evaluated.
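
    The data-processing step amounts to fitting an nth-order polynomial to the measured surface deviation and evaluating it as correction data for the next machining pass. A minimal sketch is shown below; the measurement values and the choice of a fourth-order fit are assumptions, and the authors use MATLAB's CFTool rather than this code.

      import numpy as np

      radius_mm = np.linspace(0, 40, 9)                                       # measurement positions
      error_um  = np.array([0.0, 0.4, 1.1, 1.8, 2.1, 1.9, 1.2, 0.5, -0.3])    # measured deviation

      # nth-order polynomial fit of the deviation (n = 4 here).
      correction = np.poly1d(np.polyfit(radius_mm, error_um, deg=4))

      # Correction value to apply at an arbitrary radius on the next pass.
      print(correction(25.0))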

  14. A distortion correction algorithm for fish-eye panoramic image of master-slave visual surveillance system

    NASA Astrophysics Data System (ADS)

    Zuo, Chenglin; Liu, Yu; Li, Yongle; Xu, Wei; Zhang, Maojun

    2013-09-01

    A master-slave visual surveillance system is composed of one fish-eye panoramic camera and one dynamic pan-tilt-zoom (PTZ) dome camera. In order to allow the PTZ dome camera to zoom into all targets of interest in the panoramic image, the fish-eye panoramic camera is fixed inclined towards the gravity direction, which may cause more obvious distortion. This paper proposes a novel method for distortion correction of the captured panoramic image based on the midpoint circle algorithm (MCA). The method uses incremental calculation of decision parameters to determine the pixel positions along a circle circumference, and both vertical and horizontal lines are rectilinearized. Experimental results show that our correction method based on MCA is efficient and effective. In particular, due to its low computational cost, our method can be applied on an embedded camera platform without any extra hardware resources.
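
    The correction is built on the midpoint circle algorithm, which selects the pixels along a circle by incrementally updating a decision parameter and exploiting eight-way symmetry. The sketch below is the textbook form of that algorithm, not the authors' full panoramic-rectification code.

      def midpoint_circle(cx, cy, r):
          """Return the integer pixel positions on a circle of radius r centred at (cx, cy)."""
          points = []
          x, y = 0, r
          d = 1 - r                      # initial decision parameter
          while x <= y:
              # Eight symmetric points for the current (x, y).
              points += [(cx + sx * a, cy + sy * b)
                         for a, b in ((x, y), (y, x))
                         for sx in (1, -1) for sy in (1, -1)]
              if d < 0:
                  d += 2 * x + 3         # midpoint inside the circle: step east
              else:
                  d += 2 * (x - y) + 5   # midpoint outside: step south-east
                  y -= 1
              x += 1
          return sorted(set(points))

      print(len(midpoint_circle(0, 0, 10)))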

  15. Depth-resolved analytical model and correction algorithm for photothermal optical coherence tomography

    PubMed Central

    Lapierre-Landry, Maryse; Tucker-Schwartz, Jason M.; Skala, Melissa C.

    2016-01-01

    Photothermal OCT (PT-OCT) is an emerging molecular imaging technique that occupies a spatial imaging regime between microscopy and whole body imaging. PT-OCT would benefit from a theoretical model to optimize imaging parameters and test image processing algorithms. We propose the first analytical PT-OCT model to replicate an experimental A-scan in homogeneous and layered samples. We also propose the PT-CLEAN algorithm to reduce phase-accumulation and shadowing, two artifacts found in PT-OCT images, and demonstrate it on phantoms and in vivo mouse tumors. PMID:27446693

  16. Spatial Fuzzy C Means and Expectation Maximization Algorithms with Bias Correction for Segmentation of MR Brain Images.

    PubMed

    Meena Prakash, R; Shantha Selva Kumari, R

    2017-01-01

    The Fuzzy C Means (FCM) and Expectation Maximization (EM) algorithms are the most prevalent methods for automatic segmentation of MR brain images into three classes: Gray Matter (GM), White Matter (WM) and Cerebrospinal Fluid (CSF). The major difficulties associated with these conventional methods for MR brain image segmentation are Intensity Non-uniformity (INU) and noise. In this paper, EM and FCM with spatial information and bias correction are proposed to overcome these effects. The spatial information is incorporated by convolving the posterior probability with a mean filter during the E-step of the EM algorithm. Also, a method of pixel re-labeling is included to improve the segmentation accuracy. The proposed method is validated by extensive experiments on both simulated and real brain images from a standard database. Quantitative and qualitative results show that the method is superior to the conventional methods by around 25% and to the state-of-the-art method by 8%.
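
    The spatial modification described above smooths each class's posterior probability map with a mean filter during the E-step. The sketch below illustrates that single step under simple assumptions (Gaussian class likelihoods, a 3x3 mean filter); the shapes, parameters and re-normalization details are placeholders, not the authors' full method.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def spatial_e_step(image, means, variances, priors, window=3):
          """Return spatially smoothed posteriors, one map per tissue class."""
          posts = []
          for mu, var, pi in zip(means, variances, priors):
              lik = np.exp(-(image - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
              posts.append(pi * lik)
          posts = np.stack(posts)
          posts /= posts.sum(axis=0, keepdims=True) + 1e-12       # normalize across classes
          # Spatial information: mean-filter each posterior map, then renormalize.
          posts = np.stack([uniform_filter(p, size=window) for p in posts])
          return posts / (posts.sum(axis=0, keepdims=True) + 1e-12)

      img = np.random.default_rng(2).normal(100, 20, size=(64, 64))
      post = spatial_e_step(img, means=[60, 100, 140], variances=[100, 100, 100], priors=[1/3] * 3)
      print(post.shape)   # (3, 64, 64)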

  17. TOPEX/POSEIDON Microwave Radiometer (TMR): III. Wet Troposphere Range Correction Algorithm and Pre-Launch Error Budget

    NASA Technical Reports Server (NTRS)

    Keihm, S. J.; Janssen, M. A.; Ruf, C. S.

    1993-01-01

    The sole mission function of the TOPEX/POSEIDON Microwave Radiometer (TMR) is to provide corrections for the altimeter range errors induced by the highly variable atmospheric water vapor content. The three TMR frequencies are shown to be near-optimum for measuring the vapor-induced path delay within an environment of variable cloud cover and variable sea surface flux background. After a review of the underlying physics relevant to the prediction of 5-40 GHz nadir-viewing microwave brightness temperatures, we describe the development of the statistical, iterative algorithm used for the TMR retrieval of path delay. Test simulations are presented which demonstrate the uniformity of algorithm performance over a range of cloud liquid and sea surface wind speed conditions...

  18. A novel algorithm for bad pixel detection and correction to improve quality and stability of geometric measurements

    NASA Astrophysics Data System (ADS)

    Celestre, R.; Rosenberger, M.; Notni, G.

    2016-11-01

    An algorithm for detecting and individually substituting bad pixels, allowing restoration of an image in the presence of such outliers without altering the overall image texture, is presented. This work comprises three image-processing phases: bad pixel identification and mapping by means of linear regression and the coefficient of determination of the pixel output as a function of exposure time; local correction of the linear and angular coefficients of the outlier pixels based on their neighbourhood; and, finally, image restoration. Simulation and experimental data were used to benchmark the code, showing satisfactory results.
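
    The identification phase regresses each pixel's output against exposure time and flags pixels whose coefficient of determination is poor. The sketch below shows that phase only, on a small synthetic frame stack; the sensor size, noise levels, injected defect and R^2 threshold are assumptions for illustration.

      import numpy as np

      exposure = np.array([1.0, 2.0, 4.0, 8.0, 16.0])             # exposure times (ms)
      rng = np.random.default_rng(3)
      # Synthetic stack: one frame per exposure, 32x32 sensor, mostly linear response.
      frames = exposure[:, None, None] * 50.0 + rng.normal(0, 2, size=(5, 32, 32))
      frames[:, 10, 7] = [900.0, 100.0, 850.0, 120.0, 880.0]      # inject an erratic (bad) pixel

      def bad_pixel_map(frames, exposure, r2_threshold=0.98):
          n, h, w = frames.shape
          y = frames.reshape(n, -1)
          x = exposure[:, None]
          slope = ((x - x.mean()) * (y - y.mean(axis=0))).sum(0) / ((x - x.mean()) ** 2).sum()
          intercept = y.mean(axis=0) - slope * x.mean()
          ss_res = ((y - (slope * x + intercept)) ** 2).sum(axis=0)
          ss_tot = ((y - y.mean(axis=0)) ** 2).sum(axis=0) + 1e-12
          r2 = 1.0 - ss_res / ss_tot
          return (r2 < r2_threshold).reshape(h, w)

      print(bad_pixel_map(frames, exposure)[10, 7])   # True: flagged as bad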

  19. An efficient Monte Carlo-based algorithm for scatter correction in keV cone-beam CT

    NASA Astrophysics Data System (ADS)

    Poludniowski, G.; Evans, P. M.; Hansen, V. N.; Webb, S.

    2009-06-01

    A new method is proposed for scatter-correction of cone-beam CT images. A coarse reconstruction is used in initial iteration steps. Modelling of the x-ray tube spectra and detector response are included in the algorithm. Photon diffusion inside the imaging subject is calculated using the Monte Carlo method. Photon scoring at the detector is calculated using forced detection to a fixed set of node points. The scatter profiles are then obtained by linear interpolation. The algorithm is referred to as the coarse reconstruction and fixed detection (CRFD) technique. Scatter predictions are quantitatively validated against a widely used general-purpose Monte Carlo code: BEAMnrc/EGSnrc (NRCC, Canada). Agreement is excellent. The CRFD algorithm was applied to projection data acquired with a Synergy XVI CBCT unit (Elekta Limited, Crawley, UK), using RANDO and Catphan phantoms (The Phantom Laboratory, Salem NY, USA). The algorithm was shown to be effective in removing scatter-induced artefacts from CBCT images, and took as little as 2 min on a desktop PC. Image uniformity was greatly improved as was CT-number accuracy in reconstructions. This latter improvement was less marked where the expected CT-number of a material was very different to the background material in which it was embedded.

  20. A speed of sound aberration correction algorithm for curvilinear ultrasound transducers in ultrasound-based image-guided radiotherapy.

    PubMed

    Fontanarosa, Davide; Pesente, Silvia; Pascoli, Francesco; Ermacora, Denis; Rumeileh, Imad Abu; Verhaegen, Frank

    2013-03-07

    Conventional ultrasound (US) devices use the time of flight (TOF) of reflected US pulses to calculate distances inside the scanned tissues and thus create images. The speed of sound (SOS) is assumed to be constant in all human soft tissues at a generally accepted average value of 1540 m s(-1). This assumption is a source of systematic errors up to several millimeters and of image distortion in quantitative US imaging. In this work, an extension of a method recently published by Fontanarosa et al (2011 Med. Phys. 38 2665-73) is presented: the aim is to correct SOS aberrations in three-dimensional (3D) US images in those cases where a spatially co-registered computerized tomography (CT) scan is also available; the algorithm is then applicable to a more general case where the lines of view (LOV) of the US device are not necessarily parallel and coplanar, thus allowing correction also for US transducers other than linear. The algorithm was applied on a multi-modality pelvic US phantom, scanned through three different liquid layers on top of the phantom with different SOS values; the results show that the correction restores a better match between the CT and the US images, reducing the differences to sub-millimeter agreement. Fifteen clinical cases of prostate cancer patients were also investigated: the SOS corrections of prostate centroids were on average +3.1 mm (max +4.9 mm, min +1.3 mm). This is in excellent agreement with reports in the literature on differences between measured prostate positions by US and other techniques, where often the discrepancy was attributed to other causes.

  1. A speed of sound aberration correction algorithm for curvilinear ultrasound transducers in ultrasound-based image-guided radiotherapy

    NASA Astrophysics Data System (ADS)

    Fontanarosa, Davide; Pesente, Silvia; Pascoli, Francesco; Ermacora, Denis; Abu Rumeileh, Imad; Verhaegen, Frank

    2013-03-01

    Conventional ultrasound (US) devices use the time of flight (TOF) of reflected US pulses to calculate distances inside the scanned tissues and thus create images. The speed of sound (SOS) is assumed to be constant in all human soft tissues at a generally accepted average value of 1540 m s-1. This assumption is a source of systematic errors up to several millimeters and of image distortion in quantitative US imaging. In this work, an extension of a method recently published by Fontanarosa et al (2011 Med. Phys. 38 2665-73) is presented: the aim is to correct SOS aberrations in three-dimensional (3D) US images in those cases where a spatially co-registered computerized tomography (CT) scan is also available; the algorithm is then applicable to a more general case where the lines of view (LOV) of the US device are not necessarily parallel and coplanar, thus allowing correction also for US transducers other than linear. The algorithm was applied on a multi-modality pelvic US phantom, scanned through three different liquid layers on top of the phantom with different SOS values; the results show that the correction restores a better match between the CT and the US images, reducing the differences to sub-millimeter agreement. Fifteen clinical cases of prostate cancer patients were also investigated: the SOS corrections of prostate centroids were on average +3.1 mm (max +4.9 mm, min +1.3 mm). This is in excellent agreement with reports in the literature on differences between measured prostate positions by US and other techniques, where often the discrepancy was attributed to other causes.

  2. Model and algorithmic framework for detection and correction of cognitive errors.

    PubMed

    Feki, Mohamed Ali; Biswas, Jit; Tolstikov, Andrei

    2009-01-01

    This paper outlines an approach that we are taking for elder-care applications in the smart home, involving cognitive errors and their compensation. Our approach involves high level modeling of daily activities of the elderly by breaking down these activities into smaller units, which can then be automatically recognized at a low level by collections of sensors placed in the homes of the elderly. This separation allows us to employ plan recognition algorithms and systems at a high level, while developing stand-alone activity recognition algorithms and systems at a low level. It also allows the mixing and matching of multi-modality sensors of various kinds that go to support the same high level requirement. Currently our plan recognition algorithms are still at a conceptual stage, whereas a number of low level activity recognition algorithms and systems have been developed. Herein we present our model for plan recognition, providing a brief survey of the background literature. We also present some concrete results that we have achieved for activity recognition, emphasizing how these results are incorporated into the overall plan recognition system.

  3. Natural and Unnatural Oil Layers on the Surface of the Gulf of Mexico Detected and Quantified in Synthetic Aperture RADAR Images with Texture Classifying Neural Network Algorithms

    NASA Astrophysics Data System (ADS)

    MacDonald, I. R.; Garcia-Pineda, O. G.; Morey, S. L.; Huffer, F.

    2011-12-01

    Effervescent hydrocarbons rise naturally from hydrocarbon seeps in the Gulf of Mexico and reach the ocean surface. This oil forms thin (~0.1 μm) layers that enhance specular reflectivity and have been widely used to quantify the abundance and distribution of natural seeps using synthetic aperture radar (SAR). An analogous process occurred at a vastly greater scale for oil and gas discharged from BP's Macondo well blowout. SAR data allow direct comparison of the areas of the ocean surface covered by oil from natural sources and the discharge. We used a texture classifying neural network algorithm to quantify the areas of naturally occurring oil-covered water in 176 SAR image collections from the Gulf of Mexico obtained between May 1997 and November 2007, prior to the blowout. Separately we also analyzed 36 SAR image collections obtained between 26 April and 30 July, 2010 while the discharged oil was visible in the Gulf of Mexico. For the naturally occurring oil, we removed pollution events and transient oceanographic effects by including only the reflectance anomalies that recurred in the same locality over multiple images. We measured the area of oil layers in a grid of 10x10 km cells covering the entire Gulf of Mexico. Floating oil layers were observed in only a fraction of the total Gulf area amounting to 1.22x10^5 km^2. In a bootstrap sample of 2000 replications, the combined average area of these layers was 7.80x10^2 km^2 (sd 86.03). For a regional comparison, we divided the Gulf of Mexico into four quadrants along 90° W longitude and 25° N latitude. The NE quadrant, where the BP discharge occurred, received on average 7.0% of the total natural seepage in the Gulf of Mexico (5.24 x10^2 km^2, sd 21.99); the NW quadrant received on average 68.0% of this total (5.30 x10^2 km^2, sd 69.67). The BP blowout occurred in the NE quadrant of the Gulf of Mexico; discharged oil that reached the surface drifted over a large area north of 25° N. Performing a

  4. Quantum decision tree classifier

    NASA Astrophysics Data System (ADS)

    Lu, Songfeng; Braunstein, Samuel L.

    2013-11-01

    We study the quantum version of a decision tree classifier to fill the gap between quantum computation and machine learning. The quantum entropy impurity criterion which is used to determine which node should be split is presented in the paper. By using the quantum fidelity measure between two quantum states, we cluster the training data into subclasses so that the quantum decision tree can manipulate quantum states. We also propose algorithms constructing the quantum decision tree and searching for a target class over the tree for a new quantum object.

  5. Stack filter classifiers

    SciTech Connect

    Porter, Reid B; Hush, Don

    2009-01-01

    Just as linear models generalize the sample mean and weighted average, weighted order statistic models generalize the sample median and weighted median. This analogy can be continued informally to generalized additive models in the case of the mean, and Stack Filters in the case of the median. Both of these model classes have been extensively studied for signal and image processing, but it is surprising to find that for pattern classification, their treatment has been significantly one-sided. Generalized additive models are now a major tool in pattern classification and many different learning algorithms have been developed to fit model parameters to finite data. However, Stack Filters remain largely confined to signal and image processing and learning algorithms for classification are yet to be seen. This paper is a step towards Stack Filter Classifiers and it shows that the approach is interesting from both a theoretical and a practical perspective.

  6. Beam hardening correction using iterative total variation (ITV)-based algorithm in CBCT reconstruction

    SciTech Connect

    Seo, Chang-Woo; Cha, Bo Kyung; Jeon, Sungchae; Huh, Young

    2015-07-01

    Beam hardening reduction is required to produce high-quality reconstructions from X-ray cone-beam computed tomography (CBCT) systems for medical applications. This paper introduces an iterative total variation (ITV) method for filtered-backprojection reconstructions suffering from serious beam hardening problems. The Feldkamp, Davis, and Kress (FDK) algorithm is a widely used reconstruction technique for CBCT systems. The FDK reconstruction is realized by generating the weighted projection data, filtering the projection images, and back-projecting the filtered projection data into the volume. However, the FDK algorithm suffers from beam hardening artifacts caused by the energy dependence of the X-ray attenuation coefficients. Recently, the total variation (TV) method for compressed sensing (CS) has been particularly useful in exploiting the prior knowledge of minimal variation in the X-ray attenuation characteristics across an object or the human body. However, a practical implementation of this method remains a challenge. The main problem is the iterative nature of solving the TV-based CS formulation, which generally requires multiple iterations of forward and backward projections of a large dataset within a clinically or industrially feasible time frame. In this paper, we propose an ITV method applied after FDK reconstruction to reduce the beam hardening artifacts. The beam hardening problems are reduced by the ITV method, which promotes the sparsity inherent in the X-ray attenuation characteristics. (authors)

  7. An efficient algorithm for multiphase image segmentation with intensity bias correction.

    PubMed

    Zhang, Haili; Ye, Xiaojing; Chen, Yunmei

    2013-10-01

    This paper presents a variational model for simultaneous multiphase segmentation and intensity bias estimation for images corrupted by strong noise and intensity inhomogeneity. Since the pixel intensities are not reliable samples for region statistics due to the presence of noise and intensity bias, we use local information based on the joint density within image patches to perform image partition. Hence, the pixel intensity has a multiplicative distribution structure. Then, the maximum-a-posteriori (MAP) principle with those pixel density functions generates the model. To tackle the computational problem of the resultant nonsmooth nonconvex minimization, we relax the constraint on the characteristic functions of partition regions, and apply primal-dual alternating gradient projections to construct a very efficient numerical algorithm. We show that all the variables have closed-form solutions in each iteration, and the computational complexity is very low. In particular, the algorithm involves only regular convolutions and pointwise projections onto the unit ball and canonical simplex. Numerical tests on a variety of images demonstrate that the proposed algorithm is robust, stable, and attains significant improvements on accuracy and efficiency over state-of-the-art methods.
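
    One of the pointwise building blocks mentioned above is the projection onto the canonical simplex, used for the relaxed characteristic functions of the partition regions. The sketch below is the standard sort-based Euclidean projection onto the probability simplex, given as a generic illustration rather than the authors' specific implementation.

      import numpy as np

      def project_simplex(v):
          """Euclidean projection of v onto {x : x >= 0, sum(x) = 1}."""
          u = np.sort(v)[::-1]                      # sort in decreasing order
          css = np.cumsum(u)
          j = np.arange(1, len(v) + 1)
          rho = np.nonzero(u + (1.0 - css) / j > 0)[0][-1]
          theta = (1.0 - css[rho]) / (rho + 1)
          return np.maximum(v + theta, 0.0)

      print(project_simplex(np.array([0.7, 0.6, -0.1])))   # sums to 1, all entries nonnegative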

  8. Fast 4D cone-beam reconstruction using the McKinnon-Bates algorithm with truncation correction and nonlinear filtering

    NASA Astrophysics Data System (ADS)

    Zheng, Ziyi; Sun, Mingshan; Pavkovich, John; Star-Lack, Josh

    2011-03-01

    A challenge in using on-board cone beam computed tomography (CBCT) to image lung tumor motion prior to radiation therapy treatment is acquiring and reconstructing high quality 4D images in a sufficiently short time for practical use. For the 1 minute rotation times typical of Linacs, severe view aliasing artifacts, including streaks, are created if a conventional phase-correlated FDK reconstruction is performed. The McKinnon-Bates (MKB) algorithm provides an efficient means of reducing streaks from static tissue but can suffer from low SNR and other artifacts due to data truncation and noise. We have added truncation correction and bilateral nonlinear filtering to the MKB algorithm to reduce streaking and improve image quality. The modified MKB algorithm was implemented on a graphical processing unit (GPU) to maximize efficiency. Results show that a nearly 4x improvement in SNR is obtained compared to the conventional FDK phase-correlated reconstruction and that high quality 4D images with 0.4 second temporal resolution and 1 mm3 isotropic spatial resolution can be reconstructed in less than 20 seconds after data acquisition completes.

  9. Improved MODIS Dark Target aerosol optical depth algorithm over land: angular effect correction

    NASA Astrophysics Data System (ADS)

    Wu, Yerong; de Graaf, Martin; Menenti, Massimo

    2016-11-01

    The aerosol optical depth (AOD) product retrieved from MODerate Resolution Imaging Spectroradiometer (MODIS) measurements has greatly benefited scientific research in climate change and air quality due to its high quality and large coverage over the globe. However, the current product (e.g., Collection 6) over land needs to be further improved. This is because the AOD retrieval still suffers large uncertainty from the surface reflectance (e.g., anisotropic reflection), although the impacts of the surface reflectance have been largely reduced using the Dark Target (DT) algorithm. It has been shown in a previous study that AOD retrieval over dark surfaces can be improved by considering surface bidirectional reflectance distribution function (BRDF) effects. However, the relationship between the visible and shortwave infrared surface reflectance applied in the previous study can lead to an angular dependence of the AOD retrieval, for at least two reasons. First, a relationship based on the assumption of isotropic reflection, or a Lambertian surface, is not suitable for the surface bidirectional reflectance factor (BRF). Second, although the relationship varies with surface cover type through the vegetation index NDVISWIR, this index itself has a directional effect that affects the estimation of the surface reflection and can lead to errors in the AOD retrieval. To improve this situation, we derived a new relationship for the spectral surface BRF in this study, using 3 years of data from the AERONET-based Surface Reflectance Validation Network (ASRVN). To test the performance of the new algorithm, two case studies were used: 2 years of data from North America and 4 months of data from the global land. The results show that the angular effects of the AOD retrieval are largely reduced in most cases, including fewer occurrences of negative retrievals. Particularly, for the global land case, the AOD retrieval was improved by the new algorithm compared to the

  10. Nonmechanical Multizoom Telescope Design Using A Liquid Crystal Spatial Light Modulator and Focus-Correction Algorithm

    DTIC Science & Technology

    2008-03-27

    ... automatically for solar telescopes. [7] Guidelines for the algorithm have been clearly defined for over a decade. [20] The process is based on the idea ... 6 m, and 10 m. If this were set up on a real bench, then a camera would record a blurry bar pattern that changed based on za. Simulating this requires

  11. High Performance Medical Classifiers

    NASA Astrophysics Data System (ADS)

    Fountoukis, S. G.; Bekakos, M. P.

    2009-08-01

    In this paper, parallelism methodologies for mapping rules derived from machine learning algorithms onto both software and hardware are investigated. When these algorithms are fed with patient disease data, medical diagnostic decision trees and their corresponding rules are produced. These rules can be mapped onto multithreaded object-oriented programs and hardware chips. The programs can simulate the working of the chips and can exhibit the inherent parallelism of the chip design. The circuit of a chip can consist of many blocks, which operate concurrently on various parts of the whole circuit. Threads and inter-thread communication can be used to simulate the blocks of the chips and the combination of block output signals. The chips and the corresponding parallel programs constitute medical classifiers, which can classify new patient instances. Measurements taken from patients can be fed into both the chips and the parallel programs and are recognized according to the classification rules incorporated in the chip and program designs. The chips and the programs constitute medical decision support systems and can be incorporated into portable micro devices, assisting physicians in their everyday diagnostic practice.

  12. Generalized classifier neural network.

    PubMed

    Ozyildirim, Buse Melis; Avci, Mutlu

    2013-03-01

    In this work, a new radial basis function based classification neural network, named the generalized classifier neural network, is proposed. The proposed generalized classifier neural network has five layers, unlike other radial basis function based neural networks such as the generalized regression neural network and the probabilistic neural network: input, pattern, summation, normalization and output layers. In addition to the topological difference, the proposed neural network adds gradient descent based optimization of the smoothing parameter and a diverge effect term to the calculation. The diverge effect term is an improvement on the summation layer calculation that supplies additional separation ability and flexibility. The performance of the generalized classifier neural network is compared with that of the probabilistic neural network, the multilayer perceptron algorithm and the radial basis function neural network on 9 different data sets, and with that of the generalized regression neural network on 3 different data sets that include only two classes, in the MATLAB environment. Better classification performance, up to 89%, is observed. The improved classification performance proves the effectiveness of the proposed neural network.

  13. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization

    NASA Astrophysics Data System (ADS)

    Raghunath, N.; Faber, T. L.; Suryanarayanan, S.; Votaw, J. R.

    2009-02-01

    Image quality is significantly degraded even by small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique was studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential for deblurring an image from any modality if the causative motion is known and its effect can be represented in a system matrix.

  14. A Physically Based Algorithm for Non-Blackbody Correction of Cloud-Top Temperature and Application to Convection Study

    NASA Technical Reports Server (NTRS)

    Wang, Chunpeng; Lou, Zhengzhao Johnny; Chen, Xiuhong; Zeng, Xiping; Tao, Wei-Kuo; Huang, Xianglei

    2014-01-01

    Cloud-top temperature (CTT) is an important parameter for convective clouds and is usually different from the 11-micrometers brightness temperature due to non-blackbody effects. This paper presents an algorithm for estimating convective CTT by using simultaneous passive [Moderate Resolution Imaging Spectroradiometer (MODIS)] and active [CloudSat and Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO)] measurements of clouds to correct for the non-blackbody effect. To do this, a weighting function of the MODIS 11-micrometers band is explicitly calculated by feeding cloud hydrometeor profiles from CloudSat and CALIPSO retrievals and temperature and humidity profiles based on ECMWF analyses into a radiative transfer model. Among 16 837 tropical deep convective clouds observed by CloudSat in 2008, the averaged effective emission level (EEL) of the 11-micrometers channel is located at an optical depth of approximately 0.72, with a standard deviation of 0.3. The distance between the EEL and the cloud-top height determined by CloudSat is shown to be related to a parameter called cloud-top fuzziness (CTF), defined as the vertical separation between the -30 and 10 dBZ levels of CloudSat radar reflectivity. On the basis of these findings, a relationship is then developed between the CTF and the difference between the MODIS 11-micrometers brightness temperature and the physical CTT, the latter being the non-blackbody correction of CTT. Correction of the non-blackbody effect of CTT is applied to analyze convective cloud-top buoyancy. With this correction, about 70% of the convective cores observed by CloudSat in the height range of 6-10 km have positive buoyancy near cloud top, meaning clouds are still growing vertically, although their final fate cannot be determined by snapshot observations.

  15. Efficient fast heuristic algorithms for minimum error correction haplotyping from SNP fragments.

    PubMed

    Anaraki, Maryam Pourkamali; Sadeghi, Mehdi

    2014-01-01

    Availability of the complete human genome is a crucial factor for genetic studies exploring possible associations between the genome and complex diseases. A haplotype, as a set of single nucleotide polymorphisms (SNPs) on a single chromosome, is believed to contain promising data for disease association studies, detecting natural positive selection and recombination hotspots. Various computational methods for haplotype reconstruction from aligned fragments of SNPs have already been proposed. This study presents a novel approach to obtain paternal and maternal haplotypes from the SNP fragments based on the minimum error correction (MEC) model. Reconstructing haplotypes under the MEC model is an NP-hard problem. Therefore, our proposed methods employ two fast and accurate clustering techniques as the core of their procedure to efficiently solve this ill-defined problem. The assessment of our approaches on two real benchmark datasets, i.e., ACE and DALY, compared to conventional methods, proves their efficiency and accuracy.

  16. Simplified ASE correction algorithm for variable gain-flattened erbium-doped fiber amplifier.

    PubMed

    Mahdi, Mohd Adzir; Sheih, Shou-Jong; Adikan, Faisal Rafiq Mahamd

    2009-06-08

    We demonstrate a simplified algorithm to account for the contribution of amplified spontaneous emission in a variable gain-flattened Erbium-doped fiber amplifier (EDFA). The detected power at the input and output ports of the EDFA comprises both signal and noise. The amplified spontaneous emission generated by the EDFA cannot be distinguished by the photodetector, which leads to underestimation of the targeted gain value. This gain penalty must be taken into consideration in order to obtain the accurate gain level. By taking the average gain penalty within the dynamic gain range, the targeted output power is set higher than the desired level. Thus, the errors are significantly reduced to less than 0.15 dB for desired gain values from 15 dB to 30 dB.
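
    As a hedged illustration of the gain-penalty idea described above (not the authors' algorithm; the input power, constant ASE level, and variable names are assumptions for the example), the sketch below computes the ASE-induced difference between the apparent gain seen by the photodetector and the true signal gain, and raises the controller's target by the average penalty over the 15-30 dB range:

      import numpy as np

      def gain_penalty_db(p_sig_out_mw, p_ase_mw):
          # Difference between the gain inferred from total detected power
          # (signal + ASE) and the true signal gain.
          return 10.0 * np.log10((p_sig_out_mw + p_ase_mw) / p_sig_out_mw)

      desired_gain_db = np.linspace(15.0, 30.0, 16)        # dynamic gain range
      p_in_mw = 0.01                                       # assumed -20 dBm input signal
      p_sig_out_mw = p_in_mw * 10.0 ** (desired_gain_db / 10.0)
      p_ase_mw = 0.05 * np.ones_like(p_sig_out_mw)         # assumed ASE power

      penalty_db = gain_penalty_db(p_sig_out_mw, p_ase_mw)
      # Average-penalty correction: the output-power target is set higher than
      # the desired level so that the signal itself reaches the desired gain.
      corrected_target_db = desired_gain_db + penalty_db.mean()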

  17. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M.; Asma, Evren; Kinahan, Paul E.; De Man, Bruno

    2015-09-01

    For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.04375 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-square error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction with little or no measurable effect on the PET image. For the four ultra-low dose

  18. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms.

    PubMed

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M; Asma, Evren; Kinahan, Paul E; De Man, Bruno

    2015-10-07

    For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.04375 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-square error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction with little or no measurable effect on the PET image. For the four ultra-low dose levels

  19. Optimal threshold estimation for binary classifiers using game theory.

    PubMed

    Sanchez, Ignacio Enrique

    2016-01-01

    Many bioinformatics algorithms can be understood as binary classifiers. They are usually compared using the area under the receiver operating characteristic (ROC) curve. On the other hand, choosing the best threshold for practical use is a complex task, due to uncertain and context-dependent skews in the abundance of positives in nature and in the yields/costs for correct/incorrect classification. We argue that considering a classifier as a player in a zero-sum game allows us to use the minimax principle from game theory to determine the optimal operating point. The proposed classifier threshold corresponds to the intersection between the ROC curve and the descending diagonal in ROC space and yields a minimax accuracy of 1-FPR. Our proposal can be readily implemented in practice, and reveals that the empirical condition for threshold estimation of "specificity equals sensitivity" maximizes robustness against uncertainties in the abundance of positives in nature and classification costs.

  20. Optimal threshold estimation for binary classifiers using game theory

    PubMed Central

    Sanchez, Ignacio Enrique

    2017-01-01

    Many bioinformatics algorithms can be understood as binary classifiers. They are usually compared using the area under the receiver operating characteristic (ROC) curve. On the other hand, choosing the best threshold for practical use is a complex task, due to uncertain and context-dependent skews in the abundance of positives in nature and in the yields/costs for correct/incorrect classification. We argue that considering a classifier as a player in a zero-sum game allows us to use the minimax principle from game theory to determine the optimal operating point. The proposed classifier threshold corresponds to the intersection between the ROC curve and the descending diagonal in ROC space and yields a minimax accuracy of 1-FPR. Our proposal can be readily implemented in practice, and reveals that the empirical condition for threshold estimation of “specificity equals sensitivity” maximizes robustness against uncertainties in the abundance of positives in nature and classification costs. PMID:28003875

  1. Noise correction on LANDSAT images using a spline-like algorithm

    NASA Technical Reports Server (NTRS)

    Vijaykumar, N. L. (Principal Investigator); Dias, L. A. V.

    1985-01-01

    Many applications using LANDSAT images face a dilemma: the user needs a certain scene (for example, a flooded region), but that particular image may present interference or noise in the form of horizontal stripes. During automatic analysis, this interference or noise may cause false readings of the region of interest. In order to minimize this interference or noise, many solutions are used, for instance, using the average (simple or weighted) values of the neighboring vertical points. In the case of high interference (more than one adjacent line lost) the method of averages may not suit the desired purpose. The solution proposed is to use a spline-like algorithm (weighted splines). This type of interpolation is simple to implement on a computer, fast, uses only four points in each interval, and eliminates the necessity of solving a linear equation system. In the normal mode of operation, the first and second derivatives of the solution function are continuous and determined by data points, as in cubic splines. It is possible, however, to impose the values of the first derivatives, in order to account for sharp boundaries, without increasing the computational effort. Some examples using the proposed method are also shown.

  2. An algorithm to correct 2D near-infrared fluorescence signals using 3D intravascular ultrasound architectural information

    NASA Astrophysics Data System (ADS)

    Mallas, Georgios; Brooks, Dana H.; Rosenthal, Amir; Vinegoni, Claudio; Calfon, Marcella A.; Razansky, R. Nika; Jaffer, Farouc A.; Ntziachristos, Vasilis

    2011-03-01

    Intravascular Near-Infrared Fluorescence (NIRF) imaging is a promising imaging modality to image vessel biology and high-risk plaques in vivo. We have developed a NIRF fiber optic catheter and have presented the ability to image atherosclerotic plaques in vivo, using appropriate NIR fluorescent probes. Our catheter consists of an optical fiber with a 100/140 μm core/cladding diameter, housed in polyethylene tubing, emitting NIR laser light at a 90 degree angle relative to the fiber's axis. The system utilizes a rotational and a translational motor for true 2D imaging and operates in conjunction with a coaxial intravascular ultrasound (IVUS) device. IVUS datasets provide 3D images of the internal structure of arteries and are used in our system for anatomical mapping. Using the IVUS images, we are building an accurate hybrid fluorescence-IVUS data inversion scheme that takes into account photon propagation through the blood filled lumen. This hybrid imaging approach can then correct for the non-linear dependence of light intensity on the distance of the fluorescence region from the fiber tip, leading to quantitative imaging. The experimental and algorithmic developments will be presented and the effectiveness of the algorithm showcased with experimental results in both saline and blood-like preparations. The combined structural and molecular information obtained from these two imaging modalities is positioned to enable the accurate diagnosis of biologically high-risk atherosclerotic plaques in the coronary arteries that are responsible for heart attacks.

  3. SU-E-T-477: An Efficient Dose Correction Algorithm Accounting for Tissue Heterogeneities in LDR Brachytherapy

    SciTech Connect

    Mashouf, S; Lai, P; Karotki, A; Keller, B; Beachey, D; Pignol, J

    2014-06-01

    Purpose: Seed brachytherapy is currently used for adjuvant radiotherapy of early stage prostate and breast cancer patients. The current standard for calculation of dose surrounding the brachytherapy seeds is based on the American Association of Physicists in Medicine Task Group No. 43 (TG-43 formalism), which generates the dose in a homogeneous water medium. Recently, AAPM Task Group No. 186 emphasized the importance of accounting for tissue heterogeneities. This can be done using Monte Carlo (MC) methods, but it requires knowing the source structure and tissue atomic composition accurately. In this work we describe an efficient analytical dose inhomogeneity correction algorithm implemented using the MIM Symphony treatment planning platform to calculate dose distributions in heterogeneous media. Methods: An Inhomogeneity Correction Factor (ICF) is introduced as the ratio of absorbed dose in tissue to that in water medium. ICF is a function of tissue properties and independent of source structure. The ICF is extracted using CT images, and the absorbed dose in tissue can then be calculated by multiplying the dose as calculated by the TG-43 formalism by the ICF. To evaluate the methodology, we compared our results with Monte Carlo simulations as well as experiments in phantoms with known density and atomic compositions. Results: The dose distributions obtained through applying ICF to the TG-43 protocol agreed very well with those of Monte Carlo simulations as well as experiments in all phantoms. In all cases, the mean relative error was reduced by at least 50% when the ICF correction factor was applied to the TG-43 protocol. Conclusion: We have developed a new analytical dose calculation method which enables personalized dose calculations in heterogeneous media. The advantages over stochastic methods are computational efficiency and the ease of integration into clinical settings, as detailed source structure and tissue segmentation are not needed. University of Toronto, Natural Sciences and
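
    In symbols (a compact restatement of the scheme above, with notation assumed for clarity rather than taken from the abstract), the heterogeneity-corrected dose at a point r is the TG-43 water dose rescaled by the CT-derived inhomogeneity correction factor:

      \mathrm{ICF}(\mathbf{r}) = \frac{D_{\mathrm{tissue}}(\mathbf{r})}{D_{\mathrm{water}}(\mathbf{r})},
      \qquad
      D_{\mathrm{tissue}}(\mathbf{r}) = \mathrm{ICF}(\mathbf{r})\, D_{\mathrm{TG\text{-}43}}(\mathbf{r}).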

  4. Simulation of co-phase error correction of optical multi-aperture imaging system based on stochastic parallel gradient descent algorithm

    NASA Astrophysics Data System (ADS)

    He, Xiaojun; Ma, Haotong; Luo, Chuanxin

    2016-10-01

    The optical multi-aperture imaging system is an effective way to magnify the aperture and increase the resolution of a telescope optical system, the difficulty of which lies in detecting and correcting the co-phase error. This paper presents a method based on the stochastic parallel gradient descent (SPGD) algorithm to correct the co-phase error. Compared with the current method, the SPGD method can avoid detecting the co-phase error. This paper analyzed the influence of piston error and tilt error on image quality based on a double-aperture imaging system, introduced the basic principle of the SPGD algorithm, and discussed the influence of the SPGD algorithm's key parameters (the gain coefficient and the disturbance amplitude) on error control performance. The results show that SPGD can efficiently correct the co-phase error. The convergence speed of the SPGD algorithm is improved with the increase of gain coefficient and disturbance amplitude, but the stability of the algorithm is reduced. An adaptive gain coefficient can solve this problem appropriately. These results can provide a theoretical reference for the co-phase error correction of multi-aperture imaging systems.
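
    A minimal sketch of a generic SPGD loop is given below (Python; it is not the authors' implementation, and the metric, gain, and perturbation amplitude are placeholders). Each iteration applies a bipolar random perturbation to all control channels in parallel, measures the change in an image-quality metric, and updates the controls in the direction that improves the metric, which is how the co-phase error can be reduced without measuring it directly:

      import numpy as np

      def spgd_optimize(metric, u0, gain=0.5, amplitude=0.05, n_iter=500, rng=None):
          # metric: callable J(u) to maximize, e.g. a sharpness measure of the
          #         combined-aperture image; u holds piston/tilt commands.
          rng = np.random.default_rng() if rng is None else rng
          u = np.asarray(u0, dtype=float).copy()
          for _ in range(n_iter):
              # Bipolar random perturbation applied to all channels in parallel.
              delta = amplitude * rng.choice([-1.0, 1.0], size=u.shape)
              dj = metric(u + delta) - metric(u - delta)
              u += gain * dj * delta          # stochastic gradient estimate
          return u

      # Toy usage: the "co-phase error" is the distance from a known optimum.
      target = np.array([0.3, -0.2, 0.1])
      sharpness = lambda u: -np.sum((u - target) ** 2)
      u_corrected = spgd_optimize(sharpness, u0=np.zeros(3))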

  5. [A New HAC Unsupervised Classifier Based on Spectral Harmonic Analysis].

    PubMed

    Yang, Ke-ming; Wei, Hua-feng; Shi, Gang-qiang; Sun, Yang-yang; Liu, Fei

    2015-07-01

    Hyperspectral image classification is one of the important methods to identify image information, and it has great significance for feature identification, dynamic monitoring and thematic information extraction, etc. Unsupervised classification without prior knowledge is widely used in hyperspectral image classification. This article proposes a new hyperspectral image unsupervised classification algorithm based on harmonic analysis (HA), called the harmonic analysis classifier (HAC). First, the HAC algorithm computes the first harmonic component and draws its histogram, from which it determines the initial feature categories and the cluster-center pixels according to the number and location of the peaks. Then, the algorithm maps the spectral waveform information of the pixels to be classified into a feature space made up of harmonic decomposition times, amplitude and phase; similar features group together in this feature space, and the pixels are classified according to the minimum-distance principle. Finally, the algorithm computes the Euclidean distance between these pixels and the cluster centers, and merges the initial classes by setting a distance threshold, so the HAC achieves hyperspectral image classification. The paper collects spectral curves of two feature categories and obtains harmonic decomposition times, amplitude and phase after harmonic analysis; the distribution of the HA components in the feature space verified the correctness of the HAC. The HAC algorithm was then applied to an EO-1 Hyperion hyperspectral image to obtain classification results. Compared with the classification results of the K-MEANS and ISODATA classifiers, the HAC, as an unsupervised classification method, is confirmed to perform better on hyperspectral image classification.

  6. Matrix and position correction of shuffler assays by application of the alternating conditional expectation algorithm to shuffler data

    SciTech Connect

    Pickrell, M M; Rinard, P M

    1992-01-01

    The 252Cf shuffler assays fissile uranium and plutonium using active neutron interrogation and then counting the induced delayed neutrons. Using the shuffler, we conducted over 1700 assays of 55-gal. drums with 28 different matrices and several different fissionable materials. We measured the drums to determine the matrix and position effects on 252Cf shuffler assays. We used several neutron flux monitors during irradiation and kept statistics on the count rates of individual detector banks. The intent of these measurements was to gauge the effect of the matrix independently from the uranium assay. Although shufflers have previously been equipped with neutron monitors, the functional relationship between the flux monitor signals and the matrix-induced perturbation has been unknown. There are several flux monitors so the problem is multivariate, and the response is complicated. Conventional regression techniques cannot address complicated multivariate problems unless the underlying functional form and approximate parameter values are known in advance. Neither was available in this case. To address this problem, we used a new technique called alternating conditional expectations (ACE), which requires neither the functional relationship nor the initial parameters. The ACE algorithm develops the functional form and performs a numerical regression from only the empirical data. We applied the ACE algorithm to the shuffler-assay and flux-monitor data and developed an analytic function for the matrix correction. This function was optimized using conventional multivariate techniques. We were able to reduce the matrix-induced-bias error for homogeneous samples to 12.7%. The bias error for inhomogeneous samples was reduced to 13.5%. These results used only a few adjustable parameters compared to the number of available data points; the data were not "overfit," but rather the results are general and robust.
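
    The alternating conditional expectations idea can be sketched as follows (Python; a generic textbook-style version with a crude moving-average smoother standing in for the scatterplot smoother, not the authors' code). It alternately re-estimates the transformation of each predictor (here, a flux-monitor signal) and of the response (the matrix-induced perturbation) as conditional expectations of the current residuals:

      import numpy as np

      def smooth(x, z, window=21):
          # Crude estimate of E[z | x]: moving average over the nearest
          # neighbors in x-order (stand-in for ACE's scatterplot smoother).
          order = np.argsort(x)
          z_sorted = z[order]
          kernel = np.ones(window) / window
          padded = np.pad(z_sorted, window // 2, mode="edge")
          smoothed_sorted = np.convolve(padded, kernel, mode="valid")
          out = np.empty_like(z, dtype=float)
          out[order] = smoothed_sorted
          return out

      def ace(X, y, n_iter=20):
          # Alternating conditional expectations for theta(y) ~ sum_i phi_i(X[:, i]).
          n, p = X.shape
          theta = (y - y.mean()) / y.std()
          phi = np.zeros((n, p))
          for _ in range(n_iter):
              for i in range(p):
                  resid = theta - (phi.sum(axis=1) - phi[:, i])
                  phi[:, i] = smooth(X[:, i], resid)
              theta = smooth(y, phi.sum(axis=1))
              theta = (theta - theta.mean()) / theta.std()
          return theta, phi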

  7. Classifying TDSS Stellar Variables

    NASA Astrophysics Data System (ADS)

    Amaro, Rachael Christina; Green, Paul J.; TDSS Collaboration

    2017-01-01

    The Time Domain Spectroscopic Survey (TDSS), a subprogram of SDSS-IV eBOSS, obtains classification/discovery spectra of point-source photometric variables selected from PanSTARRS and SDSS multi-color light curves regardless of object color or lightcurve shape. Tens of thousands of TDSS spectra are already available and have been spectroscopically classified both via pipeline and by visual inspection. About half of these spectra are quasars, half are stars. Our goal is to classify the stars with their correct variability types. We do this by acquiring public multi-epoch light curves for brighter stars (r < 19.5 mag) from the Catalina Sky Survey (CSS). We then run a number of light curve analyses from VARTOOLS, a program for analyzing astronomical time-series data, to constrain variable type both for broad statistics relevant to future surveys like the Transiting Exoplanet Survey Satellite (TESS) and the Large Synoptic Survey Telescope (LSST), and to find the inevitable exotic oddballs that warrant further follow-up. Specifically, the Lomb-Scargle Periodogram and the Box Least Squares Method are being implemented and tested against their known variable classifications and parameters in the Catalina Surveys Periodic Variable Star Catalog. Variable star classifications include RR Lyr, close eclipsing binaries, CVs, pulsating white dwarfs, and other exotic systems. The key difference between our catalog and others is that along with the light curves, we will be using TDSS spectra to help in the classification of variable type, as spectra are rich with information allowing estimation of physical parameters like temperature, metallicity, gravity, etc. This work was supported by the SDSS Research Experience for Undergraduates program, which is funded by a grant from the Sloan Foundation to the Astrophysical Research Consortium.
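
    A hedged sketch of the period-search step (Python, using astropy's periodogram implementations rather than VARTOOLS; the light curve below is synthetic and all numbers are placeholders) might look like this:

      import numpy as np
      from astropy.timeseries import LombScargle, BoxLeastSquares

      # Synthetic stand-in for a CSS light curve of one TDSS star.
      rng = np.random.default_rng(0)
      t = np.sort(rng.uniform(0.0, 300.0, 200))                         # days
      mag = 15.0 + 0.3 * np.sin(2.0 * np.pi * t / 0.55) + rng.normal(0.0, 0.05, t.size)
      mag_err = np.full_like(mag, 0.05)

      # Lomb-Scargle periodogram: best period for roughly sinusoidal variables (e.g. RR Lyrae).
      frequency, power = LombScargle(t, mag, mag_err).autopower()
      ls_period = 1.0 / frequency[np.argmax(power)]

      # Box Least Squares: sensitive to box-shaped dips such as eclipses in close binaries.
      bls = BoxLeastSquares(t, mag, dy=mag_err)
      periodogram = bls.autopower(0.05)          # assumed 0.05 d trial eclipse duration
      bls_period = periodogram.period[np.argmax(periodogram.power)]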

  8. A Portable Ground-Based Atmospheric Monitoring System (PGAMS) for the Calibration and Validation of Atmospheric Correction Algorithms Applied to Aircraft and Satellite Images

    NASA Technical Reports Server (NTRS)

    Schiller, Stephen; Luvall, Jeffrey C.; Rickman, Doug L.; Arnold, James E. (Technical Monitor)

    2000-01-01

    Detecting changes in the Earth's environment using satellite images of ocean and land surfaces must take into account atmospheric effects. As a result, major programs are underway to develop algorithms for image retrieval of atmospheric aerosol properties and atmospheric correction. However, because of the temporal and spatial variability of atmospheric transmittance it is very difficult to model atmospheric effects and implement models in an operational mode. For this reason, simultaneous in situ ground measurements of atmospheric optical properties are vital to the development of accurate atmospheric correction techniques. Presented in this paper is a spectroradiometer system that provides an optimized set of surface measurements for the calibration and validation of atmospheric correction algorithms. The Portable Ground-based Atmospheric Monitoring System (PGAMS) obtains a comprehensive series of in situ irradiance, radiance, and reflectance measurements for the calibration of atmospheric correction algorithms applied to multispectral and hyperspectral images. The observations include: total downwelling irradiance, diffuse sky irradiance, direct solar irradiance, path radiance in the direction of the north celestial pole, path radiance in the direction of the overflying satellite, almucantar scans of path radiance, full sky radiance maps, and surface reflectance. Each of these parameters is recorded over a wavelength range from 350 to 1050 nm in 512 channels. The system is fast, with the potential to acquire the complete set of observations in only 8 to 10 minutes, depending on the selected spatial resolution of the sky path radiance measurements.

  9. Retrieving water surface temperature from archive LANDSAT thermal infrared data: Application of the mono-channel atmospheric correction algorithm over two freshwater reservoirs

    NASA Astrophysics Data System (ADS)

    Simon, R. N.; Tormos, T.; Danis, P.-A.

    2014-08-01

    Water surface temperature is a key element in characterizing the thermodynamics of waterbodies, and for irregularly-shaped inland reservoirs, LANDSAT thermal infrared images are the best alternative yet for the retrieval of this parameter. However, images must be corrected mainly for atmospheric effects in order to be fully exploitable. The objective of this study is to validate the mono-channel correction algorithm for single-band thermal infrared LANDSAT data as put forward by Jiménez-Muñoz et al. (2009). Two freshwater reservoirs in continental France were selected as study sites, and best use was made of all accessible image and field data. Results obtained are satisfactory and in accordance with the literature: r2 values are above 0.90 and root-mean-square error values are between 1 and 2 °C. Moreover, paired Wilcoxon signed rank tests showed a highly significant difference between field and uncorrected image data, a very highly significant difference between uncorrected and corrected image data, and no significant difference between field and corrected image data. The mono-channel algorithm is hence recommended for correcting archive LANDSAT single-band thermal infrared data for inland waterbody monitoring and study.
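
    For reference, the mono-channel (single-channel) algorithm is usually quoted in a form like the one below, where L_sen is the at-sensor radiance of the thermal band, ε the surface emissivity, ψ1, ψ2, ψ3 atmospheric functions parameterized by the total columnar water vapour, and γ, δ terms from a linearization of the Planck function (the form is given here from general familiarity with the method; see Jiménez-Muñoz et al. for the exact coefficient definitions):

      T_s = \gamma \left[ \frac{1}{\varepsilon}\left( \psi_1 L_{\mathrm{sen}} + \psi_2 \right) + \psi_3 \right] + \delta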

  10. Accelerated multiple-pass moving average: a novel algorithm for baseline estimation in CE and its application to baseline correction on real-time bases.

    PubMed

    Solis, Alejandro; Rex, Mathew; Campiglia, Andres D; Sojo, Pedro

    2007-04-01

    We present a novel algorithm for baseline estimation in CE. The new algorithm, which we have named accelerated multiple-pass moving average (AMPMA), is combined with three preexisting low-pass filters (spike removal, moving average, and multi-pass moving average) to achieve real-time baseline correction with commercial instrumentation. The successful performance of AMPMA is demonstrated with simulated and experimental data. Straightforward comparison of experimental data clearly shows the improvement AMPMA provides to the linear fitting, LOD, and accuracy (absolute error) of CE analysis.
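
    The multi-pass moving-average idea that AMPMA accelerates can be sketched as follows (Python; window size, pass count, and the synthetic trace are assumptions for illustration, and the accelerated variant itself is not reproduced). Each pass smooths the current baseline estimate and clips points lying above it, so analyte peaks are progressively excluded while the drifting baseline is kept:

      import numpy as np

      def moving_average(y, window):
          kernel = np.ones(window) / window
          padded = np.pad(y, window // 2, mode="edge")
          return np.convolve(padded, kernel, mode="valid")

      def multipass_baseline(signal, window=51, n_passes=10):
          baseline = signal.astype(float).copy()
          for _ in range(n_passes):
              smoothed = moving_average(baseline, window)
              baseline = np.minimum(baseline, smoothed)   # clip peaks to the running estimate
          return baseline

      # Usage on a synthetic electropherogram: linear drift plus two narrow peaks.
      x = np.linspace(0.0, 10.0, 2000)
      trace = 0.2 * x + np.exp(-((x - 3.0) / 0.05) ** 2) + np.exp(-((x - 7.0) / 0.05) ** 2)
      corrected = trace - multipass_baseline(trace)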

  11. A dose calculation algorithm with correction for proton-nucleus interactions in non-water materials for proton radiotherapy treatment planning

    NASA Astrophysics Data System (ADS)

    Inaniwa, T.; Kanematsu, N.; Sato, S.; Kohno, R.

    2016-01-01

    In treatment planning for proton radiotherapy, the dose measured in water is applied to the patient dose calculation with density scaling by stopping power ratio ρ_S. Since the body tissues are chemically different from water, this approximation may cause dose calculation errors, especially due to differences in nuclear interactions. We proposed and validated an algorithm for correcting these errors. The dose in water is decomposed into three constituents according to the physical interactions of protons in water: the dose from primary protons continuously slowing down by electromagnetic interactions, the dose from protons scattered by elastic and/or inelastic interactions, and the dose resulting from nonelastic interactions. The proportions of the three dose constituents differ between body tissues and water. We determine correction factors for the proportion of dose constituents with Monte Carlo simulations in various standard body tissues, and formulated them as functions of their ρ_S for patient dose calculation. The influence of nuclear interactions on dose was assessed by comparing the Monte Carlo simulated dose and the uncorrected dose in common phantom materials. The influence around the Bragg peak amounted to -6% for polytetrafluoroethylene and 0.3% for polyethylene. The validity of the correction method was confirmed by comparing the simulated and corrected doses in the materials. The deviation was below 0.8% for all materials. The accuracy of the correction factors derived with Monte Carlo simulations was separately verified through irradiation experiments with a 235 MeV proton beam using common phantom materials. The corrected doses agreed with the measurements within 0.4% for all materials except graphite. The influence on tumor dose was assessed in a prostate case. The dose reduction in the tumor was below 0.5%. Our results verify that this algorithm is practical and accurate for proton radiotherapy treatment planning, and
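
    In simplified notation (assumed here for illustration, not the authors' exact formulation), the correction amounts to splitting the water dose into its three constituents and rescaling each with a tissue-dependent factor expressed as a function of the stopping power ratio ρ_S:

      D_{\mathrm{water}} = D_{\mathrm{EM}} + D_{\mathrm{el}} + D_{\mathrm{nonel}},
      \qquad
      D_{\mathrm{tissue}} \approx \sum_{i \in \{\mathrm{EM},\,\mathrm{el},\,\mathrm{nonel}\}} c_i(\rho_{\mathrm{S}})\, D_i .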

  12. Validation of Correction Algorithms for Near-IR Analysis of Human Milk in an Independent Sample Set-Effect of Pasteurization.

    PubMed

    Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph

    2016-02-26

    Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified.

  13. Parallelizable flood fill algorithm and corrective interface tracking approach applied to the simulation of multiple finite size bubbles merging with a free surface

    NASA Astrophysics Data System (ADS)

    Lafferty, Nathan; Badreddine, Hassan; Niceno, Bojan; Prasser, Horst-Michael

    2015-11-01

    A parallelizable flood fill algorithm is developed for identifying and tracking closed regions of fluid (dispersed phases) in CFD simulations of multiphase flows. It is used in conjunction with a newly developed method, corrective interface tracking, for simulating finite size dispersed bubbly flows in which the bubbles are too small relative to the grid to be simulated accurately with interface tracking techniques and too large relative to the grid for Lagrangian particle tracking techniques. The latter situation arises if local bubble-induced turbulence is resolved, or modeled with LES. With corrective interface tracking the governing equations are solved on a static Eulerian grid. A correcting force, derived from empirical correlation based hydrodynamic forces, is applied to the bubble, which is then advected using interface tracking techniques. This method results in accurate fluid-gas two-way coupling, bubble shapes, and terminal rise velocities. The flood fill algorithm and corrective interface tracking technique are applied to an air/water simulation of multiple bubbles rising and merging with a free surface. They are then validated against the same simulation performed using only interface tracking with a much finer grid.
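
    The region-identification step can be illustrated with a serial breadth-first flood fill (Python; the parallelizable variant and the coupling to the flow solver are not reproduced, and the 2D boolean mask here is a stand-in for the solver's phase-indicator field):

      import numpy as np
      from collections import deque

      def label_bubbles(gas_mask):
          # Label connected gas regions (bubbles) in a 2D phase-indicator array.
          # Returns 0 for liquid cells and 1..n for the n distinct bubbles.
          labels = np.zeros(gas_mask.shape, dtype=int)
          current = 0
          for seed in zip(*np.nonzero(gas_mask)):
              if labels[seed]:
                  continue                      # cell already assigned to a bubble
              current += 1
              labels[seed] = current
              queue = deque([seed])
              while queue:
                  i, j = queue.popleft()
                  for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                      ni, nj = i + di, j + dj
                      if (0 <= ni < gas_mask.shape[0] and 0 <= nj < gas_mask.shape[1]
                              and gas_mask[ni, nj] and not labels[ni, nj]):
                          labels[ni, nj] = current
                          queue.append((ni, nj))
          return labels

      # Usage: two separate gas pockets in a small liquid domain.
      mask = np.zeros((8, 8), dtype=bool)
      mask[1:3, 1:3] = True
      mask[5:7, 4:7] = True
      print(np.unique(label_bubbles(mask)))     # -> [0 1 2]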

  14. Maximum margin Bayesian network classifiers.

    PubMed

    Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian

    2012-03-01

    We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.

  15. Quadrupole Alignment and Trajectory Correction for Future Linear Colliders: SLC Tests of a Dispersion-Free Steering Algorithm

    SciTech Connect

    Assmann, R

    2004-06-08

    The feasibility of future linear colliders depends on achieving very tight alignment and steering tolerances. All proposals (NLC, JLC, CLIC, TESLA and S-BAND) currently require a total emittance growth in the main linac of less than 30-100% [1]. This should be compared with a 100% emittance growth in the much smaller SLC linac [2]. Major advances in alignment and beam steering techniques beyond those used in the SLC are necessary for the next generation of linear colliders. In this paper, we present an experimental study of quadrupole alignment with a dispersion-free steering algorithm. A closely related method (wakefield-free steering) takes into account wakefield effects [3]. However, this method cannot be studied at the SLC. The requirements for future linear colliders lead to new and unconventional ideas about alignment and beam steering. For example, no dipole correctors are foreseen for the standard trajectory correction in the NLC [4]; beam steering will be done by moving the quadrupole positions with magnet movers. This illustrates the close symbiosis between alignment, beam steering and beam dynamics that will emerge. It is no longer possible to consider the accelerator alignment as static with only a few surveys and realignments per year. The alignment in future linear colliders will be a dynamic process in which the whole linac, with thousands of beam-line elements, is aligned in a few hours or minutes, while the required accuracy of about 5 μm for the NLC quadrupole alignment [4] is a factor of 20 higher than in existing accelerators. The major task in alignment and steering is the accurate determination of the optimum beam-line position. Ideally one would like all elements to be aligned along a straight line. However, this is not practical. Instead a "smooth curve" is acceptable as long as its wavelength is much longer than the betatron wavelength of the accelerated beam. Conventional alignment methods are limited in accuracy by errors in the survey

  16. Algorithm for X-ray scatter, beam-hardening, and beam profile correction in diagnostic (kilovoltage) and treatment (megavoltage) cone beam CT.

    PubMed

    Maltz, Jonathan S; Gangadharan, Bijumon; Bose, Supratik; Hristov, Dimitre H; Faddegon, Bruce A; Paidi, Ajay; Bani-Hashemi, Ali R

    2008-12-01

    Quantitative reconstruction of cone beam X-ray computed tomography (CT) datasets requires accurate modeling of scatter, beam-hardening, beam profile, and detector response. Typically, commercial imaging systems use fast empirical corrections that are designed to reduce visible artifacts due to incomplete modeling of the image formation process. In contrast, Monte Carlo (MC) methods are much more accurate but are relatively slow. Scatter kernel superposition (SKS) methods offer a balance between accuracy and computational practicality. We show how a single SKS algorithm can be employed to correct both kilovoltage (kV) energy (diagnostic) and megavoltage (MV) energy (treatment) X-ray images. Using MC models of kV and MV imaging systems, we map intensities recorded on an amorphous silicon flat panel detector to water-equivalent thicknesses (WETs). Scattergrams are derived from acquired projection images using scatter kernels indexed by the local WET values and are then iteratively refined using a scatter magnitude bounding scheme that allows the algorithm to accommodate the very high scatter-to-primary ratios encountered in kV imaging. The algorithm recovers radiological thicknesses to within 9% of the true value at both kV and megavolt energies. Nonuniformity in CT reconstructions of homogeneous phantoms is reduced by an average of 76% over a wide range of beam energies and phantom geometries.
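
    A one-dimensional toy version of the scatter-kernel-superposition loop is sketched below (Python; the kernel amplitude and width vs. WET relationships, the bounding factor, and all numbers are assumptions, and the real algorithm operates on 2D projections with Monte Carlo-derived kernels). Each pass spreads a WET-indexed kernel from every detector element, bounds the resulting scatter estimate, and subtracts it from the measurement to refine the primary:

      import numpy as np

      def gaussian_kernel(width, size=31):
          x = np.arange(size) - size // 2
          k = np.exp(-0.5 * (x / width) ** 2)
          return k / k.sum()

      def sks_scatter_correct(measured, wet, n_iter=5, max_spr=4.0):
          primary = measured.astype(float).copy()
          half = 31 // 2
          for _ in range(n_iter):
              scatter = np.zeros_like(primary)
              for i, (p, t) in enumerate(zip(primary, wet)):
                  amplitude = 0.05 * t                  # assumed amplitude vs. WET
                  width = 2.0 + 0.1 * t                 # assumed width vs. WET
                  k = amplitude * p * gaussian_kernel(width)
                  lo = max(0, i - half)
                  hi = min(len(measured), i + half + 1)
                  scatter[lo:hi] += k[lo - i + half : hi - i + half]
              # Bound the scatter-to-primary ratio to stabilise high-scatter (kV) cases.
              scatter = np.minimum(scatter, max_spr * np.maximum(primary, 1e-6))
              primary = np.clip(measured - scatter, 0.0, None)
          return primary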

  17. Adaptive Bayes classifiers for remotely sensed data

    NASA Technical Reports Server (NTRS)

    Raulston, H. S.; Pace, M. O.; Gonzalez, R. C.

    1975-01-01

    An algorithm is developed for a learning, adaptive, statistical pattern classifier for remotely sensed data. The estimation procedure consists of two steps: (1) an optimal stochastic approximation of the parameters of interest, and (2) a projection of the parameters in time and space. The results reported are for Gaussian data in which the mean vector of each class may vary with time or position after the classifier is trained.

  18. Dynamic system classifier

    NASA Astrophysics Data System (ADS)

    Pumpe, Daniel; Greiner, Maksim; Müller, Ewald; Enßlin, Torsten A.

    2016-07-01

    Stochastic differential equations describe well many physical, biological, and sociological systems, despite the simplification often made in their derivation. Here the usage of simple stochastic differential equations to characterize and classify complex dynamical systems is proposed within a Bayesian framework. To this end, we develop a dynamic system classifier (DSC). The DSC first abstracts training data of a system in terms of time-dependent coefficients of the descriptive stochastic differential equation. Thereby the DSC identifies unique correlation structures within the training data. For definiteness we restrict the presentation of the DSC to oscillation processes with a time-dependent frequency ω(t) and damping factor γ(t). Although real systems might be more complex, this simple oscillator captures many characteristic features. The ω and γ time lines represent the abstract system characterization and permit the construction of efficient signal classifiers. Numerical experiments show that such classifiers perform well even in the low signal-to-noise regime.

  19. Dynamic system classifier.

    PubMed

    Pumpe, Daniel; Greiner, Maksim; Müller, Ewald; Enßlin, Torsten A

    2016-07-01

    Stochastic differential equations describe well many physical, biological, and sociological systems, despite the simplification often made in their derivation. Here the usage of simple stochastic differential equations to characterize and classify complex dynamical systems is proposed within a Bayesian framework. To this end, we develop a dynamic system classifier (DSC). The DSC first abstracts training data of a system in terms of time-dependent coefficients of the descriptive stochastic differential equation. Thereby the DSC identifies unique correlation structures within the training data. For definiteness we restrict the presentation of the DSC to oscillation processes with a time-dependent frequency ω(t) and damping factor γ(t). Although real systems might be more complex, this simple oscillator captures many characteristic features. The ω and γ time lines represent the abstract system characterization and permit the construction of efficient signal classifiers. Numerical experiments show that such classifiers perform well even in the low signal-to-noise regime.

  20. Correction of Faulty Sensors in Phased Array Radars Using Symmetrical Sensor Failure Technique and Cultural Algorithm with Differential Evolution

    PubMed Central

    Khan, S. U.; Qureshi, I. M.; Zaman, F.; Shoaib, B.; Naveed, A.; Basit, A.

    2014-01-01

    Three issues regarding sensor failure at any position in the antenna array are discussed. We assume that the sensor position is known. The issues include a rise in sidelobe levels, displacement of nulls from their original positions, and diminishing of null depth. The required null depth is achieved by making the weight of the symmetrical complement sensor passive. A hybrid method based on a memetic computing algorithm is proposed. The hybrid method combines the cultural algorithm with differential evolution (CADE), which is used for the reduction of sidelobe levels and placement of nulls at their original positions. A fitness function is used to minimize the error between the desired and estimated beam patterns along with null constraints. Simulation results for various scenarios have been given to exhibit the validity and performance of the proposed algorithm. PMID:24688440

  1. Hierarchical Pattern Classifier

    NASA Technical Reports Server (NTRS)

    Yates, Gigi L.; Eberlein, Susan J.

    1992-01-01

    Hierarchical pattern classifier reduces number of comparisons between input and memory vectors without reducing detail of final classification by dividing classification process into coarse-to-fine hierarchy that comprises first "grouping" step and second classification step. Three-layer neural network reduces computation further by reducing number of vector dimensions in processing. Concept applicable to pattern-classification problems with need to reduce amount of computation necessary to classify, identify, or match patterns to desired degree of resolution.

  2. Practical Atmospheric Correction Algorithms for a Multi-Spectral Sensor From the Visible Through the Thermal Spectral Regions

    SciTech Connect

    Borel, C.C.; Villeneuve, P.V.; Clodium, W.B.; Szymenski, J.J.; Davis, A.B.

    1999-04-04

    Deriving information about the Earth's surface requires atmospheric corrections of the measured top-of-the-atmosphere radiances. One possible path is to use atmospheric radiative transfer codes to predict how the radiance leaving the ground is affected by the scattering and attenuation. In practice the atmosphere is usually not well known and thus it is necessary to use more practical methods. The authors will describe how to find dark surfaces, estimate the atmospheric optical depth, estimate path radiance, and identify thick clouds using thresholds on reflectance, NDVI, and columnar water vapor. The authors describe a simple method to correct a visible channel contaminated by thin cirrus clouds.

  3. The Rocchio classifier and second generation wavelets

    NASA Astrophysics Data System (ADS)

    Carter, Patricia H.

    2007-04-01

    Classification and characterization of text is of ever growing importance in defense and national security. The text classification task is an instance of classification using sparse features residing in a high dimensional feature space. Two standard algorithms for this task (of a wide selection available) are the naive Bayes classifier and the Rocchio linear classifier. Naive Bayes classifiers are widely applied; the Rocchio algorithm is primarily used in document classification and information retrieval. Both these classifiers are popular because of their simplicity and ease of application, computational speed and reasonable performance. One aspect of the Rocchio approach, inherited from its information retrieval origin, is that it explicitly uses both positive and negative models. Parameters have been introduced which make it adaptive to the particulars of the corpora of interest and thereby improve its performance. The ideas inherent in these classifiers and in second generation wavelets can be recombined into new algorithms for classification. An example is a classifier using second generation wavelet-like functions for class probes that mimic the Rocchio positive template - negative template approach.
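
    As a minimal sketch of the classic Rocchio formulation referred to above (Python; the weights, threshold, and random stand-in tf-idf matrices are placeholders, and the wavelet-based extension is not shown), the class prototype combines a positive-template centroid with a down-weighted negative-template centroid, and documents are scored by cosine similarity to it:

      import numpy as np

      def rocchio_prototype(pos, neg, beta=16.0, gamma=4.0):
          # Positive-template centroid minus a down-weighted negative-template centroid.
          return beta * pos.mean(axis=0) - gamma * neg.mean(axis=0)

      def rocchio_score(docs, prototype):
          # Cosine similarity of each document vector to the class prototype.
          doc_norm = np.linalg.norm(docs, axis=1)
          return (docs @ prototype) / (doc_norm * np.linalg.norm(prototype) + 1e-12)

      # Usage with hypothetical tf-idf vectors; the 0.5 decision threshold is arbitrary.
      rng = np.random.default_rng(1)
      pos_train, neg_train = rng.random((20, 1000)), rng.random((200, 1000))
      proto = rocchio_prototype(pos_train, neg_train)
      is_positive = rocchio_score(rng.random((5, 1000)), proto) > 0.5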

  4. A correction factor for ablation algorithms assuming deviations of Lambert-Beer's law with a Gaussian-profile beam

    NASA Astrophysics Data System (ADS)

    Rodríguez-Marín, Francisco; Anera, Rosario G.; Alarcón, Aixa; Hita, E.; Jiménez, J. R.

    2012-04-01

    In this work, we propose an adjustment factor to be considered in ablation algorithms used in refractive surgery. This adjustment factor takes into account potential deviations of Lambert-Beer's law and the characteristics of a Gaussian-profile beam. To check whether the adjustment factor deduced is significant for visual function, we applied it to the paraxial Munnerlyn formula and found that it significantly influences the post-surgical corneal radius and p-factor. The use of the adjustment factor can help reduce the discrepancies in corneal shape between the real data and corneal shape expected when applying laser ablation algorithms.

  5. Dimensionality Reduction Through Classifier Ensembles

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.; Tumer, Kagan; Norwig, Peter (Technical Monitor)

    1999-01-01

    In data mining, one often needs to analyze datasets with a very large number of attributes. Performing machine learning directly on such data sets is often impractical because of extensive run times, excessive complexity of the fitted model (often leading to overfitting), and the well-known "curse of dimensionality." In practice, to avoid such problems, feature selection and/or extraction are often used to reduce data dimensionality prior to the learning step. However, existing feature selection/extraction algorithms either evaluate features by their effectiveness across the entire data set or simply disregard class information altogether (e.g., principal component analysis). Furthermore, feature extraction algorithms such as principal components analysis create new features that are often meaningless to human users. In this article, we present input decimation, a method that provides "feature subsets" that are selected for their ability to discriminate among the classes. These features are subsequently used in ensembles of classifiers, yielding results superior to single classifiers, ensembles that use the full set of features, and ensembles based on principal component analysis on both real and synthetic datasets.
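
    A rough sketch of the feature-subset selection step (Python; correlation with a one-vs-rest class label is used here as a simple proxy for "ability to discriminate among the classes", and the downstream classifiers and combination rule are omitted) could look like this:

      import numpy as np

      def input_decimation_subsets(X, y, k=10):
          # One feature subset per class: keep the k features whose absolute
          # correlation with that class's one-vs-rest label is largest.
          subsets = []
          for c in np.unique(y):
              target = (y == c).astype(float)
              corr = np.array([np.corrcoef(X[:, j], target)[0, 1] for j in range(X.shape[1])])
              corr = np.nan_to_num(np.abs(corr))
              subsets.append(np.argsort(corr)[-k:])
          return subsets   # each ensemble member is then trained only on X[:, subsets[c]]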

  6. Energy-Efficient Neuromorphic Classifiers.

    PubMed

    Martí, Daniel; Rigotti, Mattia; Seok, Mingoo; Fusi, Stefano

    2016-10-01

    Neuromorphic engineering combines the architectural and computational principles of systems neuroscience with semiconductor electronics, with the aim of building efficient and compact devices that mimic the synaptic and neural machinery of the brain. The energy consumption promised by neuromorphic engineering is extremely low, comparable to that of the nervous system. Until now, however, the neuromorphic approach has been restricted to relatively simple circuits and specialized functions, thereby obfuscating a direct comparison of their energy consumption to that used by conventional von Neumann digital machines solving real-world tasks. Here we show that a recent technology developed by IBM can be leveraged to realize neuromorphic circuits that operate as classifiers of complex real-world stimuli. Specifically, we provide a set of general prescriptions to enable the practical implementation of neural architectures that compete with state-of-the-art classifiers. We also show that the energy consumption of these architectures, realized on the IBM chip, is typically two or more orders of magnitude lower than that of conventional digital machines implementing classifiers with comparable performance. Moreover, the spike-based dynamics display a trade-off between integration time and accuracy, which naturally translates into algorithms that can be flexibly deployed for either fast and approximate classifications, or more accurate classifications at the mere expense of longer running times and higher energy costs. This work finally proves that the neuromorphic approach can be efficiently used in real-world applications and has significant advantages over conventional digital devices when energy consumption is considered.

  7. Analysis of vegetation by the application of a physically-based atmospheric correction algorithm to OLI data: a case study of Leonessa Municipality, Italy

    NASA Astrophysics Data System (ADS)

    Mei, Alessandro; Manzo, Ciro; Petracchini, Francesco; Bassani, Cristiana

    2016-04-01

    Remote sensing techniques allow estimation of vegetation parameters over large areas for forest health evaluation and biomass estimation. Moreover, the parametrization of specific indices such as the Normalized Difference Vegetation Index (NDVI) allows the study of biogeochemical cycles and radiative energy transfer processes between soil/vegetation and atmosphere. This paper focuses on the evaluation of vegetation cover analysis in Leonessa Municipality, Latium Region (Italy) by the use of 2015 Landsat 8 data, applying the OLI@CRI (OLI ATmospherically Corrected Reflectance Imagery) algorithm developed following the procedure described in Bassani et al. 2015. The OLI@CRI is based on the 6SV radiative transfer model (Kotchenova et al., 2006), which is able to simulate the radiative field in the coupled atmosphere-Earth system. NDVI was derived from the OLI corrected image. This index, widely used for biomass estimation and vegetation cover analysis, considers the sensor channels falling in the near infrared and red spectral regions, which are sensitive to chlorophyll absorption and cell structure. The retrieved product was then spatially resampled to the MODIS image resolution and validated against the MODIS NDVI taken as reference. The physically-based OLI@CRI algorithm also provides the incident solar radiation at the ground at the acquisition time by 6SV simulation. Thus, the OLI@CRI algorithm completes the remote sensing dataset required for a comprehensive analysis of the sub-regional biomass production by using data of the new generation remote sensing sensor and an atmospheric radiative transfer model. If the OLI@CRI algorithm is applied to a temporal series of OLI data, the influence of solar radiation on the above-ground vegetation can be analysed, as well as vegetation index variation.
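
    For completeness, the NDVI computed from the atmospherically corrected reflectance is simply the normalized difference of the near-infrared and red bands (for Landsat 8 OLI these are bands 5 and 4); a minimal sketch:

      import numpy as np

      def ndvi(nir_reflectance, red_reflectance):
          # Normalized Difference Vegetation Index from surface reflectance.
          nir = np.asarray(nir_reflectance, dtype=float)
          red = np.asarray(red_reflectance, dtype=float)
          return (nir - red) / (nir + red + 1e-12)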

  8. Decision boundary feature selection for non-parametric classifier

    NASA Technical Reports Server (NTRS)

    Lee, Chulhee; Landgrebe, David A.

    1991-01-01

    Feature selection has been one of the most important topics in pattern recognition. Although many authors have studied feature selection for parametric classifiers, few algorithms are available for feature selection for nonparametric classifiers. In this paper we propose a new feature selection algorithm based on decision boundaries for nonparametric classifiers. We first note that feature selection for pattern recognition is equivalent to retaining 'discriminantly informative features', and a discriminantly informative feature is related to the decision boundary. A procedure to extract discriminantly informative features based on a decision boundary for nonparametric classification is proposed. Experiments show that the proposed algorithm finds effective features for the nonparametric classifier with Parzen density estimation.

  9. Depth-correction algorithm that improves optical quantification of large breast lesions imaged by diffuse optical tomography

    PubMed Central

    Tavakoli, Behnoosh; Zhu, Quing

    2011-01-01

    Optical quantification of large lesions imaged with diffuse optical tomography in reflection geometry is depth dependent due to the exponential decay of photon density waves. We introduce a depth-correction method that incorporates the target depth information provided by coregistered ultrasound. It is based on balancing the weight matrix, using the maximum singular values of the target layers in depth, without changing the forward model. The performance of the method is evaluated using phantom targets and 10 clinical cases of larger malignant and benign lesions. The results for the homogeneous targets demonstrate that the location error of the reconstructed maximum absorption coefficient is reduced to the range of the reconstruction mesh size for phantom targets. Furthermore, the uniformity of the absorption distribution inside the lesions improves about two times, and the median of the absorption increases from 60 to 85% of its maximum compared to no depth correction. In addition, nonhomogeneous phantoms are characterized more accurately. Clinical examples show a similar trend as the phantom results and demonstrate the utility of the correction method for improving lesion quantification. PMID:21639570

  10. Recognition Using Hybrid Classifiers.

    PubMed

    Osadchy, Margarita; Keren, Daniel; Raviv, Dolev

    2016-04-01

    A canonical problem in computer vision is category recognition (e.g., find all instances of human faces, cars etc., in an image). Typically, the input for training a binary classifier is a relatively small sample of positive examples, and a huge sample of negative examples, which can be very diverse, consisting of images from a large number of categories. The difficulty of the problem sharply increases with the dimension and size of the negative example set. We propose to alleviate this problem by applying a "hybrid" classifier, which replaces the negative samples by a prior, and then finds a hyperplane which separates the positive samples from this prior. The method is extended to kernel space and to an ensemble-based approach. The resulting binary classifiers achieve an identical or better classification rate than SVM, while requiring far smaller memory and lower computational complexity to train and apply.

  11. Improvement of Image Quality and Diagnostic Performance by an Innovative Motion-Correction Algorithm for Prospectively ECG Triggered Coronary CT Angiography

    PubMed Central

    Lu, Bin; Yan, Hong-Bing; Mu, Chao-Wei; Gao, Yang; Hou, Zhi-Hui; Wang, Zhi-Qiang; Liu, Kun; Parinella, Ashley H.; Leipsic, Jonathon A.

    2015-01-01

    Objective To investigate the effect of a novel motion-correction algorithm (Snapshot Freeze, SSF) on image quality and diagnostic accuracy in patients undergoing prospectively ECG-triggered CCTA without administering rate-lowering medications. Materials and Methods Forty-six consecutive patients suspected of CAD prospectively underwent CCTA using prospective ECG-triggering without rate control and invasive coronary angiography (ICA). Image quality, interpretability, and diagnostic performance of SSF were compared with conventional multisegment reconstruction without SSF, using ICA as the reference standard. Results All subjects (35 men, 57.6 ± 8.9 years) successfully underwent ICA and CCTA. Mean heart rate was 68.8 ± 8.4 beats/min (range: 50–88 beats/min) without rate-controlling medications during CT scanning. Overall median image quality score (graded 1–4) was significantly increased from 3.0 to 4.0 by the new algorithm in comparison to conventional reconstruction. Overall interpretability was significantly improved, with a significant reduction in the number of non-diagnostic segments (690 of 694, 99.4% vs 659 of 694, 94.9%; P<0.001). However, only the right coronary artery (RCA) showed a statistically significant difference (45 of 46, 97.8% vs 35 of 46, 76.1%; P = 0.004) on a per-vessel basis in this regard. Diagnostic accuracy for detecting ≥50% stenosis was improved using the motion-correction algorithm on per-vessel [96.2% (177/184) vs 87.0% (160/184); P = 0.002] and per-segment [96.1% (667/694) vs 86.6% (601/694); P <0.001] levels, but there was not a statistically significant improvement on a per-patient level [97.8% (45/46) vs 89.1% (41/46); P = 0.203]. By artery analysis, diagnostic accuracy was improved only for the RCA [97.8% (45/46) vs 78.3% (36/46); P = 0.007]. Conclusion The intracycle motion correction algorithm significantly improved image quality and diagnostic interpretability in patients undergoing CCTA with prospective ECG triggering and

  12. A fuzzy classifier system for process control

    NASA Technical Reports Server (NTRS)

    Karr, C. L.; Phillips, J. C.

    1994-01-01

    A fuzzy classifier system that discovers rules for controlling a mathematical model of a pH titration system was developed by researchers at the U.S. Bureau of Mines (USBM). Fuzzy classifier systems successfully combine the strengths of learning classifier systems and fuzzy logic controllers. Learning classifier systems resemble familiar production rule-based systems, but they represent their IF-THEN rules by strings of characters rather than in the traditional linguistic terms. Fuzzy logic is a tool that allows for the incorporation of abstract concepts into rule based-systems, thereby allowing the rules to resemble the familiar 'rules-of-thumb' commonly used by humans when solving difficult process control and reasoning problems. Like learning classifier systems, fuzzy classifier systems employ a genetic algorithm to explore and sample new rules for manipulating the problem environment. Like fuzzy logic controllers, fuzzy classifier systems encapsulate knowledge in the form of production rules. The results presented in this paper demonstrate the ability of fuzzy classifier systems to generate a fuzzy logic-based process control system.

  13. Classifying threats with a 14-MeV neutron interrogation system.

    PubMed

    Strellis, Dan; Gozani, Tsahi

    2005-01-01

    SeaPODDS (Sea Portable Drug Detection System) is a non-intrusive tool for detecting concealed threats in hidden compartments of maritime vessels. This system consists of an electronic neutron generator, a gamma-ray detector, a data acquisition computer, and a laptop computer user-interface. Although initially developed to detect narcotics, recent algorithm developments have shown that the system is capable of correctly classifying a threat into one of four distinct categories: narcotic, explosive, chemical weapon, or radiological dispersion device (RDD). Detection of narcotics, explosives, and chemical weapons is based on gamma-ray signatures unique to the chemical elements. Elements are identified by their characteristic prompt gamma-rays induced by fast and thermal neutrons. Detection of RDD is accomplished by detecting gamma-rays emitted by common radioisotopes and nuclear reactor fission products. The algorithm phenomenology for classifying threats into the proper categories is presented here.

  14. Number in Classifier Languages

    ERIC Educational Resources Information Center

    Nomoto, Hiroki

    2013-01-01

    Classifier languages are often described as lacking genuine number morphology and treating all common nouns, including those conceptually count, as an unindividuated mass. This study argues that neither of these popular assumptions is true, and presents new generalizations and analyses gained by abandoning them. I claim that no difference exists…

  15. Classifying Cereal Data

    Cancer.gov

    The DSQ includes questions about cereal intake and allows respondents up to two responses on which cereals they consume. We classified each cereal reported first by hot or cold, and then along four dimensions: density of added sugars, whole grains, fiber, and calcium.

  16. Classifying Adolescent Perfectionists

    ERIC Educational Resources Information Center

    Rice, Kenneth G.; Ashby, Jeffrey S.; Gilman, Rich

    2011-01-01

    A large school-based sample of 9th-grade adolescents (N = 875) completed the Almost Perfect Scale-Revised (APS-R; Slaney, Mobley, Trippi, Ashby, & Johnson, 1996). Decision rules and cut-scores were developed and replicated that classify adolescents as one of two kinds of perfectionists (adaptive or maladaptive) or as nonperfectionists. A…

  17. Global clear-sky surface skin temperature from multiple satellites using a single-channel algorithm with angular anisotropy corrections

    NASA Astrophysics Data System (ADS)

    Scarino, Benjamin R.; Minnis, Patrick; Chee, Thad; Bedka, Kristopher M.; Yost, Christopher R.; Palikonda, Rabindra

    2017-01-01

    Surface skin temperature (Ts) is an important parameter for characterizing the energy exchange at the ground/water-atmosphere interface. The Satellite ClOud and Radiation Property retrieval System (SatCORPS) employs a single-channel thermal-infrared (TIR) method to retrieve Ts over clear-sky land and ocean surfaces from data taken by geostationary Earth orbit (GEO) and low Earth orbit (LEO) satellite imagers. GEO satellites can provide somewhat continuous estimates of Ts over the diurnal cycle in non-polar regions, while polar Ts retrievals from LEO imagers, such as the Advanced Very High Resolution Radiometer (AVHRR), can complement the GEO measurements. The combined global coverage of remotely sensed Ts, along with accompanying cloud and surface radiation parameters, produced in near-realtime and from historical satellite data, should be beneficial for both weather and climate applications. For example, near-realtime hourly Ts observations can be assimilated in high-temporal-resolution numerical weather prediction models and historical observations can be used for validation or assimilation of climate models. Key drawbacks to the utility of TIR-derived Ts data include the limitation to clear-sky conditions, the reliance on a particular set of analyses/reanalyses necessary for atmospheric corrections, and the dependence on viewing and illumination angles. Therefore, Ts validation with established references is essential, as is proper evaluation of Ts sensitivity to the atmospheric correction source. This article presents improvements on the NASA Langley GEO satellite and AVHRR TIR-based Ts product that is derived using a single-channel technique. The resulting clear-sky skin temperature values are validated with surface references and independent satellite products. Furthermore, an empirically adjusted theoretical model of satellite land surface temperature (LST) angular anisotropy is tested to improve satellite LST retrievals. Application of the anisotropic correction …
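
    The sketch below illustrates the kind of single-channel TIR inversion the abstract describes: the observed clear-sky radiance is corrected for atmospheric path radiance and surface reflection, then inverted through the Planck function at the channel wavelength. The emissivity, transmittance, and path-radiance values here are hypothetical placeholders; in SatCORPS they come from emissivity databases and NWP-driven atmospheric correction, so this is only an illustration of the retrieval equation, not the operational code.

      import numpy as np

      # Planck radiance and its inverse at a single thermal-IR wavelength.
      H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

      def planck(T, wav=11e-6):
          """Spectral radiance (W m^-2 sr^-1 m^-1) at temperature T (K)."""
          return (2 * H * C**2 / wav**5) / np.expm1(H * C / (wav * KB * T))

      def inv_planck(L, wav=11e-6):
          """Brightness temperature (K) corresponding to radiance L."""
          return (H * C / (wav * KB)) / np.log1p(2 * H * C**2 / (wav**5 * L))

      # Illustrative clear-sky quantities (hypothetical values).
      eps   = 0.97                   # surface emissivity at 11 um
      tau   = 0.85                   # atmospheric transmittance along the view path
      L_up  = planck(260.0) * 0.10   # upwelling atmospheric path radiance
      L_dn  = planck(265.0) * 0.12   # downwelling sky radiance at the surface
      L_toa = planck(287.5)          # observed top-of-atmosphere radiance

      # Invert the single-channel clear-sky radiative transfer equation:
      #   L_toa = eps*B(Ts)*tau + L_up + (1 - eps)*L_dn*tau
      B_surface = (L_toa - L_up - (1 - eps) * L_dn * tau) / (eps * tau)
      Ts = inv_planck(B_surface)
      print(f"retrieved skin temperature: {Ts:.2f} K")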

  18. SU-E-I-05: A Correction Algorithm for Kilovoltage Cone-Beam Computed Tomography Dose Calculations in Cervical Cancer Patients

    SciTech Connect

    Zhang, J; Zhang, W; Lu, J

    2015-06-15

    Purpose: To investigate the accuracy and feasibility of dose calculations using kilovoltage cone beam computed tomography in cervical cancer radiotherapy with a correction algorithm. Methods: The Hounsfield unit (HU) versus electron density (HU-density) curve was obtained for both the planning CT (pCT) and the kilovoltage cone beam CT (CBCT) using a CIRS-062 calibration phantom. Because pCT and kV-CBCT images have different HU values, directly using the CBCT HU-density curve for dose calculation on CBCT images may introduce deviations in the dose distribution, so the HU values of pCT and CBCT must be normalized to each other. A HU correction algorithm was therefore applied to the CBCT images (cCBCT). Fifteen intensity-modulated radiation therapy (IMRT) plans of cervical cancer were chosen, and the plans were transferred to the pCT and cCBCT data sets without any changes for dose calculations. Phantom and patient studies were carried out, and the dose differences and dose distributions were compared between the cCBCT and pCT plans. Results: The HU number of CBCT was measured several times, and the maximum change was less than 2%. Compared with pCT, the dose differences for CBCT and cCBCT images were 2.48%±0.65% (range: 1.3%∼3.8%) and 0.48%±0.21% (range: 0.1%∼0.82%) in the phantom study, respectively. For dose calculation on patient images, the dose differences were 2.25%±0.43% (range: 1.4%∼3.4%) and 0.63%±0.35% (range: 0.13%∼0.97%), respectively. For the dose distributions, the passing rate of cCBCT was higher than that of CBCT. Conclusion: Dose calculation on CBCT images is feasible in cervical cancer radiotherapy, and the correction algorithm offers acceptable accuracy. It will become a useful tool for adaptive radiation therapy.
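
    A minimal sketch of the HU normalization idea described above: each CBCT voxel is mapped to relative electron density through the CBCT calibration curve and then back to a planning-CT-equivalent HU through the pCT curve, so the pCT HU-density table can be reused for dose calculation. The calibration points below are hypothetical placeholders for values that would be measured on a CIRS-062-style phantom.

      import numpy as np

      # Hypothetical calibration curves (HU -> relative electron density),
      # as would be measured on a density phantom for each scanner.
      pct_hu,  pct_density  = np.array([-1000, -500, 0, 300, 1200]), np.array([0.00, 0.50, 1.00, 1.16, 1.70])
      cbct_hu, cbct_density = np.array([-950, -430, 40, 380, 1350]), np.array([0.00, 0.50, 1.00, 1.16, 1.70])

      def correct_cbct(cbct_image):
          """Convert CBCT HU to planning-CT-equivalent HU (cCBCT)."""
          # CBCT HU -> electron density via the CBCT calibration curve
          density = np.interp(cbct_image, cbct_hu, cbct_density)
          # electron density -> HU via the inverse of the pCT calibration curve
          return np.interp(density, pct_density, pct_hu)

      cbct_slice = np.array([[-900.0, 35.0], [400.0, 1300.0]])
      print(correct_cbct(cbct_slice))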

  19. Algorithm for x-ray beam hardening and scatter correction in low-dose cone-beam CT: phantom studies

    NASA Astrophysics Data System (ADS)

    Liu, Wenlei; Rong, Junyan; Gao, Peng; Liao, Qimei; Lu, HongBing

    2016-03-01

    X-ray scatter poses a significant limitation to image quality in cone-beam CT (CBCT), as does beam hardening, resulting in image artifacts, contrast reduction, and loss of CT number accuracy, while the x-ray radiation dose is also a concern. Many scatter and beam-hardening correction methods have been developed independently, but they are rarely combined with low-dose CT reconstruction. In this paper, we combine scatter suppression with beam hardening correction for sparse-view CT reconstruction to improve CT image quality and reduce CT radiation. Firstly, scatter was measured, estimated, and removed using measurement-based methods, assuming that the signal in the lead blocker shadow is attributable only to x-ray scatter. Secondly, beam hardening was modeled by estimating an equivalent attenuation coefficient at the effective energy, which was integrated into the forward projector of the algebraic reconstruction technique (ART). Finally, compressed sensing (CS) iterative reconstruction is carried out for sparse-view CT reconstruction to reduce the CT radiation. Preliminary Monte Carlo simulated experiments indicate that with only about 25% of the conventional dose, our method reduces the magnitude of the cupping artifact by a factor of 6.1, increases the contrast by a factor of 1.4 and the CNR by a factor of 15. The proposed method can provide good reconstructed images from a few projection views, with effective suppression of artifacts caused by scatter and beam hardening, as well as a reduced radiation dose. With this proposed framework and modeling, it may provide a new way for low-dose CT imaging.
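
    A toy sketch of two of the steps described above, under simplifying assumptions: scatter is estimated from detector samples behind lead blocker strips and subtracted, and the corrected, linearized projections are fed to a Kaczmarz-style ART sweep. The beam-hardening step (the equivalent attenuation coefficient at the effective energy) would enter when linearizing the projections and building the system matrix; all names and shapes here are hypothetical, and this is not the authors' implementation.

      import numpy as np

      def remove_scatter(projection, blocker_mask):
          """Measurement-based scatter removal: detector samples behind the
          lead blocker strips (blocker_mask True) are assumed to be pure
          scatter, interpolated across each row, and subtracted."""
          cols = np.arange(projection.shape[1])
          corrected = projection.astype(float).copy()
          for r in range(projection.shape[0]):
              blocked = blocker_mask[r]
              scatter = np.interp(cols, cols[blocked], projection[r, blocked])
              corrected[r] = np.clip(projection[r] - scatter, 0, None)
          return corrected

      def art_update(x, A, p, relax=0.2):
          """One Kaczmarz (ART) sweep: x is the image vector, A a dense toy
          system matrix, p the scatter-corrected, linearized projections."""
          for i in range(A.shape[0]):
              a = A[i]
              denom = a @ a
              if denom > 0:
                  x += relax * (p[i] - a @ x) / denom * a
          return np.clip(x, 0, None)

      # Toy demo: a 2-pixel image seen by 2 rays.
      A = np.array([[1.0, 0.0], [1.0, 1.0]])
      print(art_update(np.zeros(2), A, p=np.array([0.5, 1.2])))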

  20. Improvement of transport-corrected scattering stability and performance using a Jacobi inscatter algorithm for 2D-MOC

    DOE PAGES

    Stimpson, Shane; Collins, Benjamin; Kochunas, Brendan

    2017-03-10

    The MPACT code, being developed collaboratively by the University of Michigan and Oak Ridge National Laboratory, is the primary deterministic neutron transport solver being deployed within the Virtual Environment for Reactor Applications (VERA) as part of the Consortium for Advanced Simulation of Light Water Reactors (CASL). In many applications of the MPACT code, transport-corrected scattering has proven to be an obstacle in terms of stability, and considerable effort has been made to try to resolve the convergence issues that arise from it. Most of the convergence problems seem related to the transport-corrected cross sections, particularly when used in the 2-D method of characteristics (MOC) solver, which is the focus of this work. In this paper, the stability and performance of the 2-D MOC solver in MPACT are evaluated for two iteration schemes: Gauss-Seidel and Jacobi. With the Gauss-Seidel approach, as the MOC solver loops over groups, it uses the flux solution from the previous group to construct the inscatter source for the next group. Alternatively, the Jacobi approach uses only the fluxes from the previous outer iteration to determine the inscatter source for each group. Consequently, for the Jacobi iteration, the loop over groups can be moved from the outermost loop, as is the case with the Gauss-Seidel sweeper, to the innermost loop, allowing for a substantial increase in efficiency by minimizing the overhead of retrieving segment, region, and surface index information from the ray tracing data. Several test problems are assessed: (1) Babcock & Wilcox 1810 Core I, (2) Dimple S01A-Sq, (3) VERA Progression Problem 5a, and (4) VERA Problem 2a. The Jacobi iteration exhibits better stability than Gauss-Seidel, allowing for converged solutions to be obtained over a much wider range of iteration control parameters. Additionally, the MOC solve time with the Jacobi approach is roughly 2.0-2.5× faster per sweep. While the performance and stability of …
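
    The difference between the two iteration schemes can be made concrete with a small sketch. The transport_sweep stub below is a hypothetical stand-in for the 2-D MOC sweep of one energy group; the point is only the ordering of the group loop and which flux estimate feeds the inscatter source, not the physics.

      import numpy as np

      def transport_sweep(group, source):
          """Stand-in for the 2-D MOC sweep of one energy group (hypothetical)."""
          return 0.5 * source + 1.0

      def inscatter_gauss_seidel(phi, sigma_s):
          """Gauss-Seidel: while sweeping group g, the source already uses the
          fluxes just updated for earlier groups in this same outer iteration."""
          for g in range(phi.shape[0]):
              source = sigma_s[g] @ phi            # uses the newest phi available
              phi[g] = transport_sweep(g, source)
          return phi

      def inscatter_jacobi(phi_old, sigma_s):
          """Jacobi: every group source is built from the previous outer
          iteration's fluxes, so the group loop can become the innermost loop
          and the ray-tracing data are traversed once for all groups."""
          sources = sigma_s @ phi_old              # all group sources at once
          return np.array([transport_sweep(g, sources[g])
                           for g in range(phi_old.shape[0])])

      G, N = 4, 3
      sigma_s = np.full((G, G), 0.05)              # toy group-to-group scattering
      phi = np.ones((G, N))
      print(inscatter_jacobi(phi, sigma_s))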

  1. A simulation based approach to optimize inventory replenishment with RAND algorithm: An extended study of corrected demand using Holt's method for textile industry

    NASA Astrophysics Data System (ADS)

    Morshed, Mohammad Sarwar; Kamal, Mostafa Mashnoon; Khan, Somaiya Islam

    2016-07-01

    Inventory has been a major concern in supply chain management, and numerous recent studies on inventory control have brought forth methods that efficiently manage inventory and related overheads by reducing the cost of replenishment. This research is aimed at providing a better replenishment policy for multi-product, single-supplier situations for chemical raw materials of textile industries in Bangladesh. It is assumed that industries currently pursue an individual replenishment system. The purpose is to find the optimum ideal cycle time and the individual replenishment cycle time of each product that will yield the lowest annual holding and ordering cost, and also to find the optimum ordering quantity. In this paper an indirect grouping strategy has been used; it is suggested that the indirect grouping strategy outperforms the direct grouping strategy when the major ordering cost is high. An algorithm by Kaspi and Rosenblatt (1991) called RAND is exercised for its simplicity and ease of application. RAND provides an ideal cycle time (T) for replenishment and an integer multiplier (ki) for each individual item, so the replenishment cycle time for each product is found as T×ki. Firstly, based on actual demand data, a comparison between the currently prevailing (individual) process and RAND is provided, which shows a 49% improvement in total cost of replenishment. Secondly, discrepancies in demand are corrected by using Holt's method; however, demand can only be forecasted one or two months into the future because of the demand pattern of the industry under consideration. Evidently, application of RAND with corrected demand displays even greater improvement. The results of this study demonstrate that the cost of replenishment can be significantly reduced by applying the RAND algorithm and exponential smoothing models.
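
    The demand-correction step relies on Holt's linear (double) exponential smoothing, which can be sketched in a few lines; the corrected forecasts would then feed the RAND grouping heuristic that sets T and each ki. The smoothing constants and demand figures below are illustrative, not the values used in the study.

      def holt_forecast(y, alpha=0.3, beta=0.1, horizon=2):
          """Holt's linear exponential smoothing: level + trend components,
          returning forecasts 1..horizon periods ahead."""
          level, trend = y[0], y[1] - y[0]
          for obs in y[1:]:
              prev_level = level
              level = alpha * obs + (1 - alpha) * (level + trend)
              trend = beta * (level - prev_level) + (1 - beta) * trend
          return [level + h * trend for h in range(1, horizon + 1)]

      monthly_demand = [120, 132, 128, 141, 150, 158]   # hypothetical demand data
      print(holt_forecast(monthly_demand))              # 1- and 2-month-ahead forecasts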

  2. The Algorithm Theoretical Basis Document for the Atmospheric Delay Correction to GLAS Laser Altimeter Ranges. Volume 8

    NASA Technical Reports Server (NTRS)

    Herring, Thomas A.; Quinn, Katherine J.

    2012-01-01

    NASA's Ice, Cloud, and Land Elevation Satellite (ICESat) mission will be launched in late 2001. Its primary instrument is the Geoscience Laser Altimeter System (GLAS). The main purpose of this instrument is to measure elevation changes of the Greenland and Antarctic ice sheets. To measure the ranges accurately it is necessary to correct for the atmospheric delay of the laser pulses. The atmospheric delay depends on the integral of the refractive index along the path that the laser pulse travels through the atmosphere, and the refractive index of air at optical wavelengths is a function of density and molecular composition. For ray paths near zenith and closed-form equations for the refractivity, the atmospheric delay can be shown to be directly related to surface pressure and total column precipitable water vapor. For ray paths off zenith, a mapping function relates the delay to the zenith delay. The closed-form equations for refractivity recommended by the International Union of Geodesy and Geophysics (IUGG) are optimized for ground-based geodesy techniques, and in the next section we consider whether these equations are suitable for satellite laser altimetry.
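
    The structure of the correction can be illustrated with a short sketch: a zenith delay built from surface pressure and precipitable water vapor, scaled by a simple geometric mapping function for off-zenith paths. The coefficients below are purely illustrative placeholders; the document's closed-form refractivity equations, not these numbers, define the actual values.

      import numpy as np

      # Illustrative coefficients only -- the real values follow from the
      # closed-form refractivity equations discussed in the document.
      K_DRY = 2.3e-3    # m of delay per hPa of surface pressure (hypothetical)
      K_WET = 6.0e-5    # m of delay per mm of precipitable water vapor (hypothetical)

      def atmospheric_delay(surface_pressure_hpa, pwv_mm, zenith_angle_deg):
          """Zenith delay from surface pressure and column water vapor, mapped
          to the laser's actual path with a simple 1/cos mapping function."""
          zenith_delay = K_DRY * surface_pressure_hpa + K_WET * pwv_mm
          mapping = 1.0 / np.cos(np.radians(zenith_angle_deg))
          return zenith_delay * mapping

      print(atmospheric_delay(1013.25, 15.0, 2.0))   # metres, near-nadir example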

  3. Crystal and molecular structures of selected organic and organometallic compounds and an algorithm for empirical absorption correction

    SciTech Connect

    Karcher, B.

    1981-10-01

    Cr(CO)₅(SCMe₂) crystallizes in the monoclinic space group P2₁/a with a = 10.468(8), b = 11.879(5), c = 9.575(6) Å, and β = 108.14(9)°, with an octahedral coordination around the chromium atom. PSN₃C₆H₁₂ crystallizes in the monoclinic space group P2₁/n with a = 10.896(1), b = 11.443(1), c = 7.288(1) Å, and β = 104.45(1)°. Each of the five-membered rings in this structure contains a carbon atom which is puckered toward the sulfur and out of the nearly planar arrays of the remaining ring atoms. (RhO₄N₄C₄₈H₅₆)⁺(BC₂₄H₂₀)⁻·1.5NC₂H₃ crystallizes in the triclinic space group P1 with a = 17.355(8), b = 21.135(10), c = 10.757(5) Å, α = 101.29(5), β = 98.36(5), and γ = 113.92(4)°. Each Rh cation complex is a monomer. MoP₂O₁₀C₁₆H₂₂ crystallizes in the monoclinic space group P2₁/c with a = 12.220(3), b = 9.963(2), c = 20.150(6) Å, and β = 103.01(3)°. The molybdenum atom occupies the axial position of the six-membered ring of each of the two phosphorinane ligands. An empirical absorption correction program was written.

  4. Classifying Facial Actions

    PubMed Central

    Donato, Gianluca; Bartlett, Marian Stewart; Hager, Joseph C.; Ekman, Paul; Sejnowski, Terrence J.

    2010-01-01

    The Facial Action Coding System (FACS) [23] is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These techniques include analysis of facial motion through estimation of optical flow; holistic spatial analysis, such as principal component analysis, independent component analysis, local feature analysis, and linear discriminant analysis; and methods based on the outputs of local filters, such as Gabor wavelet representations and local principal components. Performance of these systems is compared to naive and expert human subjects. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96 percent accuracy for classifying 12 facial actions of the upper and lower face. The results provide converging evidence for the importance of using local filters, high spatial frequencies, and statistical independence for classifying facial actions. PMID:21188284

  5. A General Fuzzy Cerebellar Model Neural Network Multidimensional Classifier Using Intuitionistic Fuzzy Sets for Medical Identification.

    PubMed

    Zhao, Jing; Lin, Lo-Yi; Lin, Chih-Min

    2016-01-01

    The diversity of medical factors makes the analysis and judgment of uncertainty one of the challenges of medical diagnosis. A well-designed classification and judgment system for medical uncertainty can increase the rate of correct medical diagnosis. In this paper, a new multidimensional classifier is proposed by using an intelligent algorithm, which is the general fuzzy cerebellar model neural network (GFCMNN). To obtain more information about uncertainty, an intuitionistic fuzzy linguistic term is employed to describe medical features. The solution of classification is obtained by a similarity measurement. The advantages of the novel classifier proposed here are drawn out by comparing the same medical example under the methods of intuitionistic fuzzy sets (IFSs) and intuitionistic fuzzy cross-entropy (IFCE) with different score functions. Cross-verification experiments are also conducted to further test the classification ability of the GFCMNN multidimensional classifier. All of these experimental results show the effectiveness of the proposed GFCMNN multidimensional classifier and indicate that it can assist in making correct medical diagnoses involving multiple categories.
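
    The similarity-based classification step can be sketched as follows, using a normalized-Hamming similarity between intuitionistic fuzzy descriptions (one common choice; the measure actually used by the GFCMNN paper may differ). The patient description and class prototypes are hypothetical.

      import numpy as np

      def ifs_similarity(a, b):
          """Similarity between two intuitionistic fuzzy sets, each given as
          (membership, non-membership) pairs per feature; normalized-Hamming form."""
          a, b = np.asarray(a, float), np.asarray(b, float)
          return 1.0 - 0.5 * np.mean(np.abs(a - b).sum(axis=1))

      # Hypothetical patient description and two class prototypes.
      patient = [(0.7, 0.2), (0.4, 0.5), (0.9, 0.05)]
      class_a = [(0.8, 0.1), (0.3, 0.6), (0.85, 0.1)]
      class_b = [(0.2, 0.7), (0.9, 0.05), (0.3, 0.6)]
      scores = {name: ifs_similarity(patient, proto)
                for name, proto in [("class A", class_a), ("class B", class_b)]}
      print(max(scores, key=scores.get), scores)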

  6. A General Fuzzy Cerebellar Model Neural Network Multidimensional Classifier Using Intuitionistic Fuzzy Sets for Medical Identification

    PubMed Central

    Zhao, Jing; Lin, Lo-Yi

    2016-01-01

    The diversity of medical factors makes the analysis and judgment of uncertainty one of the challenges of medical diagnosis. A well-designed classification and judgment system for medical uncertainty can increase the rate of correct medical diagnosis. In this paper, a new multidimensional classifier is proposed by using an intelligent algorithm, which is the general fuzzy cerebellar model neural network (GFCMNN). To obtain more information about uncertainty, an intuitionistic fuzzy linguistic term is employed to describe medical features. The solution of classification is obtained by a similarity measurement. The advantages of the novel classifier proposed here are drawn out by comparing the same medical example under the methods of intuitionistic fuzzy sets (IFSs) and intuitionistic fuzzy cross-entropy (IFCE) with different score functions. Cross-verification experiments are also conducted to further test the classification ability of the GFCMNN multidimensional classifier. All of these experimental results show the effectiveness of the proposed GFCMNN multidimensional classifier and indicate that it can assist in making correct medical diagnoses involving multiple categories. PMID:27298619

  7. Transionospheric chirp event classifier

    SciTech Connect

    Argo, P.E.; Fitzgerald, T.J.; Freeman, M.J.

    1995-09-01

    In this paper we will discuss a project designed to provide computer recognition of the transionospheric chirps/pulses measured by the Blackbeard (BB) satellite, and expected to be measured by the upcoming FORTE satellite. The Blackbeard data has been perused by human means; this has been satisfactory for the relatively small amount of data taken by Blackbeard. But with the advent of the FORTE system, which by some accounts might 'see' thousands of events per day, it is important to provide a software/hardware method of accurately analyzing the data. In fact, we are providing an onboard DSP system for FORTE, which will test the usefulness of our Event Classifier techniques in situ. At present we are constrained to work with data from the Blackbeard satellite, and will discuss the progress made to date.

  8. Transionospheric chirp event classifier

    NASA Astrophysics Data System (ADS)

    Argo, P. E.; Fitzgerald, T. J.; Freeman, M. J.

    In this paper we will discuss a project designed to provide computer recognition of the transionospheric chirps/pulses measured by the Blackbeard (BB) satellite, and expected to be measured by the upcoming FORTE satellite. The Blackbeard data has been perused by human means - this has been satisfactory for the relatively small amount of data taken by Blackbeard. But with the advent of the FORTE system, which by some accounts might 'see' thousands of events per day, it is important to provide a software/hardware method of accurately analyzing the data. In fact, we are providing an onboard DSP system for FORTE, which will test the usefulness of our Event Classifier techniques in situ. At present we are constrained to work with data from the Blackbeard satellite, and will discuss the progress made to date.

  9. Ranked Multi-Label Rules Associative Classifier

    NASA Astrophysics Data System (ADS)

    Thabtah, Fadi

    Associative classification is a promising approach in data mining, which integrates association rule discovery and classification. In this paper, we present a novel associative classification technique called Ranked Multilabel Rule (RMR) that derives rules with multiple class labels. Rules derived by current associative classification algorithms overlap in their training data records, resulting in many redundant and useless rules. However, RMR removes the overlapping between rules using a pruning heuristic and ensures that rules in the final classifier do not share training records, resulting in more accurate classifiers. Experimental results obtained on twenty data sets show that the classifiers produced by RMR are highly competitive if compared with those generated by decision trees and other popular associative techniques such as CBA, with respect to prediction accuracy.

  10. On Asymmetric Classifier Training for Detector Cascades

    SciTech Connect

    Gee, Timothy Felix

    2006-01-01

    This paper examines the Asymmetric AdaBoost algorithm introduced by Viola and Jones for cascaded face detection. The Viola and Jones face detector uses cascaded classifiers to successively filter, or reject, non-faces. In this approach most non-faces are easily rejected by the earlier classifiers in the cascade, thus reducing the overall number of computations. This requires earlier cascade classifiers to very seldom reject true instances of faces. To reflect this training goal, Viola and Jones introduce a weighting parameter for AdaBoost iterations and show it enforces a desirable bound. During their implementation, a modification to the proposed weighting was introduced, while enforcing the same bound. The goal of this paper is to examine their asymmetric weighting by putting AdaBoost in the form of Additive Regression as was done by Friedman, Hastie, and Tibshirani. The author believes this helps to explain the approach and adds another connection between AdaBoost and Additive Regression.
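
    A hedged sketch of cost-asymmetric boosting in the spirit of the weighting discussed above: positive (face) examples receive a per-round weight multiplier so that missing them is penalized more than false alarms, with the asymmetry spread across the rounds. The exact form Viola and Jones use, and the modification this paper analyzes, differ in detail; the data and the cost factor here are synthetic.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      def asymmetric_adaboost(X, y, k=3.0, rounds=20):
          """y in {-1,+1}; k > 1 makes false negatives roughly k times costlier.
          The asymmetry is applied gradually via a per-round weight multiplier."""
          n = len(y)
          w = np.ones(n) / n
          stumps, alphas = [], []
          per_round = np.exp(y * np.log(np.sqrt(k)) / rounds)   # >1 for positives
          for _ in range(rounds):
              w = w * per_round
              w /= w.sum()
              stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
              pred = stump.predict(X)
              err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)
              alpha = 0.5 * np.log((1 - err) / err)
              w *= np.exp(-alpha * y * pred)
              stumps.append(stump)
              alphas.append(alpha)
          return stumps, np.array(alphas)

      def predict(stumps, alphas, X):
          votes = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
          return np.sign(votes)

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 5))
      y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
      stumps, alphas = asymmetric_adaboost(X, y)
      print((predict(stumps, alphas, X) == y).mean())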

  11. 3D cerebral MR image segmentation using multiple-classifier system.

    PubMed

    Amiri, Saba; Movahedi, Mohammad Mehdi; Kazemi, Kamran; Parsaei, Hossein

    2017-03-01

    The three soft brain tissues, white matter (WM), gray matter (GM), and cerebral spinal fluid (CSF), identified in a magnetic resonance (MR) image via image segmentation techniques can aid in structural and functional brain analysis, measurement and visualization of the brain's anatomical structures, diagnosis of neurodegenerative disorders, and surgical planning and image-guided interventions, but only if the obtained segmentation results are correct. This paper presents a multiple-classifier-based system for automatic brain tissue segmentation from cerebral MR images. The developed system categorizes each voxel of a given MR image as GM, WM, or CSF. The algorithm consists of preprocessing, feature extraction, and supervised classification steps. In the first step, intensity non-uniformity in a given MR image is corrected and then non-brain tissues such as the skull, eyeballs, and skin are removed from the image. For each voxel, statistical and non-statistical features were computed and used as a feature vector representing the voxel. Three multilayer perceptron (MLP) neural networks trained using three different datasets were used as the base classifiers of the multiple-classifier system. The output of the base classifiers was fused using a majority voting scheme. Evaluation of the proposed system was performed using BrainWeb simulated MR images with different noise and intensity non-uniformity levels and Internet Brain Segmentation Repository (IBSR) real MR images. The quantitative assessment of the proposed method using Dice, Jaccard, and conformity coefficient metrics demonstrates an improvement (around 5% for CSF) in accuracy as compared to a single MLP classifier and existing methods and tools such as FSL-FAST and SPM. As accurately segmenting an MR image is of paramount importance for successfully promoting the clinical application of MR image segmentation techniques, the improvement obtained by using the multiple-classifier-based system is encouraging.
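
    The fusion step can be sketched as below: one MLP per training set, with per-voxel label predictions combined by majority voting. The feature vectors, network size, and datasets are synthetic stand-ins for the per-voxel features and training sets described in the abstract.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      def train_base_classifiers(datasets):
          """One MLP per training set (the paper uses three MLPs trained on
          three different datasets); the architecture here is illustrative."""
          return [MLPClassifier(hidden_layer_sizes=(30,), max_iter=500,
                                random_state=i).fit(X, y)
                  for i, (X, y) in enumerate(datasets)]

      def majority_vote(classifiers, X):
          """Fuse per-voxel label predictions (e.g. 0=CSF, 1=GM, 2=WM) by voting."""
          votes = np.stack([clf.predict(X) for clf in classifiers])   # (n_clf, n_voxels)
          return np.apply_along_axis(
              lambda v: np.bincount(v, minlength=3).argmax(), 0, votes)

      # Hypothetical voxel feature vectors and tissue labels.
      rng = np.random.default_rng(1)
      datasets = [(rng.normal(size=(300, 8)), rng.integers(0, 3, 300)) for _ in range(3)]
      ensemble = train_base_classifiers(datasets)
      labels = majority_vote(ensemble, rng.normal(size=(50, 8)))
      print(labels[:10])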

  12. A simulation algorithm for ultrasound liver backscattered signals.

    PubMed

    Zatari, D; Botros, N; Dunn, F

    1995-11-01

    In this study, we present a simulation algorithm for the backscattered ultrasound signal from liver tissue. The algorithm simulates backscattered signals from normal liver and three different liver abnormalities. The performance of the algorithm has been tested by statistically comparing the simulated signals with corresponding signals obtained from a previous in vivo study. To verify that the simulated signals can be classified correctly we have applied a classification technique based on an artificial neural network. The acoustic features extracted from the spectrum over a 2.5 MHz bandwidth are the attenuation coefficient and the change of speed of sound with frequency (dispersion). Our results show that the algorithm performs satisfactorily. Further testing of the algorithm is conducted by the use of a data acquisition and analysis system designed by the authors, where several simulated signals are stored in memory chips and classified according to their abnormalities.

  13. Classification of Horse Gaits Using FCM-Based Neuro-Fuzzy Classifier from the Transformed Data Information of Inertial Sensor

    PubMed Central

    Lee, Jae-Neung; Lee, Myung-Won; Byeon, Yeong-Hyeon; Lee, Won-Sik; Kwak, Keun-Chang

    2016-01-01

    In this study, we classify four horse gaits (walk, sitting trot, rising trot, canter) of three breeds of horse (Jeju, Warmblood, and Thoroughbred) using a neuro-fuzzy classifier (NFC) of the Takagi-Sugeno-Kang (TSK) type from data information transformed by a wavelet packet (WP). The design of the NFC is accomplished by using a fuzzy c-means (FCM) clustering algorithm that can solve the problem of dimensionality increase due to the flexible scatter partitioning. For this purpose, we use the rider’s hip motion from the sensor information collected by inertial sensors as feature data for the classification of a horse’s gaits. Furthermore, we develop a coaching system under both real horse riding and simulator environments and propose a method for analyzing the rider’s motion. Using the results of the analysis, the rider can be coached in the correct motion corresponding to the classified gait. To construct a motion database, the data collected from 16 inertial sensors attached to a motion capture suit worn by one of the country’s top-level horse riding experts were used. Experiments using the original motion data and the transformed motion data were conducted to evaluate the classification performance using various classifiers. The experimental results revealed that the presented FCM-NFC showed a better accuracy performance (97.5%) than a neural network classifier (NNC), naive Bayesian classifier (NBC), and radial basis function network classifier (RBFNC) for the transformed motion data. PMID:27171098

  14. Classifying partner femicide.

    PubMed

    Dixon, Louise; Hamilton-Giachritsis, Catherine; Browne, Kevin

    2008-01-01

    The heterogeneity of domestic violent men has long been established. However, research has failed to examine this phenomenon among men committing the most severe form of domestic violence. This study aims to use a multidimensional approach to empirically construct a classification system of men who are incarcerated for the murder of their female partner based on the Holtzworth-Munroe and Stuart (1994) typology. Ninety men who had been convicted and imprisoned for the murder of their female partner or spouse in England were identified from two prison samples. A content dictionary defining offense and offender characteristics associated with two dimensions of psychopathology and criminality was developed. These variables were extracted from institutional records via content analysis and analyzed for thematic structure using multidimensional scaling procedures. The resultant framework classified 80% (n = 72) of the sample into three subgroups of men characterized by (a) low criminality/low psychopathology (15%), (b) moderate-high criminality/ high psychopathology (36%), and (c) high criminality/low-moderate psychopathology (49%). The latter two groups are akin to Holtzworth-Munroe and Stuart's (1994) generally violent/antisocial and dysphoric/borderline offender, respectively. The implications for intervention, developing consensus in research methodology across the field, and examining typologies of domestic violent men prospectively are discussed.

  15. Robust Framework to Combine Diverse Classifiers Assigning Distributed Confidence to Individual Classifiers at Class Level

    PubMed Central

    Arshad, Sannia; Rho, Seungmin

    2014-01-01

    We have presented a classification framework that combines multiple heterogeneous classifiers in the presence of class label noise. An extension of m-Mediods based modeling is presented that generates models of the various classes whilst identifying and filtering noisy training data. This noise-free data is further used to learn models for other classifiers such as GMM and SVM. A weight learning method is then introduced to learn weights on each class for different classifiers to construct an ensemble. For this purpose, we applied a genetic algorithm to search for an optimal weight vector on which the classifier ensemble is expected to give the best accuracy. The proposed approach is evaluated on a variety of real-life datasets. It is also compared with existing standard ensemble techniques such as AdaBoost, Bagging, and Random Subspace Methods. Experimental results show the superiority of the proposed ensemble method as compared to its competitors, especially in the presence of class label noise and imbalanced classes. PMID:25295302
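
    A minimal sketch of the fusion step: each base classifier receives a weight per class, and a toy evolutionary search picks the weight matrix that maximizes validation accuracy. This is a simplified stand-in for the paper's genetic algorithm and omits the m-Mediods noise filtering; all shapes and parameters are hypothetical.

      import numpy as np

      def weighted_fusion(probas, weights):
          """probas: list of (n_samples, n_classes) arrays, one per classifier.
          weights: (n_classifiers, n_classes) class-level weights."""
          combined = sum(w * p for w, p in zip(weights, probas))
          return combined.argmax(axis=1)

      def evolve_weights(probas, y, n_classes, generations=50, pop=30, rng=None):
          """Toy evolutionary search over class-level weights (GA stand-in)."""
          rng = rng or np.random.default_rng(0)
          n_clf = len(probas)
          best_w, best_acc = None, -1.0
          population = rng.random((pop, n_clf, n_classes))
          for _ in range(generations):
              accs = np.array([(weighted_fusion(probas, w) == y).mean()
                               for w in population])
              order = accs.argsort()[::-1]
              if accs[order[0]] > best_acc:
                  best_acc, best_w = accs[order[0]], population[order[0]].copy()
              parents = population[order[:pop // 2]]
              children = parents + 0.1 * rng.normal(size=parents.shape)   # mutation
              population = np.concatenate([parents, np.clip(children, 0, None)])
          return best_w, best_acc

      rng = np.random.default_rng(4)
      probas = [rng.dirichlet(np.ones(3), 100) for _ in range(4)]   # 4 fake classifiers
      y = rng.integers(0, 3, 100)
      w, acc = evolve_weights(probas, y, n_classes=3, rng=rng)
      print(acc)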

  16. Reconfiguration-based implementation of SVM classifier on FPGA for Classifying Microarray data.

    PubMed

    Hussain, Hanaa M; Benkrid, Khaled; Seker, Huseyin

    2013-01-01

    Classifying Microarray data, which are of high dimensional nature, requires high computational power. Support Vector Machines-based classifier (SVM) is among the most common and successful classifiers used in the analysis of Microarray data but also requires high computational power due to its complex mathematical architecture. Implementing SVM on hardware exploits the parallelism available within the algorithm kernels to accelerate the classification of Microarray data. In this work, a flexible, dynamically and partially reconfigurable implementation of the SVM classifier on Field Programmable Gate Array (FPGA) is presented. The SVM architecture achieved up to 85× speed-up over equivalent general purpose processor (GPP) showing the capability of FPGAs in enhancing the performance of SVM-based analysis of Microarray data as well as future bioinformatics applications.

  17. Evaluation and Analysis of SEASAT-A Scanning Multichannel Microwave Radiometer (SSMR) Antenna Pattern Correction (APC) Algorithm. Sub-task 4: Interim Mode T Sub B Versus Cross and Nominal Mode T Sub B

    NASA Technical Reports Server (NTRS)

    Kitzis, J. L.; Kitzis, S. N.

    1979-01-01

    The brightness temperature data produced by the SMMR Antenna Pattern Correction algorithm are evaluated. The evaluation consists of: (1) a direct comparison of the outputs of the interim, cross, and nominal APC modes; (2) a refinement of the previously determined cos beta estimates; and (3) a comparison of the world brightness temperature (T sub B) map with actual SMMR measurements.

  18. Jitter Correction

    NASA Technical Reports Server (NTRS)

    Waegell, Mordecai J.; Palacios, David M.

    2011-01-01

    Jitter_Correct.m is a MATLAB function that automatically measures and corrects inter-frame jitter in an image sequence to a user-specified precision. In addition, the algorithm dynamically adjusts the image sample size to increase the accuracy of the measurement. The Jitter_Correct.m function takes an image sequence with unknown frame-to-frame jitter and computes the translations of each frame (column and row, in pixels) relative to a chosen reference frame with sub-pixel accuracy. The translations are measured using a Cross Correlation Fourier transformation method in which the relative phase of the two transformed images is fit to a plane. The measured translations are then used to correct the inter-frame jitter of the image sequence. The function also dynamically expands the image sample size over which the cross-correlation is measured to increase the accuracy of the measurement. This increases the robustness of the measurement to variable magnitudes of inter-frame jitter.
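
    The phase-plane-fit idea can be sketched in a few lines (here in Python/NumPy rather than MATLAB): the phase of the cross-power spectrum of two frames is, for a pure translation, a plane whose slope gives the shift, so a least-squares plane fit over low frequencies recovers the translation with sub-pixel precision for small shifts. This is a simplified illustration, not the Jitter_Correct.m code, and it omits the dynamic sample-size adjustment.

      import numpy as np

      def measure_shift(ref, img, max_freq=0.05):
          """Estimate the (row, col) translation of img relative to ref by
          fitting a plane to the phase of the cross-power spectrum
          (valid for small shifts; only low frequencies are used)."""
          R = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
          fy = np.fft.fftfreq(ref.shape[0])[:, None]
          fx = np.fft.fftfreq(ref.shape[1])[None, :]
          keep = (np.abs(fy) < max_freq) & (np.abs(fx) < max_freq) & ((fy != 0) | (fx != 0))
          A = 2 * np.pi * np.column_stack([fy.repeat(ref.shape[1], 1)[keep],
                                           fx.repeat(ref.shape[0], 0)[keep]])
          shift, *_ = np.linalg.lstsq(A, np.angle(R)[keep], rcond=None)
          return shift    # (dy, dx) in pixels

      rng = np.random.default_rng(2)
      frame = rng.normal(size=(64, 64))
      jittered = np.roll(frame, (1, -2), axis=(0, 1))   # integer jitter for the demo
      print(measure_shift(frame, jittered))             # approx [ 1., -2.]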

  19. New algorithm for efficient pattern recall using a static threshold with the Steinbuch Lernmatrix

    NASA Astrophysics Data System (ADS)

    Juan Carbajal Hernández, José; Sánchez Fernández, Luis P.

    2011-03-01

    An associative memory is a binary relationship between inputs and outputs, stored in a matrix M. The fundamental purpose of an associative memory is to recover correct output patterns from input patterns, which can be altered by additive, subtractive or combined noise. The Steinbuch Lernmatrix, developed in 1961, was the first associative memory, and it is used as a pattern recognition classifier. However, a misclassification problem arises when crossbar saturation occurs. A new algorithm that corrects this misclassification in the Lernmatrix is proposed in this work. Results under crossbar saturation with fundamental patterns demonstrate better pattern-recall performance with the new algorithm, and experiments with real data show a more efficient classifier when the algorithm is introduced into the original Lernmatrix. Therefore, the thresholded Lernmatrix emerges as a suitable alternative classifier for the developing pattern processing field.
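
    A toy sketch of Lernmatrix storage and thresholded recall, under simplifying assumptions: binary patterns are stored as summed outer products, and recall fires every class whose activation reaches a static threshold rather than only the arg-max. The patterns and threshold are hypothetical, and the published algorithm's threshold rule is more elaborate than this.

      import numpy as np

      def train_lernmatrix(patterns):
          """Store binary (input, class) associations in the Lernmatrix M."""
          n_in, n_out = len(patterns[0][0]), len(patterns[0][1])
          M = np.zeros((n_out, n_in), dtype=int)
          for x, y in patterns:
              M += np.outer(y, x)
          return M

      def recall(M, x, threshold):
          """Thresholded recall: a class fires when its activation reaches
          the static threshold, instead of the classical maximum response."""
          activation = M @ x
          return (activation >= threshold).astype(int), activation

      # Hypothetical binary fundamental patterns (input vector, one-hot class).
      patterns = [(np.array([1, 0, 1, 0, 1]), np.array([1, 0])),
                  (np.array([0, 1, 0, 1, 1]), np.array([0, 1]))]
      M = train_lernmatrix(patterns)
      print(recall(M, np.array([1, 0, 1, 0, 0]), threshold=2))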

  20. Correcting Evaluation Bias of Relational Classifiers with Network Cross Validation

    DTIC Science & Technology

    2010-01-01

    These methods specifically exploit the statistical dependencies among instances in order to improve classification accuracy. However, ... the complex link structure and attribute dependencies in relational data violate the assumptions of many conventional statistical tests and make it ...

  1. Does the traditional snakebite severity score correctly classify envenomated patients?

    PubMed Central

    Kang, Seungho; Moon, Jeongmi; Chun, Byeongjo

    2016-01-01

    Objective This study aims to help set domestic guidelines for administration of antivenom to envenomated patients after snakebites. Methods This retrospective observational case series comprised 128 patients with snake envenomation. The patients were divided into two groups according to the need for additional antivenom after the initial treatment based on the traditional snakebite severity grading scale. One group successfully recovered after the initial treatment and did not need any additional antivenom (n=85) and the other needed an additional administration of antivenom (n=43). Results The group requiring additional administration of antivenom showed a higher local effect score and traditional snakebite severity grade at presentation, a shorter prothrombin time and activated partial thromboplastin time, a higher frequency of rhabdomyolysis and disseminated intravascular coagulopathy, and longer hospitalization than the group that did not need additional antivenom. The most common cause for additional administration was the progression of local symptoms. The independent factor that was associated with the need for additional antivenom was the local effect pain score (odds ratio, 2.477; 95% confidence interval, 1.309 to 4.689). The optimal cut-off value of the local effect pain score was 1.5 with 62.8% sensitivity and 71.8% specificity. Conclusion When treating patients who are envenomated by a snake, and when using the traditional snakebite severity scale, the local effect pain score should be taken into account. If the score is more than 2, additional antivenom should be considered and the patient should be frequently assessed. PMID:27752613

  2. Combining classifiers for HIV-1 drug resistance prediction.

    PubMed

    Srisawat, Anantaporn; Kijsirikul, Boonserm

    2008-01-01

    This paper applies and studies the behavior of three learning algorithms, i.e. the Support Vector Machine (SVM), the Radial Basis Function Network (RBF network), and k-Nearest Neighbor (k-NN), for predicting HIV-1 drug resistance from genotype data. In addition, a new algorithm for classifier combination is proposed. A comparison of the predictive performance of the three learning algorithms shows that SVM yields the highest average accuracy, the RBF network gives the highest sensitivity, and k-NN yields the best specificity. Finally, a comparison of the predictive performance of the composite classifier with the three learning algorithms demonstrates that the proposed composite classifier provides the highest average accuracy.

  3. Emergent behaviors of classifier systems

    SciTech Connect

    Forrest, S.; Miller, J.H.

    1989-01-01

    This paper discusses some examples of emergent behavior in classifier systems, describes some recently developed methods for studying them based on dynamical systems theory, and presents some initial results produced by the methodology. The goal of this work is to find techniques for noticing when interesting emergent behaviors of classifier systems emerge, to study how such behaviors might emerge over time, and make suggestions for designing classifier systems that exhibit preferred behaviors. 20 refs., 1 fig.

  4. Chlorophyll-a concentration estimation with three bio-optical algorithms: correction for the low concentration range for the Yiam Reservoir, Korea

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Bio-optical algorithms have been applied to monitor water quality in surface water systems. Empirical algorithms, such as Ritchie (2008), Gons (2008), and Gilerson (2010), have been applied to estimate the chlorophyll-a (chl-a) concentrations. However, the performance of each algorithm severely degr...

  5. Visual Classifier Training for Text Document Retrieval.

    PubMed

    Heimerl, F; Koch, S; Bosch, H; Ertl, T

    2012-12-01

    Performing exhaustive searches over a large number of text documents can be tedious, since it is very hard to formulate search queries or define filter criteria that capture an analyst's information need adequately. Classification through machine learning has the potential to improve search and filter tasks encompassing either complex or very specific information needs, individually. Unfortunately, analysts who are knowledgeable in their field are typically not machine learning specialists. Most classification methods, however, require a certain expertise regarding their parametrization to achieve good results. Supervised machine learning algorithms, in contrast, rely on labeled data, which can be provided by analysts. However, the effort for labeling can be very high, which shifts the problem from composing complex queries or defining accurate filters to another laborious task, in addition to the need for judging the trained classifier's quality. We therefore compare three approaches for interactive classifier training in a user study. All of the approaches are potential candidates for the integration into a larger retrieval system. They incorporate active learning to various degrees in order to reduce the labeling effort as well as to increase effectiveness. Two of them encompass interactive visualization for letting users explore the status of the classifier in context of the labeled documents, as well as for judging the quality of the classifier in iterative feedback loops. We see our work as a step towards introducing user controlled classification methods in addition to text search and filtering for increasing recall in analytics scenarios involving large corpora.
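
    The interactive training loop these approaches share can be sketched generically: a classifier is refit after each analyst label, and active learning picks the next document to label by uncertainty. The corpus, seed labels, and number of rounds below are synthetic, and this is a generic illustration rather than any of the three systems compared in the study.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.feature_extraction.text import TfidfVectorizer

      # Hypothetical corpus and a handful of analyst-provided seed labels.
      docs = ["relevant contract clause", "weather report", "contract dispute",
              "sports results", "liability clause", "holiday schedule"] * 20
      labels = np.array([1, 0, 1, 0, 1, 0] * 20)
      X = TfidfVectorizer().fit_transform(docs)

      labeled = list(range(4))                       # what the analyst has labeled so far
      pool = [i for i in range(len(docs)) if i not in labeled]

      for _ in range(5):                             # interactive labeling rounds
          clf = LogisticRegression().fit(X[labeled], labels[labeled])
          proba = clf.predict_proba(X[pool])[:, 1]
          query = pool[int(np.argmin(np.abs(proba - 0.5)))]   # most uncertain document
          # In the real system the analyst labels `query`; here we read the oracle.
          labeled.append(query)
          pool.remove(query)
      print("documents labeled after active learning:", len(labeled))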

  6. Bayesian classifiers applied to the Tennessee Eastman process.

    PubMed

    Dos Santos, Edimilson Batista; Ebecken, Nelson F F; Hruschka, Estevam R; Elkamel, Ali; Madhuranthakam, Chandra M R

    2014-03-01

    Fault diagnosis includes the main task of classification. Bayesian networks (BNs) present several advantages in the classification task, and previous works have suggested their use as classifiers. Because a classifier is often only one part of a larger decision process, this article proposes, for industrial process diagnosis, the use of a Bayesian method called dynamic Markov blanket classifier that has as its main goal the induction of accurate Bayesian classifiers having dependable probability estimates and revealing actual relationships among the most relevant variables. In addition, a new method, named variable ordering multiple offspring sampling capable of inducing a BN to be used as a classifier, is presented. The performance of these methods is assessed on the data of a benchmark problem known as the Tennessee Eastman process. The obtained results are compared with naive Bayes and tree augmented network classifiers, and confirm that both proposed algorithms can provide good classification accuracies as well as knowledge about relevant variables.

  7. A three-parameter model for classifying anurans into four genera based on advertisement calls.

    PubMed

    Gingras, Bruno; Fitch, William Tecumseh

    2013-01-01

    The vocalizations of anurans are innate in structure and may therefore contain indicators of phylogenetic history. Thus, advertisement calls of species which are more closely related phylogenetically are predicted to be more similar than those of distant species. This hypothesis was evaluated by comparing several widely used machine-learning algorithms. Recordings of advertisement calls from 142 species belonging to four genera were analyzed. A logistic regression model, using mean values for dominant frequency, coefficient of variation of root-mean square energy, and spectral flux, correctly classified advertisement calls with regard to genus with an accuracy above 70%. Similar accuracy rates were obtained using these parameters with a support vector machine model, a K-nearest neighbor algorithm, and a multivariate Gaussian distribution classifier, whereas a Gaussian mixture model performed slightly worse. In contrast, models based on mel-frequency cepstral coefficients did not fare as well. Comparable accuracy levels were obtained on out-of-sample recordings from 52 of the 142 original species. The results suggest that a combination of low-level acoustic attributes is sufficient to discriminate efficiently between the vocalizations of these four genera, thus supporting the initial premise and validating the use of high-throughput algorithms on animal vocalizations to evaluate phylogenetic hypotheses.
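
    The abstract's core model, a multinomial logistic regression on three acoustic parameters, is easy to sketch; the feature values below are synthetic stand-ins for the dominant frequency, RMS-energy coefficient of variation, and spectral flux of real calls, so the cross-validated score is illustrative only.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      # Hypothetical per-call features: mean dominant frequency (kHz),
      # coefficient of variation of RMS energy, and mean spectral flux.
      rng = np.random.default_rng(3)
      n_per_genus = 60
      X = np.vstack([rng.normal(loc=[2.0, 0.3, 0.10], scale=0.3, size=(n_per_genus, 3)),
                     rng.normal(loc=[3.5, 0.5, 0.25], scale=0.3, size=(n_per_genus, 3)),
                     rng.normal(loc=[1.2, 0.2, 0.05], scale=0.3, size=(n_per_genus, 3)),
                     rng.normal(loc=[4.5, 0.7, 0.40], scale=0.3, size=(n_per_genus, 3))])
      y = np.repeat(np.arange(4), n_per_genus)     # four genera

      # Multinomial logistic regression on the three parameters, scored by CV.
      model = LogisticRegression(max_iter=1000)
      print(cross_val_score(model, X, y, cv=5).mean())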

  8. Classifying bed inclination using pressure images.

    PubMed

    Baran Pouyan, M; Ostadabbas, S; Nourani, M; Pompeo, M

    2014-01-01

    Pressure ulcers are one of the most prevalent problems for bed-bound patients in hospitals and nursing homes. Pressure ulcers are painful for patients and costly for healthcare systems. Accurate in-bed posture analysis can significantly help in preventing pressure ulcers. Specifically, bed inclination (back angle) is a factor contributing to pressure ulcer development. In this paper, an efficient methodology is proposed to classify bed inclination. Our approach uses pressure values collected from a commercial pressure mat system. Then, by applying a number of image processing and machine learning techniques, the approximate bed inclination angle is estimated and classified. The proposed algorithm was tested on 15 subjects with various sizes and weights. The experimental results indicate that our method predicts bed inclination in three classes with 80.3% average accuracy.

  9. Sensitivity of Satellite-Based Skin Temperature to Different Surface Emissivity and NWP Reanalysis Sources Demonstrated Using a Single-Channel, Viewing-Angle-Corrected Retrieval Algorithm

    NASA Astrophysics Data System (ADS)

    Scarino, B. R.; Minnis, P.; Yost, C. R.; Chee, T.; Palikonda, R.

    2015-12-01

    Single-channel algorithms for satellite thermal-infrared- (TIR-) derived land and sea surface skin temperature (LST and SST) are advantageous in that they can be easily applied to a variety of satellite sensors. They can also accommodate decade-spanning instrument series, particularly for periods when split-window capabilities are not available. However, the benefit of one unified retrieval methodology for all sensors comes at the cost of critical sensitivity to surface emissivity (ɛs) and atmospheric transmittance estimation. It has been demonstrated that as little as 0.01 variance in ɛs can amount to more than a 0.5-K adjustment in retrieved LST values. Atmospheric transmittance requires calculations that employ vertical profiles of temperature and humidity from numerical weather prediction (NWP) models. Selection of a given NWP model can significantly affect LST and SST agreement relative to their respective validation sources. Thus, it is necessary to understand the accuracies of the retrievals for various NWP models to ensure the best LST/SST retrievals. The sensitivities of the single-channel retrievals to surface emittance and NWP profiles are investigated using NASA Langley historic land and ocean clear-sky skin temperature (Ts) values derived from high-resolution 11-μm TIR brightness temperature measured from geostationary satellites (GEOSat) and Advanced Very High Resolution Radiometers (AVHRR). It is shown that mean GEOSat-derived, anisotropy-corrected LST can vary by up to ±0.8 K depending on whether CERES or MODIS ɛs sources are used. Furthermore, the use of either NOAA Global Forecast System (GFS) or NASA Goddard Modern-Era Retrospective Analysis for Research and Applications (MERRA) for the radiative transfer model initial atmospheric state can account for more than 0.5-K variation in mean Ts. The results are compared to measurements from the Surface Radiation Budget Network (SURFRAD), an Atmospheric Radiation Measurement (ARM) Program ground

  10. The Effects of Observation of Learn Units during Reinforcement and Correction Conditions on the Rate of Learning Math Algorithms by Fifth Grade Students

    ERIC Educational Resources Information Center

    Neu, Jessica Adele

    2013-01-01

    I conducted two studies on the comparative effects of the observation of learn units during (a) reinforcement or (b) correction conditions on the acquisition of math objectives. The dependent variables were the within-session cumulative numbers of correct responses emitted during observational sessions. The independent variables were the…

  11. Pattern classifier for health monitoring of helicopter gearboxes

    NASA Technical Reports Server (NTRS)

    Chin, Hsinyung; Danai, Kourosh; Lewicki, David G.

    1993-01-01

    The application of a newly developed diagnostic method to a helicopter gearbox is demonstrated. This method is a pattern classifier which uses a multi-valued influence matrix (MVIM) as its diagnostic model. The method benefits from a fast learning algorithm, based on error feedback, that enables it to estimate gearbox health from a small set of measurement-fault data. The MVIM method can also assess the diagnosability of the system and variability of the fault signatures as the basis to improve fault signatures. This method was tested on vibration signals reflecting various faults in an OH-58A main rotor transmission gearbox. The vibration signals were then digitized and processed by a vibration signal analyzer to enhance and extract various features of the vibration data. The parameters obtained from this analyzer were utilized to train and test the performance of the MVIM method in both detection and diagnosis. The results indicate that the MVIM method provided excellent detection results when the full range of fault effects on the measurements was included in training, and it had a correct diagnostic rate of 95 percent when the faults were included in training.

  12. Pattern classifier for health monitoring of helicopter gearboxes

    NASA Astrophysics Data System (ADS)

    Chin, Hsinyung; Danai, Kourosh; Lewicki, David G.

    1993-04-01

    The application of a newly developed diagnostic method to a helicopter gearbox is demonstrated. This method is a pattern classifier which uses a multi-valued influence matrix (MVIM) as its diagnostic model. The method benefits from a fast learning algorithm, based on error feedback, that enables it to estimate gearbox health from a small set of measurement-fault data. The MVIM method can also assess the diagnosability of the system and variability of the fault signatures as the basis to improve fault signatures. This method was tested on vibration signals reflecting various faults in an OH-58A main rotor transmission gearbox. The vibration signals were then digitized and processed by a vibration signal analyzer to enhance and extract various features of the vibration data. The parameters obtained from this analyzer were utilized to train and test the performance of the MVIM method in both detection and diagnosis. The results indicate that the MVIM method provided excellent detection results when the full range of fault effects on the measurements was included in training, and it had a correct diagnostic rate of 95 percent when the faults were included in training.

  13. Classifying Chondrules Based on Cathodoluminesence

    NASA Astrophysics Data System (ADS)

    Cristarela, T. C.; Sears, D. W.

    2011-03-01

    Sears et al. (1991) proposed a scheme to classify chondrules based on cathodoluminesence color and electron microprobe analysis. This research evaluates that scheme and criticisms received from Grossman and Brearley (2005).

  14. Classifying Multi-year Land Use and Land Cover using Deep Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Seo, B.

    2015-12-01

    Cultivated ecosystems constitute a particularly frequent form of human land use. Long-term management of a cultivated ecosystem requires knowledge of the temporal change of land use and land cover (LULC) in the target system. Land use and land cover change (LUCC) in agricultural ecosystems is often rapid and occurs unexpectedly; thus, longitudinal LULC data are particularly needed to examine trends in the ecosystem functions and ecosystem services of the target system. Multi-temporal classification of LULC in complex heterogeneous landscapes remains a challenge. Agricultural landscapes are often made up of a mosaic of numerous LULC classes, so spatial heterogeneity is large; moreover, temporal and spatial variation within a LULC class is also large. Under such circumstances, standard classifiers fail to identify the LULC classes correctly due to the heterogeneity of the target LULC classes: because most standard classifiers search for a specific pattern of features for a class, they fail to detect classes with noisy and/or transformed feature data sets. Recently, deep learning algorithms have emerged in the machine learning community and shown superior performance on a variety of tasks, including image classification and object recognition. In this paper, we propose to use convolutional neural networks (CNN) to learn from multi-spectral data to classify agricultural LULC types. Based on multi-spectral satellite data, we attempted to classify agricultural LULC classes in the Soyang watershed, South Korea, for the three-year study period (2009-2011). The classification performance of support vector machine (SVM) and CNN classifiers was compared for different years. Preliminary results demonstrate that the proposed method can improve classification performance compared to the SVM classifier. The SVM classifier failed to identify classes when trained on one year to predict another year, whilst the CNN could reconstruct LULC maps of the catchment over the study …
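
    A small PyTorch sketch of the kind of CNN that could classify multi-spectral patches into LULC classes. The architecture, patch size, number of bands, and class count are illustrative assumptions, not the network used in the study.

      import torch
      import torch.nn as nn

      class PatchCNN(nn.Module):
          """Small CNN for classifying multi-spectral image patches into LULC
          classes; the architecture is illustrative, not the paper's network."""
          def __init__(self, n_bands=6, n_classes=8):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(n_bands, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
              self.classifier = nn.Linear(32 * 4 * 4, n_classes)   # for 16x16 patches

          def forward(self, x):
              x = self.features(x)
              return self.classifier(x.flatten(1))

      # Hypothetical batch: 16x16-pixel patches with 6 spectral bands.
      patches = torch.randn(8, 6, 16, 16)
      labels = torch.randint(0, 8, (8,))
      model = PatchCNN()
      loss = nn.CrossEntropyLoss()(model(patches), labels)
      loss.backward()
      print(float(loss))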

  15. Moving Away From Error-Related Potentials to Achieve Spelling Correction in P300 Spellers

    PubMed Central

    Mainsah, Boyla O.; Morton, Kenneth D.; Collins, Leslie M.; Sellers, Eric W.; Throckmorton, Chandra S.

    2016-01-01

    P300 spellers can provide a means of communication for individuals with severe neuromuscular limitations. However, its use as an effective communication tool is reliant on high P300 classification accuracies (>70%) to account for error revisions. Error-related potentials (ErrP), which are changes in EEG potentials when a person is aware of or perceives erroneous behavior or feedback, have been proposed as inputs to drive corrective mechanisms that veto erroneous actions by BCI systems. The goal of this study is to demonstrate that training an additional ErrP classifier for a P300 speller is not necessary, as we hypothesize that error information is encoded in the P300 classifier responses used for character selection. We perform offline simulations of P300 spelling to compare ErrP and non-ErrP based corrective algorithms. A simple dictionary correction based on string matching and word frequency significantly improved accuracy (35–185%), in contrast to an ErrP-based method that flagged, deleted and replaced erroneous characters (−47 – 0%). Providing additional information about the likelihood of characters to a dictionary-based correction further improves accuracy. Our Bayesian dictionary-based correction algorithm that utilizes P300 classifier confidences performed comparably (44–416%) to an oracle ErrP dictionary-based method that assumed perfect ErrP classification (43–433%). PMID:25438320
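
    The dictionary-based correction idea can be illustrated with a toy scorer: each candidate word is scored by its log prior (word frequency) plus the summed per-position log-likelihoods derived from the P300 classifier's confidences, and the best-scoring word replaces the raw spelling. The likelihoods, dictionary, and priors below are invented for illustration; the paper's Bayesian formulation is more detailed.

      import numpy as np

      def correct_word(char_log_likelihoods, dictionary, word_log_priors):
          """Pick the dictionary word maximizing
             log P(word) + sum_i log P(character i | P300 classifier scores),
          using classifier confidences instead of a separate ErrP classifier."""
          best, best_score = None, -np.inf
          for word in dictionary:
              if len(word) != len(char_log_likelihoods):
                  continue
              score = word_log_priors.get(word, np.log(1e-8))
              score += sum(pos.get(ch, np.log(1e-8))
                           for pos, ch in zip(char_log_likelihoods, word))
              if score > best_score:
                  best, best_score = word, score
          return best

      # The speller selected "HEKLO"; per-position log-likelihoods over a few
      # plausible characters (hypothetical classifier confidences).
      char_lls = [{"H": -0.1}, {"E": -0.2}, {"K": -0.9, "L": -1.0},
                  {"L": -0.3}, {"O": -0.2}]
      dictionary = ["HELLO", "HEKLO", "HELLS"]
      priors = {"HELLO": np.log(0.01), "HELLS": np.log(0.001)}
      print(correct_word(char_lls, dictionary, priors))   # -> "HELLO"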

  16. Adaptive classifier for steel strip surface defects

    NASA Astrophysics Data System (ADS)

    Jiang, Mingming; Li, Guangyao; Xie, Li; Xiao, Mang; Yi, Li

    2017-01-01

    Surface defect detection systems have been receiving increased attention for their precision, speed, and low cost. One of the biggest challenges is reacting to accuracy deterioration over time caused by aging equipment and changing processes. These variables make only a tiny change to the real-world model but have a big impact on the classification result. In this paper, we propose a new adaptive classifier with a Bayes kernel (BYEC) that updates the model with small samples to adapt to accuracy deterioration. Firstly, abundant features are introduced to capture detailed information about the defects. Secondly, we construct a series of SVMs on random subspaces of the features. Then, a Bayes classifier is trained as an evolutionary kernel to fuse the results from the base SVMs. Finally, we propose a method to update the Bayes evolutionary kernel. The proposed algorithm is experimentally compared with other algorithms, and the results demonstrate that it can be updated with small samples and fits the changed model well. Robustness, a low requirement for samples, and adaptivity are demonstrated in the experiments.

  17. IAEA safeguards and classified materials

    SciTech Connect

    Pilat, J.F.; Eccleston, G.W.; Fearey, B.L.; Nicholas, N.J.; Tape, J.W.; Kratzer, M.

    1997-11-01

    The international community in the post-Cold War period has suggested that the International Atomic Energy Agency (IAEA) utilize its expertise in support of the arms control and disarmament process in unprecedented ways. The pledges of the US and Russian presidents to place excess defense materials, some of which are classified, under some type of international inspections raises the prospect of using IAEA safeguards approaches for monitoring classified materials. A traditional safeguards approach, based on nuclear material accountancy, would seem unavoidably to reveal classified information. However, further analysis of the IAEA's safeguards approaches is warranted in order to understand fully the scope and nature of any problems. The issues are complex and difficult, and it is expected that common technical understandings will be essential for their resolution. Accordingly, this paper examines and compares traditional safeguards item accounting of fuel at a nuclear power station (especially spent fuel) with the challenges presented by inspections of classified materials. This analysis is intended to delineate more clearly the problems as well as reveal possible approaches, techniques, and technologies that could allow the adaptation of safeguards to the unprecedented task of inspecting classified materials. It is also hoped that a discussion of these issues can advance ongoing political-technical debates on international inspections of excess classified materials.

  18. Innovative use of DSP technology in space: FORTE event classifier

    SciTech Connect

    Briles, S.; Moore, K. Jones, R.; Klingner, P.; Neagley, D.; Caffrey, M.; Henneke, K.; Spurgen, W.; Blain, P.

    1994-08-01

    The Fast On-Orbit Recording of Transient Events (FORTE) small satellite will field a digital signal processor (DSP) experiment for the purpose of classifying radio-frequency (rf) transient signals propagating through the earth's ionosphere. Designated the Event Classifier experiment, this DSP experiment uses a single Texas Instruments' SMJ320C30 DSP to execute preprocessing, feature extraction, and classification algorithms on down-converted, digitized, and buffered rf transient signals in the frequency range of 30 to 300 MHz. A radiation-hardened microcontroller monitors DSP abnormalities and supervises spacecraft command communications. On-orbit evaluation of multiple algorithms is supported by the Event Classifier architecture. Ground-based commands determine the subset and sequence of algorithms executed to classify a captured time series. Conventional neural network classification algorithms will be some of the classification techniques implemented on-board FORTE while in a low-earth orbit. Results of all experiments, after being stored in DSP flash memory, will be transmitted through the spacecraft to ground stations. The Event Classifier is a versatile and fault-tolerant experiment that is an important new space-based application of DSP technology.

  19. An ensemble of dissimilarity based classifiers for Mackerel gender determination

    NASA Astrophysics Data System (ADS)

    Blanco, A.; Rodriguez, R.; Martinez-Maranon, I.

    2014-03-01

    Mackerel is an undervalued fish captured by European fishing vessels. One way to add value to this species is to classify it according to its sex. Colour measurements were performed on the extracted gonads of Mackerel females and males (fresh and defrozen) to find differences between the sexes. Several linear and non-linear classifiers such as Support Vector Machines (SVM), k Nearest Neighbors (k-NN) or Diagonal Linear Discriminant Analysis (DLDA) can be applied to this problem. However, they are usually based on Euclidean distances that fail to reflect the sample proximities accurately. Classifiers based on non-Euclidean dissimilarities misclassify different sets of patterns. We combine different kinds of dissimilarity-based classifiers. Diversity is induced by considering a set of complementary dissimilarities for each model. The experimental results suggest that our algorithm helps to improve classifiers based on a single dissimilarity.

  20. Automatic speech recognition using a predictive echo state network classifier.

    PubMed

    Skowronski, Mark D; Harris, John G

    2007-04-01

    We have combined an echo state network (ESN) with a competitive state machine framework to create a classification engine called the predictive ESN classifier. We derive the expressions for training the predictive ESN classifier and show that the model was significantly more noise robust than a hidden Markov model in noisy speech classification experiments, by 8+/-1 dB of signal-to-noise ratio. The simple training algorithm and noise robustness of the predictive ESN classifier make it an attractive classification engine for automatic speech recognition.

  1. Bayes classifiers for imbalanced traffic accidents datasets.

    PubMed

    Mujalli, Randa Oqab; López, Griselda; Garach, Laura

    2016-03-01

    Traffic accident data sets are usually imbalanced: the number of instances classified under the killed or severe injuries class (minority) is much lower than the number classified under the slight injuries class (majority). This poses a challenging problem for classification algorithms and may produce a model that covers the slight injuries instances well while frequently misclassifying the killed or severe injuries instances. Based on traffic accident data collected on urban and suburban roads in Jordan over three years (2009-2011), three data balancing techniques were used: under-sampling, which removes instances of the majority class; oversampling, which creates new instances of the minority class; and a mixed technique that combines both. In addition, different Bayes classifiers were compared on the imbalanced and balanced data sets: Averaged One-Dependence Estimators, Weightily Averaged One-Dependence Estimators, and Bayesian networks, in order to identify factors that affect the severity of an accident. The results indicated that using the balanced data sets, especially those created using oversampling techniques, with Bayesian networks improved the classification of a traffic accident according to its severity and reduced the misclassification of killed and severe injuries instances. The following variables were found to contribute to the occurrence of a fatality or a severe injury in a traffic accident: number of vehicles involved, accident pattern, number of directions, accident type, lighting, surface condition, and speed limit. To the knowledge of the authors, this work is the first to analyze historical records of traffic accidents occurring in Jordan and the first to apply balancing techniques to analyze the injury severity of traffic accidents.
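
    Random oversampling of the minority class, one of the balancing strategies mentioned above, can be sketched as follows; the data and class labels are synthetic placeholders, and this is not the paper's exact balancing procedure.

```python
# Illustrative random oversampling of the minority class.
import numpy as np

def random_oversample(X, y, random_state=0):
    """Duplicate minority-class rows at random until both classes have equal counts."""
    rng = np.random.default_rng(random_state)
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[np.argmin(counts)]
    deficit = counts.max() - counts.min()
    minority_idx = np.flatnonzero(y == minority)
    extra = rng.choice(minority_idx, size=deficit, replace=True)
    X_bal = np.vstack([X, X[extra]])
    y_bal = np.concatenate([y, y[extra]])
    return X_bal, y_bal

# Example: 95 slight-injury vs 5 severe-injury records become 95 vs 95.
X = np.random.rand(100, 4)
y = np.array([0] * 95 + [1] * 5)
X_bal, y_bal = random_oversample(X, y)
print(np.bincount(y_bal))  # [95 95]
```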

  2. Building classifiers using Bayesian networks

    SciTech Connect

    Friedman, N.; Goldszmidt, M.

    1996-12-31

    Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state of the art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we examine and evaluate approaches for inducing classifiers from data, based on recent results in the theory of learning Bayesian networks. Bayesian networks are factored representations of probability distributions that generalize the naive Bayes classifier and explicitly represent statements about independence. Among these approaches we single out a method we call Tree Augmented Naive Bayes (TAN), which outperforms naive Bayes, yet at the same time maintains the computational simplicity (no search involved) and robustness which are characteristic of naive Bayes. We experimentally tested these approaches using benchmark problems from the U. C. Irvine repository, and compared them against C4.5, naive Bayes, and wrapper-based feature selection methods.

  3. Prediction of phosphorylation sites based on the integration of multiple classifiers.

    PubMed

    Han, R Z; Wang, D; Chen, Y H; Dong, L K; Fan, Y L

    2017-02-23

    Phosphorylation is an important post-translational modification of proteins and is essential for many biological activities. Phosphorylation and dephosphorylation regulate signal transduction, gene expression, and cell cycle control in many cellular processes. Rapidly and correctly identifying phosphorylation sites in a new protein is extremely important for both basic research and drug discovery. Moreover, abnormal phosphorylation can in some cases serve as a key medical feature related to a disease. The use of computational methods can improve the accuracy of detecting phosphorylation sites, provide predictive guidance for prevention and/or the best course of treatment of certain diseases, and effectively reduce the cost of biological experiments. In this study, a flexible neural tree (FNT), particle swarm optimization, and support vector machine algorithms were used to classify data, with secondary encoding according to the physical and chemical properties of amino acids used for feature extraction. Comparison of the classification results obtained from the three classifiers showed that the FNT performed best. The three classifiers were then integrated by majority vote to obtain the final results. The integrated model showed improved sensitivity (87.41%), specificity (87.60%), and accuracy (87.50%).
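
    The majority-vote integration step can be illustrated with a generic sketch; here the paper's FNT and PSO components are replaced by off-the-shelf scikit-learn models purely for demonstration, and the data are synthetic.

```python
# Minimal sketch of combining three trained classifiers by majority vote.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import VotingClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=20, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

vote = VotingClassifier(
    estimators=[("svm", SVC()),
                ("mlp", MLPClassifier(max_iter=1000)),
                ("tree", DecisionTreeClassifier())],
    voting="hard")  # each model gets one vote; the majority label wins
vote.fit(X_tr, y_tr)
print("majority-vote accuracy:", vote.score(X_te, y_te))
```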

  4. Learning to classify species with barcodes

    PubMed Central

    Bertolazzi, Paola; Felici, Giovanni; Weitschek, Emanuel

    2009-01-01

    Background According to many field experts, specimen classification based on morphological keys needs to be supported by automated techniques based on the analysis of DNA fragments. The most successful results in this area are those obtained from a particular fragment of mitochondrial DNA, the gene cytochrome c oxidase I (COI) (the "barcode"). Since 2004 the Consortium for the Barcode of Life (CBOL) has promoted the collection of barcode specimens and the development of methods to analyze the barcode for several tasks, among which is the identification of rules to correctly classify an individual into its species by reading its barcode. Results We adopt a Logic Mining method based on two optimization models and present the results obtained on two datasets where a number of COI fragments are used to describe the individuals that belong to different species. The proposed method exhibits high correct recognition rates on a training-testing split of the available data while using a small proportion of the available information (e.g., correct recognition of approx. 97% when only 20 of the 648 available sites are used). The method is able to provide compact formulas on the values (A, C, G, T) at the selected sites that synthesize the characteristics of each species, which is relevant information for taxonomists. Conclusion We have presented a Logic Mining technique designed to analyze barcode data and to provide detailed output of interest to taxonomists and the barcode community represented in the CBOL Consortium. The method has proven to be effective, efficient and precise. PMID:19900303

  5. Scoring and Classifying Examinees Using Measurement Decision Theory

    ERIC Educational Resources Information Center

    Rudner, Lawrence M.

    2009-01-01

    This paper describes and evaluates the use of measurement decision theory (MDT) to classify examinees based on their item response patterns. The model has a simple framework that starts with the conditional probabilities of examinees in each category or mastery state responding correctly to each item. The presented evaluation investigates: (1) the…

  6. Classifying Cereal Data (Earlier Methods)

    Cancer.gov

    The DSQ includes questions about cereal intake and allows respondents up to two responses on which cereals they consume. We classified each cereal reported first by hot or cold, and then along four dimensions: density of added sugars, whole grains, fiber, and calcium.

  7. INSENS classification algorithm report

    SciTech Connect

    Hernandez, J.E.; Frerking, C.J.; Myers, D.W.

    1993-07-28

    This report describes a new algorithm developed for the Immigration and Naturalization Service (INS) in support of the INSENS project for classifying vehicles and pedestrians using seismic data. This algorithm is less sensitive to nuisance alarms due to environmental events than the previous algorithm. Furthermore, the algorithm is simple enough that it can be implemented in the 8-bit microprocessor used in the INSENS system.

  8. 76 FR 34761 - Classified National Security Information

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-14

    ... Classified National Security Information AGENCY: Marine Mammal Commission. ACTION: Notice. SUMMARY: This... information, as directed by Information Security Oversight Office regulations. FOR FURTHER INFORMATION CONTACT..., ``Classified National Security Information,'' and 32 CFR part 2001, ``Classified National Security...

  9. Comparing Bayesian neural network algorithms for classifying segmented outdoor images.

    PubMed

    Vivarelli, F; Williams, C K

    2001-05-01

    In this paper we investigate the Bayesian training of neural networks for region labelling of segmented outdoor scenes; the data are drawn from the Sowerby Image Database of British Aerospace. Neural networks are trained with two Bayesian methods, (i) the evidence framework of MacKay (1992a,b) and (ii) a Markov Chain Monte Carlo method due to Neal (1996). The performance of the two methods is compared by evaluating the empirical learning curves of neural networks trained with each method. We also investigate the use of the Automatic Relevance Determination method for input feature selection.

  10. RECIPES FOR WRITING ALGORITHMS FOR ATMOSPHERIC CORRECTIONS AND TEMPERATURE/EMISSIVITY SEPARATIONS IN THE THERMAL REGIME FOR A MULTI-SPECTRAL SENSOR

    SciTech Connect

    C. BOREL; W. CLODIUS

    2001-04-01

    This paper discusses the algorithms created for the Multi-spectral Thermal Imager (MTI) to retrieve temperatures and emissivities. Recipes for creating the physics-based water temperature retrieval and the emissivity of water surfaces are described. A simple radiative transfer model for multi-spectral sensors is developed. A method to create look-up tables and the criterion for finding the optimum water temperature are covered. Practical aspects such as the conversion from band-averaged radiances to brightness temperatures and the effects of variations in the spectral response on the atmospheric transmission are discussed. A recipe for a temperature/emissivity separation algorithm when water surfaces are present is given. Results of skin water temperature retrievals are compared with in-situ measurements of the bulk water temperature at two locations.

  11. Classifying seismic waveforms from scratch: a case study in the alpine environment

    NASA Astrophysics Data System (ADS)

    Hammer, C.; Ohrnberger, M.; Fäh, D.

    2013-01-01

    Nowadays, an increasing amount of seismic data is collected by daily observatory routines. The basic step in successfully analyzing those data is the correct detection of various event types. However, visual scanning is a time-consuming task. Applying standard detection techniques such as the STA/LTA trigger still requires manual control for classification. Here, we present a useful alternative. The incoming data stream is scanned automatically for events of interest. A stochastic classifier, called a hidden Markov model, is learned for each class of interest, enabling the recognition of highly variable waveforms. In contrast to other automatic techniques such as neural networks or support vector machines, the algorithm allows classification to start from scratch as soon as interesting events are identified. Neither the tedious process of collecting training samples nor a time-consuming configuration of the classifier is required. An approach originally introduced for the volcanic task force action allows classifier properties to be learned from a single waveform example and some hours of background recording. Besides reducing the required workload, this also enables very rare events to be detected. Especially the latter feature provides a milestone for the use of seismic devices in alpine warning systems. Furthermore, the system offers the opportunity to flag new signal classes that have not been defined before. We demonstrate the application of the classification system using a data set from the Swiss Seismological Survey, achieving very high recognition rates. In detail, we document all refinements of the classifier, providing a step-by-step guide for the fast set-up of a well-working classification system.

  12. Classifying auroras using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Rydesater, Peter; Brandstrom, Urban; Steen, Ake; Gustavsson, Bjorn

    1999-03-01

    In the Auroral Large Imaging System (ALIS) there is a need for stable methods for the analysis and classification of auroral images and of images containing, for example, mother-of-pearl clouds. This part of ALIS is called Selective Imaging Techniques (SIT) and is intended to sort out images of scientific interest. It is also used to find out what kinds of auroral phenomena appear in the images and where. We discuss the main functionality of the SIT unit, but this work concentrates mainly on how to find auroral arcs and how they are placed in images. Special care has been taken to make the algorithm robust, since it is going to be implemented in a SIT unit that will work automatically, often unsupervised, and to some extent control the data taking of ALIS. The method for finding auroral arcs is based on a local operator that detects intensity differences. This yields arc orientation values as a preprocessing step, which is fed to a neural network classifier. We show some preliminary results and possibilities for using and improving this algorithm in the future SIT unit.

  13. Application of SVM classifier in thermographic image classification for early detection of breast cancer

    NASA Astrophysics Data System (ADS)

    Oleszkiewicz, Witold; Cichosz, Paweł; Jagodziński, Dariusz; Matysiewicz, Mateusz; Neumann, Łukasz; Nowak, Robert M.; Okuniewski, Rafał

    2016-09-01

    This article presents the application of machine learning algorithms for the early detection of breast cancer on the basis of thermographic images. A supervised learning model, the support vector machine (SVM), and the Sequential Minimal Optimization (SMO) algorithm for training the SVM classifier were implemented. The SVM classifier was included in a client-server application which enables the creation of a training set of examinations and the application of classifiers (including SVM) for the diagnosis and early detection of breast cancer. The sensitivity and specificity of the SVM classifier were calculated based on the thermographic images from studies. Furthermore, a heuristic method for tuning the SVM's parameters was proposed.

  14. Robust Algorithm for Systematic Classification of Malaria Late Treatment Failures as Recrudescence or Reinfection Using Microsatellite Genotyping.

    PubMed

    Plucinski, Mateusz M; Morton, Lindsay; Bushman, Mary; Dimbu, Pedro Rafael; Udhayakumar, Venkatachalam

    2015-10-01

    Routine therapeutic efficacy monitoring to measure the response to antimalarial treatment is a cornerstone of malaria control. To correctly measure drug efficacy, therapeutic efficacy studies require genotyping parasites from late treatment failures to differentiate between recrudescent infections and reinfections. However, there is a lack of statistical methods to systematically classify late treatment failures from genotyping data. A Bayesian algorithm was developed to estimate the posterior probability of late treatment failure being the result of a recrudescent infection from microsatellite genotyping data. The algorithm was implemented using a Monte Carlo Markov chain approach and was used to classify late treatment failures using published microsatellite data from therapeutic efficacy studies in Ethiopia and Angola. The algorithm classified 85% of the Ethiopian and 95% of the Angolan late treatment failures as either likely reinfection or likely recrudescence, defined as a posterior probability of recrudescence of <0.1 or >0.9, respectively. The adjusted efficacies calculated using the new algorithm differed from efficacies estimated using commonly used methods for differentiating recrudescence from reinfection. In a high-transmission setting such as Angola, as few as 15 samples needed to be genotyped in order to have enough power to correctly classify treatment failures. Analysis of microsatellite genotyping data for differentiating between recrudescence and reinfection benefits from an approach that both systematically classifies late treatment failures and estimates the uncertainty of these classifications. Researchers analyzing genotyping data from antimalarial therapeutic efficacy monitoring are urged to publish their raw genetic data and to estimate the uncertainty around their classification.

  15. Classifier-Guided Sampling for Complex Energy System Optimization

    SciTech Connect

    Backlund, Peter B.; Eddy, John P.

    2015-09-01

    This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS are developed and tested on a set of benchmark problems. As a domain-specific case study, CGS is used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
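
    The classifier-guided filtering idea can be illustrated with a simple loop in which a Bayes classifier, trained on previously evaluated designs, screens candidates before the expensive objective is called. This is a hedged sketch under assumed settings (binary design variables, naive Bayes instead of a Bayesian network, an arbitrary toy objective), not the Sandia CGS implementation.

```python
# Illustrative classifier-guided sampling loop over binary design variables.
import numpy as np
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(0)
n_vars = 12

def objective(x):                      # stand-in for an expensive simulation
    return -np.sum(x)                  # minimize; best design is all ones

evaluated_X, evaluated_f = [], []
for x in rng.integers(0, 2, size=(20, n_vars)):   # initial random designs
    evaluated_X.append(x); evaluated_f.append(objective(x))

for generation in range(10):
    X = np.array(evaluated_X); f = np.array(evaluated_f)
    labels = (f <= np.median(f)).astype(int)       # 1 = promising half so far
    clf = BernoulliNB().fit(X, labels)
    candidates = rng.integers(0, 2, size=(200, n_vars))
    p_promising = clf.predict_proba(candidates)[:, 1]
    for x in candidates[np.argsort(-p_promising)[:10]]:  # evaluate only the top 10
        evaluated_X.append(x); evaluated_f.append(objective(x))

print("best objective found:", min(evaluated_f))
```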

  16. Atmospheric Correction of Ocean Color Imagery: Test of the Spectral Optimization Algorithm with the Sea-Viewing Wide Field-of-View Sensor.

    PubMed

    Chomko, R M; Gordon, H R

    2001-06-20

    We implemented the spectral optimization algorithm [SOA; Appl. Opt. 37, 5560 (1998)] in an image-processing environment and tested it with Sea-viewing Wide Field-of-View Sensor (SeaWiFS) imagery from the Middle Atlantic Bight and the Sargasso Sea. We compared the SOA and the standard SeaWiFS algorithm on two days that had significantly different atmospheric turbidities but, because of the location and time of the year, nearly the same water properties. The SOA-derived pigment concentration showed excellent continuity over the two days, with the relative difference in pigments exceeding 10% only in regions that are characteristic of high advection. The continuity in the derived water-leaving radiances at 443 and 555 nm was also within ~10%. There was no obvious correlation between the relative differences in pigments and the aerosol concentration. In contrast, standard processing showed poor continuity in derived pigments over the two days, with the relative differences correlating strongly with atmospheric turbidity. SOA-derived atmospheric parameters suggested that the retrieved ocean and atmospheric reflectances were decoupled on the more turbid day. On the clearer day, for which the aerosol concentration was so low that relatively large changes in aerosol properties resulted in only small changes in aerosol reflectance, water patterns were evident in the aerosol properties. This result implies that SOA-derived atmospheric parameters cannot be accurate in extremely clear atmospheres.

  17. Steganalysis in high dimensions: fusing classifiers built on random subspaces

    NASA Astrophysics Data System (ADS)

    Kodovský, Jan; Fridrich, Jessica

    2011-02-01

    By working with high-dimensional representations of covers, modern steganographic methods are capable of preserving a large number of complex dependencies among individual cover elements and thus avoid detection using current best steganalyzers. Inevitably, steganalysis needs to start using high-dimensional feature sets as well. This brings two key problems - construction of good high-dimensional features and machine learning that scales well with respect to dimensionality. Depending on the classifier, high dimensionality may lead to problems with the lack of training data, infeasibly high complexity of training, degradation of generalization abilities, lack of robustness to cover source, and saturation of performance below its potential. To address these problems collectively known as the curse of dimensionality, we propose ensemble classifiers as an alternative to the much more complex support vector machines. Based on the character of the media being analyzed, the steganalyst first puts together a high-dimensional set of diverse "prefeatures" selected to capture dependencies among individual cover elements. Then, a family of weak classifiers is built on random subspaces of the prefeature space. The final classifier is constructed by fusing the decisions of individual classifiers. The advantage of this approach is its universality, low complexity, simplicity, and improved performance when compared to classifiers trained on the entire prefeature set. Experiments with the steganographic algorithms nsF5 and HUGO demonstrate the usefulness of this approach over current state of the art.
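
    The random-subspace ensemble described above can be sketched with off-the-shelf tools; the snippet below is a hedged illustration on synthetic data, with a logistic-regression base learner standing in for the authors' weak classifiers and steganalysis prefeatures.

```python
# Illustrative random-subspace ensemble with fused decisions.
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# High-dimensional "prefeatures" for cover vs. stego images (synthetic stand-in).
X, y = make_classification(n_samples=1000, n_features=500, n_informative=50,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = BaggingClassifier(
    estimator=LogisticRegression(max_iter=2000),
    n_estimators=50,
    max_features=0.1,        # each weak learner sees a random 10% of the features
    bootstrap=False,         # random subspaces, not bootstrap samples
    bootstrap_features=False,
    random_state=0)
ensemble.fit(X_tr, y_tr)
print("fused accuracy:", ensemble.score(X_te, y_te))
```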

  18. Classifier dependent feature preprocessing methods

    NASA Astrophysics Data System (ADS)

    Rodriguez, Benjamin M., II; Peterson, Gilbert L.

    2008-04-01

    In mobile applications, computational complexity is an issue that limits sophisticated algorithms from being implemented on these devices. This paper provides an initial solution to applying pattern recognition systems on mobile devices by combining existing preprocessing algorithms for recognition. In pattern recognition systems, it is essential to properly apply feature preprocessing tools prior to training classification models in an attempt to reduce computational complexity and improve the overall classification accuracy. The feature preprocessing tools extended for the mobile environment are feature ranking, feature extraction, data preparation and outlier removal. Most desktop systems today are capable of running a majority of the available classification algorithms without concern for processing time, whereas the same is not true on mobile platforms. As an application of pattern recognition for mobile devices, the recognition system targets the problem of steganalysis, determining if an image contains hidden information. The measure of performance shows that feature preprocessing increases the overall steganalysis classification accuracy by an average of 22%. The methods in this paper are tested on a workstation and a Nokia 6620 (Symbian operating system) camera phone with similar results.

  19. Mining Multi-label Concept-Drifting Data Streams Using Dynamic Classifier Ensemble

    NASA Astrophysics Data System (ADS)

    Qu, Wei; Zhang, Yang; Zhu, Junping; Qiu, Qiang

    The problem of mining single-label data streams has been extensively studied in recent years. However, not enough attention has been paid to the problem of mining multi-label data streams. In this paper, we propose an improved binary relevance method to take advantage of dependence information among class labels, and propose a dynamic classifier ensemble approach for classifying multi-label concept-drifting data streams. The weighted majority voting strategy is used in our classification algorithm. Our empirical study on both synthetic data set and real-life data set shows that the proposed dynamic classifier ensemble with improved binary relevance approach outperforms dynamic classifier ensemble with binary relevance algorithm, and static classifier ensemble with binary relevance algorithm.

  20. Classifying sex biased congenital anomalies

    SciTech Connect

    Lubinsky, M.S.

    1997-03-31

    The reasons for sex biases in congenital anomalies that arise before structural or hormonal dimorphisms are established have long been unclear. A review of such disorders shows that patterning and tissue anomalies are female biased, while structural findings are more common in males. This suggests different gender-dependent susceptibilities to developmental disturbances, with female vulnerabilities focused on early blastogenesis/determination, while males are more likely to be affected during later organogenesis/morphogenesis. A dual origin for some anomalies explains paradoxical reductions of sex biases with greater severity (i.e., multiple rather than single malformations), presumably as more severe events increase the involvement of an otherwise minor process with biases opposite to those of the primary mechanism. The cause of these sex differences is unknown, but early dimorphisms, such as differences in growth or presence of H-Y antigen, may be responsible. This model provides a useful rationale for understanding and classifying sex-biased congenital anomalies. 42 refs., 7 tabs.

  1. Object localization based on smoothing preprocessing and cascade classifier

    NASA Astrophysics Data System (ADS)

    Zhang, Xingfu; Liu, Lei; Zhao, Feng

    2017-01-01

    An improved algorithm for image localization is proposed in this paper. First, the image is smoothed and part of the noise is removed. Then a cascade classifier is used to train a template. Finally, the template is used to detect related images. The advantages of the algorithm are that it is robust to noise, insensitive to changes in image scale, and computationally fast. In this paper, a real image of the bottom of a truck is chosen as the experimental object. Images of normal components and faulty components are all included in the image sample. Experimental results show that the localization accuracy is more than 90 percent when the grade exceeds 40. We can therefore conclude that the algorithm proposed in this paper can be applied to practical image localization projects.
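
    Applying a trained cascade classifier after smoothing preprocessing can be sketched with OpenCV as below; the cascade file name, image path and detection parameters are hypothetical placeholders, and training the cascade itself (e.g., with opencv_traincascade) is done offline.

```python
# Minimal sketch: smoothing preprocessing followed by cascade-classifier detection.
import cv2

cascade = cv2.CascadeClassifier("component_cascade.xml")   # hypothetical trained model
image = cv2.imread("truck_bottom.png")                      # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)                    # smoothing preprocessing

# detectMultiScale slides the cascade over the image at multiple scales and
# returns bounding boxes (x, y, w, h) for detected objects.
boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in boxes:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.png", image)
```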

  2. Parallel processing implementations of a contextual classifier for multispectral remote sensing data

    NASA Technical Reports Server (NTRS)

    Siegel, H. J.; Swain, P. H.; Smith, B. W.

    1980-01-01

    Contextual classifiers are being developed as a method to exploit the spatial/spectral context of a pixel to achieve accurate classification. Classification algorithms such as the contextual classifier typically require large amounts of computation time. One way to reduce the execution time of these tasks is through the use of parallelism. The applicability of the CDC flexible processor system and of a proposed multimicroprocessor system (PASM) for implementing contextual classifiers is examined.

  3. Adaptive statistical pattern classifiers for remotely sensed data

    NASA Technical Reports Server (NTRS)

    Gonzalez, R. C.; Pace, M. O.; Raulston, H. S.

    1975-01-01

    A technique for the adaptive estimation of nonstationary statistics necessary for Bayesian classification is developed. The basic approach to the adaptive estimation procedure consists of two steps: (1) an optimal stochastic approximation of the parameters of interest and (2) a projection of the parameters in time or position. A divergence criterion is developed to monitor algorithm performance. Comparative results of adaptive and nonadaptive classifier tests are presented for simulated four dimensional spectral scan data.

  4. Algorithms and Algorithmic Languages.

    ERIC Educational Resources Information Center

    Veselov, V. M.; Koprov, V. M.

    This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…

  5. Multiple Classifier System for Remote Sensing Image Classification: A Review

    PubMed Central

    Du, Peijun; Xia, Junshi; Zhang, Wei; Tan, Kun; Liu, Yi; Liu, Sicong

    2012-01-01

    Over the last two decades, multiple classifier systems (MCS) or classifier ensembles have shown great potential to improve the accuracy and reliability of remote sensing image classification. Although there is a large body of literature covering MCS approaches, there is a lack of a comprehensive review that presents an overall architecture of the basic principles and trends behind the design of remote sensing classifier ensembles. Therefore, in order to give a reference point for MCS approaches, this paper attempts to explicitly review the remote sensing implementations of MCS and proposes some modified approaches. The effectiveness of existing and improved algorithms is analyzed and evaluated on multi-source remotely sensed images, including a high spatial resolution image (QuickBird), a hyperspectral image (OMISII) and a multi-spectral image (Landsat ETM+). Experimental results demonstrate that MCS can effectively improve the accuracy and stability of remote sensing image classification, and diversity measures play an active role in the combination of multiple classifiers. Furthermore, this survey provides a roadmap to guide future research, algorithm enhancement and knowledge accumulation of MCS in the remote sensing community. PMID:22666057

  6. Multiple classifier system for remote sensing image classification: a review.

    PubMed

    Du, Peijun; Xia, Junshi; Zhang, Wei; Tan, Kun; Liu, Yi; Liu, Sicong

    2012-01-01

    Over the last two decades, multiple classifier systems (MCS) or classifier ensembles have shown great potential to improve the accuracy and reliability of remote sensing image classification. Although there is a large body of literature covering MCS approaches, there is a lack of a comprehensive review that presents an overall architecture of the basic principles and trends behind the design of remote sensing classifier ensembles. Therefore, in order to give a reference point for MCS approaches, this paper attempts to explicitly review the remote sensing implementations of MCS and proposes some modified approaches. The effectiveness of existing and improved algorithms is analyzed and evaluated on multi-source remotely sensed images, including a high spatial resolution image (QuickBird), a hyperspectral image (OMISII) and a multi-spectral image (Landsat ETM+). Experimental results demonstrate that MCS can effectively improve the accuracy and stability of remote sensing image classification, and diversity measures play an active role in the combination of multiple classifiers. Furthermore, this survey provides a roadmap to guide future research, algorithm enhancement and knowledge accumulation of MCS in the remote sensing community.

  7. Comparison of artificial intelligence classifiers for SIP attack data

    NASA Astrophysics Data System (ADS)

    Safarik, Jakub; Slachta, Jiri

    2016-05-01

    Honeypot applications are a source of valuable data about attacks on the network. We run several SIP honeypots in various computer networks, which are separated geographically and logically. Each honeypot runs on a public IP address and uses standard SIP PBX ports. All information gathered by the honeypots is periodically sent to a centralized server, which classifies all attack data with a neural network algorithm. The paper describes optimizations of a neural network classifier that lower the classification error. The article contains a comparison of two neural network algorithms used for the classification of validation data. The first is the original implementation of the neural network described in recent work; the second neural network uses further optimizations such as input normalization and a cross-entropy cost function. We also use other implementations of neural networks and machine learning classification algorithms. The comparison tests their capabilities on validation data to find the optimal classifier. The results show promise for the further development of an accurate SIP attack classification engine.

  8. GS-TEC: the Gaia spectrophotometry transient events classifier

    NASA Astrophysics Data System (ADS)

    Blagorodnova, Nadejda; Koposov, Sergey E.; Wyrzykowski, Łukasz; Irwin, Mike; Walton, Nicholas A.

    2014-07-01

    We present an algorithm for classifying the nearby transient objects detected by the Gaia satellite. The algorithm will use the low-resolution spectra from the blue and red spectrophotometers on board the satellite. Taking a Bayesian approach, we model the spectra using the newly constructed reference spectral library and literature-driven priors. We find that for magnitudes brighter than 19 in Gaia G magnitude, around 75 per cent of the transients will be robustly classified. The efficiency of the algorithm for Type Ia supernovae (SNe I) is higher than 80 per cent for magnitudes G ≤ 18, dropping to approximately 60 per cent at magnitude G = 19. For SNe II, the efficiency varies from 75 to 60 per cent for G ≤ 18, falling to 50 per cent at G = 19. The purity of our classifier is around 95 per cent for SNe I for all magnitudes. For SNe II, it is over 90 per cent for objects with G ≤ 19. GS-TEC also estimates the redshifts with errors of σz ≤ 0.01 and epochs with uncertainties σt ≃ 13 and 32 d for SNe I and SNe II, respectively. GS-TEC has been designed to be used on partially calibrated Gaia data. However, the concept could be extended to other kinds of low-resolution spectra classification for ongoing surveys.

  9. An algorithm for temperature correcting substrate moisture measurements: aligning substrate moisture responses with environmental drivers in polytunnel-grown strawberry plants

    NASA Astrophysics Data System (ADS)

    Goodchild, Martin; Janes, Stuart; Jenkins, Malcolm; Nicholl, Chris; Kühn, Karl

    2015-04-01

    The aim of this work is to assess the use of temperature-corrected substrate moisture data to improve the relationship between environmental drivers and the measurement of substrate moisture content in high-porosity, soil-free growing media such as coir. Substrate moisture sensor data collected from strawberry plants grown in coir bags installed in a table-top system under a polytunnel illustrate the impact of temperature on capacitance-based moisture measurements. Substrate moisture measurements made in our coir arrangement exhibit the negative temperature coefficient of the permittivity of water, with diurnal changes in measured moisture content opposing those of substrate temperature. The diurnal substrate temperature variation ranged from 7 °C to 25 °C, resulting in a clearly observable temperature effect in the substrate moisture content measurements during the 23-day test period. In the laboratory we measured the ML3 soil moisture sensor (ThetaProbe) response to temperature in air, dry glass beads and water-saturated glass beads, and used a three-phase alpha (α) mixing model, also known as the Complex Refractive Index Model (CRIM), to derive the permittivity temperature coefficients for glass and water. We derived the α value and estimated the temperature coefficient of water for sensors operating at 100 MHz. Both results are in good agreement with published data. By applying the CRIM equation with the temperature coefficients of glass and water, the moisture temperature coefficient of saturated glass beads has been reduced by more than an order of magnitude to a moisture temperature coefficient of
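
    To make the CRIM relation concrete, the following sketch evaluates the three-phase alpha mixing model with an assumed linear temperature dependence for the permittivity of water. The alpha value, temperature coefficient and volume fractions below are illustrative placeholders, not the fitted values from this record.

```python
# Hedged sketch of the three-phase CRIM (alpha) mixing model and the apparent
# moisture drift caused purely by a diurnal temperature swing.
ALPHA = 0.5  # classic CRIM exponent; the study derives its own value

def crim_permittivity(theta_w, porosity, eps_water, eps_solid, eps_air=1.0, alpha=ALPHA):
    """Bulk permittivity of a solid/water/air mixture:
    eps_mix**alpha = sum_i (volume fraction_i * eps_i**alpha)."""
    theta_air = porosity - theta_w
    theta_solid = 1.0 - porosity
    eps_alpha = (theta_w * eps_water**alpha
                 + theta_air * eps_air**alpha
                 + theta_solid * eps_solid**alpha)
    return eps_alpha ** (1.0 / alpha)

def eps_water_at(temp_c, eps_ref=80.0, ref_temp=20.0, coeff=-0.37):
    """Permittivity of water with an assumed linear temperature dependence."""
    return eps_ref + coeff * (temp_c - ref_temp)

# Example: a 7 -> 25 degC swing changes the bulk permittivity even though the
# true water content (theta_w) is held constant.
for t in (7.0, 25.0):
    eps = crim_permittivity(theta_w=0.35, porosity=0.45,
                            eps_water=eps_water_at(t), eps_solid=5.0)
    print(f"T = {t:4.1f} degC -> bulk permittivity = {eps:.2f}")
```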

  10. Monocular precrash vehicle detection: features and classifiers.

    PubMed

    Sun, Zehang; Bebis, George; Miller, Ronald

    2006-07-01

    Robust and reliable vehicle detection from images acquired by a moving vehicle (i.e., on-road vehicle detection) is an important problem with applications to driver assistance systems and autonomous, self-guided vehicles. The focus of this work is on the issues of feature extraction and classification for rear-view vehicle detection. Specifically, by treating the problem of vehicle detection as a two-class classification problem, we have investigated several different feature extraction methods such as principal component analysis, wavelets, and Gabor filters. To evaluate the extracted features, we have experimented with two popular classifiers, neural networks and support vector machines (SVMs). Based on our evaluation results, we have developed an on-board real-time monocular vehicle detection system that is capable of acquiring grey-scale images, using Ford's proprietary low-light camera, achieving an average detection rate of 10 Hz. Our vehicle detection algorithm consists of two main steps: a multiscale driven hypothesis generation step and an appearance-based hypothesis verification step. During the hypothesis generation step, image locations where vehicles might be present are extracted. This step uses multiscale techniques not only to speed up detection, but also to improve system robustness. The appearance-based hypothesis verification step verifies the hypotheses using Gabor features and SVMs. The system has been tested in Ford's concept vehicle under different traffic conditions (e.g., structured highway, complex urban streets, and varying weather conditions), illustrating good performance.

  11. 75 FR 707 - Classified National Security Information

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-05

    ... National Security Information Memorandum of December 29, 2009--Implementation of the Executive Order ``Classified National Security Information'' Order of December 29, 2009--Original Classification Authority #0... 13526 of December 29, 2009 Classified National Security Information This order prescribes a...

  12. A PC based neural network algorithm for measurement of heart rate variability.

    PubMed

    Foo, T T; Hull, S S; Cheung, J Y

    1995-01-01

    Heart rate variability has recently been shown to be a viable index for predicting sudden cardiac death. The goal of this research is to investigate the use of a neural network technique to classify detected QRS complexes into normal and abnormal ones. A single-layer perceptron neural network is used for QRS pattern learning and classification. Results with real data show that the algorithm gives a 99% correct QRS detection rate.
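
    A single-layer perceptron classifier of the kind described above can be sketched as follows; the feature vectors are synthetic placeholders rather than features extracted from real ECG recordings.

```python
# Minimal sketch of a single-layer perceptron labelling QRS feature vectors
# as normal (0) or abnormal (1).
from sklearn.linear_model import Perceptron
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Pretend each row is a small feature vector describing one detected QRS complex.
X, y = make_classification(n_samples=500, n_features=8, random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=7)

clf = Perceptron(max_iter=1000, tol=1e-3).fit(X_tr, y_tr)
print("classification accuracy:", clf.score(X_te, y_te))
```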

  13. Classifying adolescent attention-deficit/hyperactivity disorder (ADHD) based on functional and structural imaging.

    PubMed

    Iannaccone, Reto; Hauser, Tobias U; Ball, Juliane; Brandeis, Daniel; Walitza, Susanne; Brem, Silvia

    2015-10-01

    Attention-deficit/hyperactivity disorder (ADHD) is a common disabling psychiatric disorder associated with consistent deficits in error processing, inhibition and regionally decreased grey matter volumes. The diagnosis is based on clinical presentation, interviews and questionnaires, which are to some degree subjective and would benefit from verification through biomarkers. Here, pattern recognition of multiple discriminative functional and structural brain patterns was applied to classify adolescents with ADHD and controls. Functional activation features in a Flanker/NoGo task probing error processing and inhibition along with structural magnetic resonance imaging data served to predict group membership using support vector machines (SVMs). The SVM pattern recognition algorithm correctly classified 77.78% of the subjects with a sensitivity and specificity of 77.78% based on error processing. Predictive regions for controls were mainly detected in core areas for error processing and attention such as the medial and dorsolateral frontal areas reflecting deficient processing in ADHD (Hart et al., in Hum Brain Mapp 35:3083-3094, 2014), and overlapped with decreased activations in patients in conventional group comparisons. Regions more predictive for ADHD patients were identified in the posterior cingulate, temporal and occipital cortex. Interestingly despite pronounced univariate group differences in inhibition-related activation and grey matter volumes the corresponding classifiers failed or only yielded a poor discrimination. The present study corroborates the potential of task-related brain activation for classification shown in previous studies. It remains to be clarified whether error processing, which performed best here, also contributes to the discrimination of useful dimensions and subtypes, different psychiatric disorders, and prediction of treatment success across studies and sites.

  14. Brain-computer interface classifier for wheelchair commands using neural network with fuzzy particle swarm optimization.

    PubMed

    Chai, Rifai; Ling, Sai Ho; Hunter, Gregory P; Tran, Yvonne; Nguyen, Hung T

    2014-09-01

    This paper presents the classification of a three-class mental task-based brain-computer interface (BCI) that uses the Hilbert-Huang transform for the features extractor and fuzzy particle swarm optimization with cross-mutated-based artificial neural network (FPSOCM-ANN) for the classifier. The experiments were conducted on five able-bodied subjects and five patients with tetraplegia using electroencephalography signals from six channels, and different time-windows of data were examined to find the highest accuracy. For practical purposes, the best two channel combinations were chosen and presented. The three relevant mental tasks used for the BCI were letter composing, arithmetic, and Rubik's cube rolling forward, and these are associated with three wheelchair commands: left, right, and forward, respectively. An additional eyes closed task was collected for testing and used for on-off commands. The results show a dominant alpha wave during eyes closure with average classification accuracy above 90%. The accuracies for patients with tetraplegia were lower compared to the able-bodied subjects; however, this was improved by increasing the duration of the time-windows. The FPSOCM-ANN provides improved accuracies compared to genetic algorithm-based artificial neural network (GA-ANN) for three mental tasks-based BCI classifications with the best classification accuracy achieved for a 7-s time-window: 84.4% (FPSOCM-ANN) compared to 77.4% (GA-ANN). More comparisons on feature extractors and classifiers were included. For two-channel classification, the best two channels were O1 and C4, followed by second best at P3 and O2, and third best at C3 and O2. Mental arithmetic was the most correctly classified task, followed by mental Rubik's cube rolling forward and mental letter composing.

  15. Evolution of a computer program for classifying protein segments as transmembrane domains using genetic programming

    SciTech Connect

    Koza, J.R.

    1994-12-31

    The recently-developed genetic programming paradigm is used to evolve a computer program to classify a given protein segment as being a transmembrane domain or non-transmembrane area of the protein. Genetic programming starts with a primordial ooze of randomly generated computer programs composed of available programmatic ingredients and then genetically breeds the population of programs using the Darwinian principle of survival of the fittest and an analog of the naturally occurring genetic operation of crossover (sexual recombination). Automatic function definition enables genetic programming to create subroutines dynamically during the run. Genetic programming is given a training set of differently-sized protein segments and their correct classification (but no biochemical knowledge, such as hydrophobicity values). Correlation is used as the fitness measure to drive the evolutionary process. The best genetically-evolved program achieves an out-of-sample correlation of 0.968 and an out-of-sample error rate of 1.6%. This error rate is better than those reported for four other algorithms at the First International Conference on Intelligent Systems for Molecular Biology. Our genetically evolved program is an instance of an algorithm discovered by an automated learning paradigm that is superior to those written by human investigators.

  16. 32 CFR 775.5 - Classified actions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Air Act (42 U.S.C. 7609 et seq.). (b) It should be noted that a classified EA/EIS serves the same “informed decisionmaking” purpose as does a published unclassified EA/EIS. Even though the classified EA/EIS... be considered by the decisionmaker for the proposed action. The content of a classified EA/EIS...

  17. 15 CFR 4.8 - Classified Information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Classified Information. 4.8 Section 4... INFORMATION Freedom of Information Act § 4.8 Classified Information. In processing a request for information..., the information shall be reviewed to determine whether it should remain classified. Ordinarily...

  18. 32 CFR 1602.8 - Classifying authority.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Classifying authority. 1602.8 Section 1602.8 National Defense Other Regulations Relating to National Defense SELECTIVE SERVICE SYSTEM DEFINITIONS § 1602.8 Classifying authority. The term classifying authority refers to any official or board who...

  19. Evolutionary design of a fuzzy classifier from data.

    PubMed

    Chang, Xiaoguang; Lilly, John H

    2004-08-01

    Genetic algorithms show powerful capabilities for automatically designing fuzzy systems from data, but many proposed methods must be subjected to some minimal structure assumptions, such as rule base size. In this paper, we also address the design of fuzzy systems from data. A new evolutionary approach is proposed for deriving a compact fuzzy classification system directly from data without any a priori knowledge or assumptions on the distribution of the data. At the beginning of the algorithm, the fuzzy classifier is empty with no rules in the rule base and no membership functions assigned to fuzzy variables. Then, rules and membership functions are automatically created and optimized in an evolutionary process. To accomplish this, parameters of the variable input spread inference training (VISIT) algorithm are used to code fuzzy systems on the training data set. Therefore, we can derive each individual fuzzy system via the VISIT algorithm, and then search the best one via genetic operations. To evaluate the fuzzy classifier, a fuzzy expert system acts as the fitness function. This fuzzy expert system can effectively evaluate the accuracy and compactness at the same time. In the application section, we consider four benchmark classification problems: the iris data, wine data, Wisconsin breast cancer data, and Pima Indian diabetes data. Comparisons of our method with others in the literature show the effectiveness of the proposed method.

  20. Wildfire smoke detection using temporospatial features and random forest classifiers

    NASA Astrophysics Data System (ADS)

    Ko, Byoungchul; Kwak, Joon-Young; Nam, Jae-Yeal

    2012-01-01

    We propose a wildfire smoke detection algorithm that uses temporospatial visual features and an ensemble of decision trees and random forest (RF) classifiers. In general, wildfire smoke detection is particularly important for early warning systems because smoke is usually generated before flames; in addition, smoke can be detected from a long distance owing to its diffusion characteristics. In order to detect wildfire smoke using a video camera, temporospatial characteristics such as color, wavelet coefficients, motion orientation, and a histogram of oriented gradients are extracted from the preceding 100 corresponding frames and the current keyframe. Two RFs are then trained using independent temporal and spatial feature vectors. Finally, a candidate block is declared a smoke block if the average probability of the two RFs for the smoke class is the maximum. The proposed algorithm was successfully applied to various wildfire-smoke and smoke-colored videos and performed better than other related algorithms.
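
    The decision rule of averaging two random forests' class probabilities can be sketched as below; the synthetic feature matrices stand in for the paper's wavelet, motion and HOG descriptors and are not the authors' data.

```python
# Illustrative sketch: one RF on temporal features, one on spatial features,
# with a block labelled by the largest averaged class probability.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

# Synthetic stand-in: the first 30 columns play the role of temporal features,
# the remaining 40 the role of spatial features.
X, y = make_classification(n_samples=600, n_features=70, n_informative=20, random_state=3)
X_temporal, X_spatial = X[:, :30], X[:, 30:]

rf_temporal = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_temporal, y)
rf_spatial = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_spatial, y)

def classify_block(feat_temporal, feat_spatial):
    # Average the two forests' class-probability vectors and pick the largest.
    p = (rf_temporal.predict_proba(feat_temporal.reshape(1, -1))
         + rf_spatial.predict_proba(feat_spatial.reshape(1, -1))) / 2.0
    return int(np.argmax(p))  # e.g. 1 = smoke block, 0 = non-smoke

print(classify_block(X_temporal[0], X_spatial[0]))
```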

  1. Corrective work.

    ERIC Educational Resources Information Center

    Hill, Leslie A.

    1978-01-01

    Discusses some general principles for planning corrective instruction and exercises in English as a second language, and follows with examples from the areas of phonemics, phonology, lexicon, idioms, morphology, and syntax. (IFS/WGA)

  2. Vision-based posture recognition using an ensemble classifier and a vote filter

    NASA Astrophysics Data System (ADS)

    Ji, Peng; Wu, Changcheng; Xu, Xiaonong; Song, Aiguo; Li, Huijun

    2016-10-01

    Posture recognition is a very important human-robot interaction (HRI) modality. To segment an effective posture from an image, we propose an improved region-grow algorithm combined with a single Gauss colour model. The experiment shows that the improved region-grow algorithm can extract a more complete and accurate posture than the traditional single Gauss model and region-grow algorithm, and it can eliminate similar regions from the background at the same time. In the posture recognition part, in order to improve the recognition rate, we propose a CNN ensemble classifier, and in order to reduce misjudgments during continuous gesture control, a vote filter is proposed and applied to the sequence of recognition results. Compared with a single CNN classifier, the proposed CNN ensemble classifier yields a 96.27% recognition rate, and the proposed vote filter improves the recognition results and reduces misjudgments during consecutive gesture switches.
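
    A vote filter of the kind described above can be sketched as a sliding majority vote over the raw classifier outputs; the window size and the example gesture labels are assumptions, not taken from the paper.

```python
# Minimal sketch of a vote filter: the label reported at each step is the majority
# label over the last k raw classifier outputs, which suppresses isolated misjudgments.
from collections import Counter, deque

def vote_filter(predictions, window=5):
    """Smooth a sequence of class labels with a sliding majority vote."""
    recent, smoothed = deque(maxlen=window), []
    for label in predictions:
        recent.append(label)
        smoothed.append(Counter(recent).most_common(1)[0][0])
    return smoothed

# Raw outputs contain two isolated "fist" misjudgments; the filtered sequence stays "open".
raw = ["open", "open", "fist", "open", "open", "open", "fist", "open", "open"]
print(vote_filter(raw))
```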

  3. A configurable-hardware document-similarity classifier to detect web attacks.

    SciTech Connect

    Ulmer, Craig D.; Gokhale, Maya

    2010-04-01

    This paper describes our approach to adapting a text document similarity classifier based on the Term Frequency Inverse Document Frequency (TFIDF) metric to reconfigurable hardware. The TFIDF classifier is used to detect web attacks in HTTP data. In our reconfigurable hardware approach, we design a streaming, real-time classifier by simplifying an existing sequential algorithm and manipulating the classifier's model to allow decision information to be represented compactly. We have developed a set of software tools to help automate the process of converting training data to synthesizable hardware and to provide a means of trading off between accuracy and resource utilization. The Xilinx Virtex 5-LX implementation requires two orders of magnitude less memory than the original algorithm. At 166MB/s (80X the software) the hardware implementation is able to achieve Gigabit network throughput at the same accuracy as the original algorithm.
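
    A purely software sketch of TFIDF document-similarity classification of HTTP requests is given below; it does not reproduce the hardware design, and the example request strings, labels and character n-gram settings are illustrative assumptions.

```python
# Hedged sketch: classify an HTTP request by the label of its most TFIDF-similar
# training document.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

training_requests = [
    "GET /index.html HTTP/1.1",
    "GET /images/logo.png HTTP/1.1",
    "GET /login.php?user=admin'OR'1'='1 HTTP/1.1",        # known attack pattern
    "GET /cgi-bin/../../etc/passwd HTTP/1.1",             # known attack pattern
]
labels = ["benign", "benign", "attack", "attack"]

vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(3, 3))
model = vectorizer.fit_transform(training_requests)

def classify(request):
    sims = cosine_similarity(vectorizer.transform([request]), model)[0]
    return labels[sims.argmax()]        # label of the most similar training document

print(classify("GET /login.php?user=guest'OR'1'='1 HTTP/1.1"))   # -> attack
```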

  4. Fibonacci Numbers and Computer Algorithms.

    ERIC Educational Resources Information Center

    Atkins, John; Geist, Robert

    1987-01-01

    The Fibonacci Sequence describes a vast array of phenomena from nature. Computer scientists have discovered and used many algorithms which can be classified as applications of Fibonacci's sequence. In this article, several of these applications are considered. (PK)

  5. 22 CFR 125.3 - Exports of classified technical data and classified defense articles.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 2010-04-01 false Exports of classified technical data and... IN ARMS REGULATIONS LICENSES FOR THE EXPORT OF TECHNICAL DATA AND CLASSIFIED DEFENSE ARTICLES § 125.3 Exports of classified technical data and classified defense articles. (a) A request for authority...

  6. Applying an instance selection method to an evolutionary neural classifier design

    NASA Astrophysics Data System (ADS)

    Khritonenko, Dmitrii; Stanovov, Vladimir; Semenkin, Eugene

    2017-02-01

    In this paper, the application of an instance selection algorithm to the design of a neural classifier is considered. A number of existing instance selection methods are reviewed. A new wrapper method is described whose main difference from other approaches is an iterative procedure for selecting training subsets from the dataset. The approach is based on assigning a training-subsample selection probability to every instance, whose value depends on the classification success for that measurement. An evolutionary algorithm for the design of a neural classifier, used to test the efficiency of the presented approach, is also described. The approach has been implemented and tested on a set of classification problems. The testing has shown that the presented algorithm decreases computational complexity and increases the quality of the obtained classifiers. Compared with analogues found in the scientific literature, the presented algorithm was shown to be an effective tool for solving classification problems.

  7. Monitoring tool wear using classifier fusion

    NASA Astrophysics Data System (ADS)

    Kannatey-Asibu, Elijah; Yum, Juil; Kim, T. H.

    2017-02-01

    Real-time monitoring of manufacturing processes using a single sensor often poses a significant challenge, so sensor fusion has been extensively investigated in recent years for process monitoring, with significant improvement in performance. This paper presents the results for a monitoring system based on the concept of classifier fusion, and class-weighted voting is investigated to further enhance system performance. Classifier weights are based on the overall performances of the individual classifiers, and majority voting is used in decision making. Acoustic emission monitoring of tool wear during the coroning process is used to illustrate the concept. A classification rate of 87.7% was obtained for classifier fusion with unity weighting. When weighting was based on the overall performance of the respective classifiers, the classification rate improved to 95.6%. Using state performance weighting further raised it to 98.5%. Finally, the classifier fusion performance increased to 99.7% when a penalty vote was applied to the weighting factor.
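
    A minimal sketch of the weighted-voting fusion step described above, where each classifier's vote is weighted by its held-out accuracy; the paper's state-performance weighting and penalty vote are not reproduced, and the classifier set and numbers are illustrative.

    ```python
    import numpy as np

    def weighted_vote(predictions, weights, n_classes):
        """Fuse per-classifier label predictions by accuracy-weighted voting."""
        scores = np.zeros(n_classes)
        for label, w in zip(predictions, weights):
            scores[label] += w
        return int(np.argmax(scores))

    # Three hypothetical wear-state classifiers with held-out accuracies used as weights.
    weights = [0.80, 0.88, 0.92]
    predictions = [2, 1, 1]          # each classifier's predicted wear state for one sample
    print(weighted_vote(predictions, weights, n_classes=3))   # -> 1
    ```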

  8. Signature extension through the application of cluster matching algorithms to determine appropriate signature transformations

    NASA Technical Reports Server (NTRS)

    Lambeck, P. F.; Rice, D. P.

    1976-01-01

    Signature extension is intended to increase the space-time range over which a set of training statistics can be used to classify data without significant loss of recognition accuracy. A first cluster matching algorithm MASC (Multiplicative and Additive Signature Correction) was developed at the Environmental Research Institute of Michigan to test the concept of using associations between training and recognition area cluster statistics to define an average signature transformation. A more recent signature extension module CROP-A (Cluster Regression Ordered on Principal Axis) has shown evidence of making significant associations between training and recognition area cluster statistics, with the clusters to be matched being selected automatically by the algorithm.

  9. Classifying the Quantum Phases of Matter

    DTIC Science & Technology

    2015-01-01

    This is the final technical report for "Classifying the Quantum Phases of Matter" (contract FA8750-12-2-0308), California Institute of Technology, January 2015, covering the period January 2012 to August 2014.

  10. Developing collaborative classifiers using an expert-based model

    USGS Publications Warehouse

    Mountrakis, G.; Watts, R.; Luo, L.; Wang, Jingyuan

    2009-01-01

    This paper presents a hierarchical, multi-stage adaptive strategy for image classification. We iteratively apply various classification methods (e.g., decision trees, neural networks), identify regions of parametric and geographic space where accuracy is low, and in these regions, test and apply alternate methods repeating the process until the entire image is classified. Currently, classifiers are evaluated through human input using an expert-based system; therefore, this paper acts as the proof of concept for collaborative classifiers. Because we decompose the problem into smaller, more manageable sub-tasks, our classification exhibits increased flexibility compared to existing methods since classification methods are tailored to the idiosyncrasies of specific regions. A major benefit of our approach is its scalability and collaborative support since selected low-accuracy classifiers can be easily replaced with others without affecting classification accuracy in high accuracy areas. At each stage, we develop spatially explicit accuracy metrics that provide straightforward assessment of results by non-experts and point to areas that need algorithmic improvement or ancillary data. Our approach is demonstrated in the task of detecting impervious surface areas, an important indicator for human-induced alterations to the environment, using a 2001 Landsat scene from Las Vegas, Nevada. © 2009 American Society for Photogrammetry and Remote Sensing.

  11. Weighted Hybrid Decision Tree Model for Random Forest Classifier

    NASA Astrophysics Data System (ADS)

    Kulkarni, Vrushali Y.; Sinha, Pradeep K.; Petare, Manisha C.

    2016-06-01

    Random Forest is an ensemble, supervised machine learning algorithm. An ensemble generates many classifiers and combines their results by majority voting. Random forest uses the decision tree as its base classifier. In decision tree induction, an attribute split/evaluation measure is used to decide the best split at each node of the decision tree. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation among them. The work presented in this paper concerns attribute split measures and proceeds in two steps: first, a theoretical study of five selected split measures is carried out and a comparison matrix is generated to capture the pros and cons of each measure. These theoretical results are then verified empirically: a random forest is generated using each of the five split measures in turn, i.e., a random forest using information gain, a random forest using gain ratio, and so on. Based on this theoretical and empirical analysis, a new hybrid decision tree model for the random forest classifier is proposed. In this model, individual decision trees in the Random Forest are generated using different split measures. The model is augmented by weighted voting based on the strength of the individual trees. The new approach shows a notable increase in the accuracy of the random forest.
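
    A rough sketch of the hybrid idea under stated assumptions: the scikit-learn `criterion` options ("gini", "entropy") stand in for the paper's five split measures, trees are trained on bootstrap samples with alternating criteria, and each tree's vote is weighted by its accuracy on its out-of-bag rows as a proxy for tree strength.

    ```python
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    criteria = ["gini", "entropy"]        # stand-ins for the five split measures
    rng = np.random.default_rng(0)
    trees, weights = [], []
    for i in range(20):
        idx = rng.integers(0, len(X_tr), len(X_tr))          # bootstrap sample
        oob = np.setdiff1d(np.arange(len(X_tr)), idx)        # out-of-bag rows
        t = DecisionTreeClassifier(criterion=criteria[i % 2], random_state=i).fit(X_tr[idx], y_tr[idx])
        trees.append(t)
        weights.append(t.score(X_tr[oob], y_tr[oob]))        # tree strength -> vote weight

    def predict(x):
        votes = np.zeros(2)
        for t, w in zip(trees, weights):
            votes[t.predict([x])[0]] += w
        return int(np.argmax(votes))

    acc = np.mean([predict(x) == yy for x, yy in zip(X_te, y_te)])
    print(f"weighted hybrid forest accuracy: {acc:.3f}")
    ```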

  12. Combining Classifiers Using Their Receiver Operating Characteristics and Maximum Likelihood Estimation*

    PubMed Central

    Haker, Steven; Wells, William M.; Warfield, Simon K.; Talos, Ion-Florin; Bhagwat, Jui G.; Goldberg-Zimring, Daniel; Mian, Asim; Ohno-Machado, Lucila; Zou, Kelly H.

    2010-01-01

    In any medical domain, it is common to have more than one test (classifier) to diagnose a disease. In image analysis, for example, there is often more than one reader or more than one algorithm applied to a certain data set. Combining classifiers is often helpful, but determining the way in which classifiers should be combined is not trivial. Standard strategies are based on learning classifier combination functions from data. We describe a simple strategy to combine results from classifiers that have not been applied to a common data set and therefore cannot undergo this type of joint training. The strategy, which assumes conditional independence of the classifiers, is based on the calculation of a combined Receiver Operating Characteristic (ROC) curve, using maximum likelihood analysis to determine a combination rule for each ROC operating point. We offer some insights into the use of ROC analysis in the field of medical imaging. PMID:16685884
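
    A small numerical sketch of the conditional-independence combination rule: given each classifier's true-positive and false-positive rates at its operating point, the joint likelihood ratio of the observed pair of decisions determines the combined call. The operating points below are invented for illustration.

    ```python
    def combined_likelihood_ratio(decisions, tprs, fprs):
        """Likelihood ratio P(decisions | disease) / P(decisions | no disease)
        for conditionally independent binary classifiers at fixed operating points."""
        lr = 1.0
        for d, tpr, fpr in zip(decisions, tprs, fprs):
            if d:   # classifier says "positive"
                lr *= tpr / fpr
            else:   # classifier says "negative"
                lr *= (1 - tpr) / (1 - fpr)
        return lr

    tprs, fprs = [0.90, 0.75], [0.10, 0.20]     # hypothetical operating points of two readers
    for decisions in [(1, 1), (1, 0), (0, 1), (0, 0)]:
        lr = combined_likelihood_ratio(decisions, tprs, fprs)
        print(decisions, f"LR = {lr:.2f}", "-> call positive" if lr > 1 else "-> call negative")
    ```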

  13. Statistical and Machine-Learning Classifier Framework to Improve Pulse Shape Discrimination System Design

    SciTech Connect

    Wurtz, R.; Kaplan, A.

    2015-10-28

    Pulse shape discrimination (PSD) is a variety of statistical classifier. Fully realized statistical classifiers rely on a comprehensive set of tools for their design, construction, and implementation, and PSD advances rely on improvements to the implemented algorithm, whether through conventional statistical classifiers or machine learning methods. This paper provides the reader with a glossary of classifier-building elements and their functions in a fully designed and operational classifier framework that can be used to discover opportunities for improving PSD classifier projects. This paper recommends reporting the PSD classifier's receiver operating characteristic (ROC) curve and its behavior at a gamma rejection rate (GRR) relevant for realistic applications.

  14. A probabilistic Classifier System and its application in data mining.

    PubMed

    Muruzábal, Jorge

    2006-01-01

    This article presents a new Classifier System framework for classification tasks called BYP-CS (BaYesian Predictive Classifier System). The proposed CS approach abandons the focus on high accuracy and addresses a well-posed data mining goal, namely, uncovering the low-uncertainty patterns of dependence that often manifest in the data. To attain this goal, BYP-CS uses a fair amount of probabilistic machinery, which brings its representation language closer to other related methods of interest in statistics and machine learning. On the practical side, the new algorithm yields stable learning of compact populations, and these still maintain a respectable amount of predictive power. Furthermore, the emerging rules self-organize in interesting ways, sometimes providing unexpected solutions to certain benchmark problems.

  15. Backscatter Correction Algorithm for TBI Treatment Conditions

    SciTech Connect

    Sanchez-Nieto, B.; Sanchez-Doblado, F.; Arrans, R.; Terron, J.A.; Errazquin, L.

    2015-01-15

    The accuracy requirement in target dose delivery is, according to the ICRU, ±5%. This holds not only in standard radiotherapy but also in total body irradiation (TBI). Physical dosimetry plays an important role in achieving this recommended level. The semi-infinite phantoms customarily used for dosimetry give scatter conditions different from those of the finite thickness of the patient, so the dose calculated at points in the patient close to the beam exit surface may be overestimated. It is therefore necessary to quantify the backscatter factor in order to decrease the uncertainty in this dose calculation. Backward scatter has been well studied at standard distances. The present work evaluates the backscatter phenomenon under our particular TBI treatment conditions. As a consequence of this study, a semi-empirical expression has been derived to calculate the backscatter factor within 0.3% uncertainty. This factor depends linearly on the depth and exponentially on the underlying tissue. Differences found in the qualitative behavior with respect to standard distances are due to scatter in the bunker wall close to the measurement point.

  16. Development of the Landsat Data Continuity Mission Cloud Cover Assessment Algorithms

    USGS Publications Warehouse

    Scaramuzza, Pat; Bouchard, M.A.; Dwyer, J.L.

    2012-01-01

    The upcoming launch of the Operational Land Imager (OLI) will start the next era of the Landsat program. However, the Automated Cloud-Cover Assessment (ACCA) algorithm used on Landsat 7 requires a thermal band and is thus not suited for OLI. There will be a thermal instrument on the Landsat Data Continuity Mission (LDCM), the Thermal Infrared Sensor, but it may not be available during all OLI collections. This illustrates a need for cloud-cover assessment for LDCM in the absence of thermal data. To research possibilities for full-resolution OLI cloud assessment, a global data set of 207 Landsat 7 scenes with manually generated cloud masks was created. It was used to evaluate the ACCA algorithm, showing that the algorithm correctly classified 79.9% of a standard test subset of 3.95 × 10^9 pixels. The data set was also used to develop and validate two successor algorithms for use with OLI data: one derived from an off-the-shelf machine learning package and one based on ACCA but enhanced by a simple neural network. These comprehensive cloud-cover assessment algorithms were shown to correctly classify pixels as cloudy or clear 88.5% and 89.7% of the time, respectively.

  17. Classifier Subset Selection for the Stacked Generalization Method Applied to Emotion Recognition in Speech

    PubMed Central

    Álvarez, Aitor; Sierra, Basilio; Arruti, Andoni; López-Gil, Juan-Miguel; Garay-Vitoria, Nestor

    2015-01-01

    In this paper, a new supervised classification paradigm, called classifier subset selection for stacked generalization (CSS stacking), is presented to deal with speech emotion recognition. The new approach consists of an improvement of a bi-level multi-classifier system known as stacking generalization by means of an integration of an estimation of distribution algorithm (EDA) in the first layer to select the optimal subset from the standard base classifiers. The good performance of the proposed new paradigm was demonstrated over different configurations and datasets. First, several CSS stacking classifiers were constructed on the RekEmozio dataset, using some specific standard base classifiers and a total of 123 spectral, quality and prosodic features computed using in-house feature extraction algorithms. These initial CSS stacking classifiers were compared to other multi-classifier systems and the employed standard classifiers built on the same set of speech features. Then, new CSS stacking classifiers were built on RekEmozio using a different set of both acoustic parameters (extended version of the Geneva Minimalistic Acoustic Parameter Set (eGeMAPS)) and standard classifiers and employing the best meta-classifier of the initial experiments. The performance of these two CSS stacking classifiers was evaluated and compared. Finally, the new paradigm was tested on the well-known Berlin Emotional Speech database. We compared the performance of single, standard stacking and CSS stacking systems using the same parametrization of the second phase. All of the classifications were performed at the categorical level, including the six primary emotions plus the neutral one. PMID:26712757

  18. Optimally splitting cases for training and testing high dimensional classifiers

    PubMed Central

    2011-01-01

    Background We consider the problem of designing a study to develop a predictive classifier from high dimensional data. A common study design is to split the sample into a training set and an independent test set, where the former is used to develop the classifier and the latter to evaluate its performance. In this paper we address the question of what proportion of the samples should be devoted to the training set, and how that proportion impacts the mean squared error (MSE) of the prediction accuracy estimate. Results We develop a non-parametric algorithm for determining an optimal splitting proportion that can be applied with a specific dataset and classifier algorithm. We also perform a broad simulation study to better understand the factors that determine the best split proportions and to evaluate commonly used splitting strategies (1/2 training or 2/3 training) under a wide variety of conditions. These methods are based on a decomposition of the MSE into three intuitive component parts. Conclusions By applying these approaches to a number of synthetic and real microarray datasets we show that for linear classifiers the optimal proportion depends on the overall number of samples available and the degree of differential expression between the classes. The optimal proportion was found to depend on the full dataset size (n) and classification accuracy, with higher accuracy and smaller n resulting in more samples being assigned to the training set. The commonly used strategy of allocating 2/3 of cases for training was close to optimal for reasonably sized datasets (n ≥ 100) with strong signals (i.e., 85% or greater full dataset accuracy). In general, we recommend use of our nonparametric resampling approach for determining the optimal split. This approach can be applied to any dataset, using any predictor development method, to determine the best split. PMID:21477282
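
    A simplified sketch of the resampling idea: for each candidate training proportion, repeatedly split the data, fit the classifier, and measure how far the test-set accuracy estimate falls from a reference accuracy. The classifier and dataset are placeholders, and the reference accuracy here is estimated by cross-validation on the full data rather than by the paper's MSE decomposition.

    ```python
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score, train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

    # Reference accuracy from the full dataset (proxy for the achievable accuracy).
    full_acc = cross_val_score(clf, X, y, cv=5).mean()

    def split_mse(train_frac, n_repeats=30):
        errs = []
        for seed in range(n_repeats):
            X_tr, X_te, y_tr, y_te = train_test_split(
                X, y, train_size=train_frac, stratify=y, random_state=seed)
            acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
            errs.append((acc - full_acc) ** 2)
        return np.mean(errs)

    for frac in (0.4, 0.5, 0.6, 2 / 3, 0.8):
        print(f"train fraction {frac:.2f}: MSE of accuracy estimate = {split_mse(frac):.5f}")
    ```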

  19. Automatic class labeling of classified imagery using a hyperspectral library

    NASA Astrophysics Data System (ADS)

    Parshakov, Ilia

    Image classification is a fundamental information extraction procedure in remote sensing that is used in land-cover and land-use mapping. Despite being considered as a replacement for manual mapping, it still requires some degree of analyst intervention. This makes the process of image classification time consuming, subjective, and error prone. For example, in unsupervised classification, pixels are automatically grouped into classes, but the user has to manually label the classes as one land-cover type or another. As a general rule, the larger the number of classes, the more difficult it is to assign meaningful class labels. A fully automated post-classification procedure for class labeling was developed in an attempt to alleviate this problem. It labels spectral classes by matching their spectral characteristics with reference spectra. A Landsat TM image of an agricultural area was used for performance assessment. The algorithm was used to label a 20- and 100-class image generated by the ISODATA classifier. The 20-class image was used to compare the technique with the traditional manual labeling of classes, and the 100-class image was used to compare it with the Spectral Angle Mapper and Maximum Likelihood classifiers. The proposed technique produced a map that had an overall accuracy of 51%, outperforming the manual labeling (40% to 45% accuracy, depending on the analyst performing the labeling) and the Spectral Angle Mapper classifier (39%), but underperformed compared to the Maximum Likelihood technique (53% to 63%). The newly developed class-labeling algorithm provided better results for alfalfa, beans, corn, grass and sugar beet, whereas canola, corn, fallow, flax, potato, and wheat were identified with similar or lower accuracy, depending on the classifier it was compared with.

  20. 48 CFR 927.207 - Classified contracts.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 48 Federal Acquisition Regulations System 5 2014-10-01 2014-10-01 false Classified contracts. 927.207 Section 927.207 Federal Acquisition Regulations System DEPARTMENT OF ENERGY GENERAL CONTRACTING REQUIREMENTS PATENTS, DATA, AND COPYRIGHTS Patents 927.207 Classified contracts....

  1. Standardizing the protocol for hemispherical photographs: accuracy assessment of binarization algorithms.

    PubMed

    Glatthorn, Jonas; Beckschäfer, Philip

    2014-01-01

    Hemispherical photography is a well-established method to optically assess ecological parameters related to plant canopies, e.g., ground-level light regimes and the distribution of foliage within the crown space. Interpreting hemispherical photographs involves classifying pixels as either sky or vegetation. A wide range of automatic thresholding or binarization algorithms exists to classify the photographs, and this variety in methodology hampers the ability to compare results across studies. To identify an optimal threshold selection method, this study assessed the accuracy of seven binarization methods implemented in software currently available for the processing of hemispherical photographs. For this purpose, binarizations obtained by the algorithms were compared to reference data generated through a manual binarization of a stratified random selection of pixels, an approach adopted from the accuracy assessment of map classifications known from remote sensing studies. Percentage correct (Pc) and kappa statistics (K) were calculated. The accuracy of the algorithms was assessed for photographs taken with automatic exposure settings (auto-exposure) and photographs taken with settings which avoid overexposure (histogram-exposure). In addition, gap fraction values derived from hemispherical photographs were compared with estimates derived from the manually classified reference pixels. All tested algorithms were shown to be sensitive to overexposure. Three of the algorithms showed an accuracy high enough to be recommended for the processing of histogram-exposed hemispherical photographs: "Minimum" (Pc 98.8%; K 0.952), "Edge Detection" (Pc 98.1%; K 0.950), and "Minimum Histogram" (Pc 98.1%; K 0.947). The Minimum algorithm overestimated gap fraction least of all (11%); the overestimations by the Edge Detection (63%) and Minimum Histogram (67%) algorithms were considerably larger. For the remaining four evaluated algorithms (IsoData, Maximum Entropy, MinError, and Otsu
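
    The accuracy figures above are percentage correct and Cohen's kappa against manually labeled reference pixels; a minimal sketch of both computations on made-up binary (vegetation/sky) labels follows.

    ```python
    import numpy as np

    def percentage_correct_and_kappa(reference, predicted):
        """Percentage correct (Pc) and Cohen's kappa for a binary sky/vegetation classification."""
        reference, predicted = np.asarray(reference), np.asarray(predicted)
        po = np.mean(reference == predicted)                      # observed agreement
        pe = sum(np.mean(reference == c) * np.mean(predicted == c) for c in (0, 1))  # chance agreement
        kappa = (po - pe) / (1 - pe)
        return 100 * po, kappa

    rng = np.random.default_rng(1)
    reference = rng.integers(0, 2, 1000)                                       # manually classified pixels
    predicted = np.where(rng.random(1000) < 0.95, reference, 1 - reference)    # ~5% disagreement
    pc, kappa = percentage_correct_and_kappa(reference, predicted)
    print(f"Pc = {pc:.1f}%, kappa = {kappa:.3f}")
    ```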

  2. A distributed approach for optimizing cascaded classifier topologies in real-time stream mining systems.

    PubMed

    Foo, Brian; van der Schaar, Mihaela

    2010-11-01

    In this paper, we discuss distributed optimization techniques for configuring classifiers in a real-time, informationally-distributed stream mining system. Due to the large volume of streaming data, stream mining systems must often cope with overload, which can lead to poor performance and intolerable processing delay for real-time applications. Furthermore, optimizing over an entire system of classifiers is a difficult task since changing the filtering process at one classifier can impact both the feature values of data arriving at classifiers further downstream and thus, the classification performance achieved by an ensemble of classifiers, as well as the end-to-end processing delay. To address this problem, this paper makes three main contributions: 1) Based on classification and queuing theoretic models, we propose a utility metric that captures both the performance and the delay of a binary filtering classifier system. 2) We introduce a low-complexity framework for estimating the system utility by observing, estimating, and/or exchanging parameters between the inter-related classifiers deployed across the system. 3) We provide distributed algorithms to reconfigure the system, and analyze the algorithms based on their convergence properties, optimality, information exchange overhead, and rate of adaptation to non-stationary data sources. We provide results using different video classifier systems.

  3. Enhancing atlas based segmentation with multiclass linear classifiers

    SciTech Connect

    Sdika, Michaël

    2015-12-15

    Purpose: To present a method to enrich atlases for atlas based segmentation. Such enriched atlases can then be used as a single atlas or within a multiatlas framework. Methods: In this paper, machine learning techniques have been used to enhance the atlas based segmentation approach. The enhanced atlas defined in this work is a pair composed of a gray level image alongside an image of multiclass classifiers with one classifier per voxel. Each classifier embeds local information from the whole training dataset that allows for the correction of some systematic errors in the segmentation and accounts for the possible local registration errors. The authors also propose to use these images of classifiers within a multiatlas framework: results produced by a set of such local classifier atlases can be combined using a label fusion method. Results: Experiments have been made on the in vivo images of the IBSR dataset and a comparison has been made with several state-of-the-art methods such as FreeSurfer and the multiatlas nonlocal patch based method of Coupé or Rousseau. These experiments show that their method is competitive with state-of-the-art methods while having a low computational cost. Further enhancement has also been obtained with a multiatlas version of their method. It is also shown that, in this case, nonlocal fusion is unnecessary. The multiatlas fusion can therefore be done efficiently. Conclusions: The single atlas version has similar quality as state-of-the-art multiatlas methods but with the computational cost of a naive single atlas segmentation. The multiatlas version offers an improvement in quality and can be done efficiently without a nonlocal strategy.

  4. The effect of atmospheric and topographic correction on pixel-based image composites: Improved forest cover detection in mountain environments

    NASA Astrophysics Data System (ADS)

    Vanonckelen, Steven; Lhermitte, Stef; Van Rompaey, Anton

    2015-03-01

    Quantification of forest cover is essential as a tool to stimulate forest management and conservation. Image compositing techniques, which sample the most suitable pixel from multi-temporal image acquisitions, provide an important tool for forest cover detection as they offer alternatives for missing data due to cloud cover and data discontinuities. At present, however, it is not clear to what extent forest cover detection based on compositing can be improved if the source imagery is first corrected for topographic distortions on a pixel basis. In this study, the results of a pixel compositing algorithm with and without preprocessing topographic correction are compared for a study area covering 9 Landsat footprints in the Romanian Carpathians based on two different classifiers: Maximum Likelihood (ML) and Support Vector Machine (SVM). Results show that classifier selection has a stronger impact on the classification accuracy than topographic correction. Finally, application of the optimal method (SVM classifier with topographic correction) to the Romanian Carpathian Ecoregion between 1985, 1995 and 2010 shows a steady greening due to more afforestation than deforestation.

  5. Organizational coevolutionary classifiers with fuzzy logic used in intrusion detection

    NASA Astrophysics Data System (ADS)

    Chen, Zhenguo

    2009-07-01

    Intrusion detection is an important technique in the defense-in-depth network security framework and has been a hot topic in computer security in recent years. To address the intrusion detection problem, we introduce fuzzy logic into the Organization CoEvolutionary algorithm [1] and present an Organizational Coevolutionary Classification algorithm with Fuzzy Logic. In this paper, we give an intrusion detection model based on this classifier, illustrate it with a representative dataset, and apply it to the real-world KDD Cup 1999 network dataset. The experimental results show that intrusion detection based on Organizational Coevolutionary Classifiers with Fuzzy Logic achieves higher recognition accuracy than the general method.

  6. Object tracking by co-trained classifiers and particle filters

    NASA Astrophysics Data System (ADS)

    Tang, Liang; Li, Shanqing; Liu, Keyan; Wang, Lei

    2010-01-01

    This paper presents an online object tracking method in which co-training and particle filter algorithms cooperate and complement each other for robust and effective tracking. Under the particle filter framework, a semi-supervised co-training algorithm is adopted to construct, update online, and mutually boost two complementary object classifiers, which improves the discriminative ability of the particles and their adaptability to appearance variations caused by illumination changes, pose variation, camera shake, and occlusion. Meanwhile, to make the sampling procedure more efficient, knowledge from coarse confidence maps and spatial-temporal constraints is introduced through importance sampling. This not only improves the accuracy and efficiency of the sampling procedure but also provides more reliable training samples for co-training. Experimental results verify the effectiveness and robustness of our method.

  7. Cancer Classification in Microarray Data using a Hybrid Selective Independent Component Analysis and υ-Support Vector Machine Algorithm

    PubMed Central

    Saberkari, Hamidreza; Shamsi, Mousa; Joroughi, Mahsa; Golabi, Faegheh; Sedaaghi, Mohammad Hossein

    2014-01-01

    Microarray data play an important role in the identification and classification of cancer tissues. The small number of samples typically available in microarray-based cancer research is a persistent concern that complicates classifier design. For this reason, gene selection techniques should be applied before classification to remove noninformative genes from the microarray data. An appropriate gene selection method can significantly improve the performance of cancer classification. In this paper, we use selective independent component analysis (SICA) to reduce the dimension of microarray data. This selective algorithm avoids the instability problem that occurs when conventional independent component analysis (ICA) methods are employed. First, the reconstruction error is analyzed and a selective set of independent components, those contributing least to the reconstruction error of new samples, is chosen. Then, several modified support vector machine (υ-SVM) sub-classifiers are trained simultaneously, and the sub-classifier with the highest recognition rate is selected. The proposed algorithm is applied to three cancer datasets (leukemia, breast cancer, and lung cancer) and its results are compared with those of other existing methods. The results illustrate that the proposed algorithm (SICA + υ-SVM) achieves higher accuracy and validity; in particular, it exhibits a relative improvement of 3.3% in correctness rate over the ICA + SVM and SVM algorithms on the lung cancer dataset. PMID:25426433

  8. Cancer Classification in Microarray Data using a Hybrid Selective Independent Component Analysis and υ-Support Vector Machine Algorithm.

    PubMed

    Saberkari, Hamidreza; Shamsi, Mousa; Joroughi, Mahsa; Golabi, Faegheh; Sedaaghi, Mohammad Hossein

    2014-10-01

    Microarray data play an important role in the identification and classification of cancer tissues. The small number of samples typically available in microarray-based cancer research is a persistent concern that complicates classifier design. For this reason, gene selection techniques should be applied before classification to remove noninformative genes from the microarray data. An appropriate gene selection method can significantly improve the performance of cancer classification. In this paper, we use selective independent component analysis (SICA) to reduce the dimension of microarray data. This selective algorithm avoids the instability problem that occurs when conventional independent component analysis (ICA) methods are employed. First, the reconstruction error is analyzed and a selective set of independent components, those contributing least to the reconstruction error of new samples, is chosen. Then, several modified support vector machine (υ-SVM) sub-classifiers are trained simultaneously, and the sub-classifier with the highest recognition rate is selected. The proposed algorithm is applied to three cancer datasets (leukemia, breast cancer, and lung cancer) and its results are compared with those of other existing methods. The results illustrate that the proposed algorithm (SICA + υ-SVM) achieves higher accuracy and validity; in particular, it exhibits a relative improvement of 3.3% in correctness rate over the ICA + SVM and SVM algorithms on the lung cancer dataset.

  9. Voltage correction power flow

    SciTech Connect

    Rajicic, D.; Ackovski, R.; Taleski, R. . Dept. of Electrical Engineering)

    1994-04-01

    A method for power flow solution of weakly meshed distribution and transmission networks is presented. It is based on oriented ordering of network elements, which allows an efficient construction of the loop impedance matrix and a rational organization of processes such as power summation (backward sweep), current summation (backward sweep), and node voltage calculation (forward sweep). The first step of the algorithm is the calculation of node voltages on the radial part of the network. The second step is the calculation of the breakpoint currents. The procedure then returns to the first step, which is preceded by a voltage correction. It is illustrated that, using the voltage correction approach, the iterative process of weakly meshed network voltage calculation is faster and more reliable.
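
    A compact sketch of the backward/forward sweep on a purely radial feeder, assuming constant-power loads; the breakpoint-current and voltage-correction steps for the meshed part of the network are not reproduced, and the toy feeder data are invented.

    ```python
    import numpy as np

    # Toy 4-node radial feeder: node 0 is the source; parent[i] is the upstream node of i.
    parent = {1: 0, 2: 1, 3: 1}
    z = {1: 0.02 + 0.04j, 2: 0.03 + 0.05j, 3: 0.025 + 0.045j}    # branch impedances (pu)
    s_load = {1: 0.10 + 0.05j, 2: 0.15 + 0.07j, 3: 0.08 + 0.03j} # complex loads (pu)

    v = {n: 1.0 + 0.0j for n in [0, 1, 2, 3]}
    for _ in range(20):
        # Backward sweep: accumulate branch currents from the leaves toward the source.
        i_branch = {n: np.conj(s_load[n] / v[n]) for n in [3, 2, 1]}
        for n in [3, 2]:
            i_branch[parent[n]] += i_branch[n]
        # Forward sweep: update node voltages from the source outward.
        for n in [1, 2, 3]:
            v[n] = v[parent[n]] - z[n] * i_branch[n]

    print({n: round(abs(v[n]), 4) for n in v})   # converged voltage magnitudes
    ```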

  10. Logarithmic learning for generalized classifier neural network.

    PubMed

    Ozyildirim, Buse Melis; Avci, Mutlu

    2014-12-01

    The generalized classifier neural network has been introduced as an efficient classifier. Unless the initial smoothing parameter value is close to the optimal one, however, it suffers from convergence problems and requires a long time to converge. In this work, to overcome this problem, a logarithmic learning approach is proposed. The proposed method uses a logarithmic cost function instead of the squared error; minimizing this cost function reduces the number of iterations needed to reach the minimum. The proposed method is tested on 15 different data sets, and the performance of the logarithmic-learning generalized classifier neural network is compared with that of the standard one. Because of the operating range of the radial basis function used by the generalized classifier neural network, the proposed logarithmic cost function and its derivative are continuous, which makes it possible to exploit the fast convergence of logarithmic learning. Owing to this fast convergence, training time is decreased by up to 99.2%, and classification performance may also be improved by up to 60%. According to the test results, the proposed method provides a solution to the time requirement problem of the generalized classifier neural network and may also improve classification accuracy.
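
    The record does not give the exact cost function; a plausible illustration of the general point, under that caveat, contrasts a squared-error cost with a logarithmic (cross-entropy-style) cost for a predicted probability p of the true class: the logarithmic cost keeps a large gradient when p is far from 1, which is why it tends to converge in fewer iterations.

    ```python
    import numpy as np

    p = np.linspace(0.01, 0.99, 5)          # predicted probability of the true class
    squared = (1 - p) ** 2                  # squared-error cost
    logarithmic = -np.log(p)                # logarithmic cost

    # Gradient magnitudes with respect to p: 2(1 - p) for squared error versus 1 / p for the log cost.
    for pi, se, le in zip(p, squared, logarithmic):
        print(f"p={pi:.2f}  squared={se:.3f} (grad {2 * (1 - pi):.2f})  log={le:.3f} (grad {1 / pi:.2f})")
    ```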

  11. Development and Testing of Data Mining Algorithms for Earth Observation

    NASA Technical Reports Server (NTRS)

    Glymour, Clark

    2005-01-01

    The new algorithms developed under this project included a principled procedure for classification of objects, events or circumstances according to a target variable when a very large number of potential predictor variables is available but the number of cases that can be used for training a classifier is relatively small. These "high dimensional" problems require finding a minimal set of variables, called the Markov Blanket, sufficient for predicting the value of the target variable. An algorithm, the Markov Blanket Fan Search, was developed, implemented and tested on both simulated and real data in conjunction with a graphical model classifier, which was also implemented. Another algorithm developed and implemented in TETRAD IV for time series elaborated on work by C. Granger and N. Swanson, which in turn exploited some of our earlier work. The algorithms in question learn a linear time series model from data. Given such a time series, the simultaneous residual covariances, after factoring out time dependencies, may provide information about causal processes that occur more rapidly than the time series representation allows, so-called simultaneous or contemporaneous causal processes. Working with A. Monetta, a graduate student from Italy, we produced the correct statistics for estimating the contemporaneous causal structure from time series data using the TETRAD IV suite of algorithms. Two economists, David Bessler and Kevin Hoover, have independently published applications using TETRAD-style algorithms to the same purpose. These implementations and algorithmic developments were separately used in two kinds of studies of climate data: short time series of geographically proximate climate variables predicting agricultural effects in California, and longer duration climate measurements of temperature teleconnections.

  12. Effects of cultural characteristics on building an emotion classifier through facial expression analysis

    NASA Astrophysics Data System (ADS)

    da Silva, Flávio Altinier Maximiano; Pedrini, Helio

    2015-03-01

    Facial expressions are an important outward sign of human moods and emotions. Algorithms capable of recognizing facial expressions and associating them with emotions were developed and employed to compare the expressions that different cultural groups use to show their emotions. Static pictures of predominantly occidental and oriental subjects from public datasets were used to train machine learning algorithms, while local binary patterns, histograms of oriented gradients (HOG), and Gabor filters were employed to describe the facial expressions for six different basic emotions. The most consistent combination, the association of the HOG descriptor with support vector machines, was then used to classify the other cultural group: there was a strong drop in accuracy, meaning that the subtle differences between the facial expressions of each culture affected classifier performance. Finally, a classifier was trained with images from both occidental and oriental subjects and its accuracy was higher on multicultural data, evidencing the need for a multicultural training set to build an efficient classifier.

  13. Correction of Facial Deformity in Sturge–Weber Syndrome

    PubMed Central

    Yamaguchi, Kazuaki; Lonic, Daniel; Chen, Chit

    2016-01-01

    Background: Although previous studies have reported soft-tissue management in surgical treatment of Sturge–Weber syndrome (SWS), there are few reports describing facial bone surgery in this patient group. The purpose of this study is to examine the validity of our multidisciplinary algorithm for correcting facial deformities associated with SWS. To the best of our knowledge, this is the first study on orthognathic surgery for SWS patients. Methods: A retrospective chart review included 2 SWS patients who completed the surgical treatment algorithm. Radiographic and clinical data were recorded, and a treatment algorithm was derived. Results: According to the Roach classification, the first patient was classified as type I presenting with both facial and leptomeningeal vascular anomalies without glaucoma and the second patient as type II presenting only with a hemifacial capillary malformation. Considering positive findings in seizure history and intracranial vascular anomalies in the first case, the anesthetic management was modified to omit hypotensive anesthesia because of the potential risk of intracranial pressure elevation. Primarily, both patients underwent 2-jaw orthognathic surgery and facial bone contouring including genioplasty, zygomatic reduction, buccal fat pad removal, and masseter reduction without major complications. In the second step, the volume and distribution of facial soft tissues were altered by surgical resection and reposition. Both patients were satisfied with the surgical result. Conclusions: Our multidisciplinary algorithm can systematically detect potential risk factors. Correction of the asymmetric face by successive bone and soft-tissue surgery enables the patients to reduce their psychosocial burden and increase their quality of life. PMID:27622111

  14. Recognition of multiple imbalanced cancer types based on DNA microarray data using ensemble classifiers.

    PubMed

    Yu, Hualong; Hong, Shufang; Yang, Xibei; Ni, Jun; Dan, Yuanyuan; Qin, Bin

    2013-01-01

    DNA microarray technology can measure the activities of tens of thousands of genes simultaneously, which provides an efficient way to diagnose cancer at the molecular level. Although this strategy has attracted significant research attention, most studies neglect an important problem, namely, that most DNA microarray datasets are skewed, which causes traditional learning algorithms to produce inaccurate results. Some studies have considered this problem, yet they merely focus on the binary-class case. In this paper, we deal with the multiclass imbalanced classification problem, as encountered in cancer DNA microarray data, using ensemble learning. We use a one-against-all coding strategy to transform the multiclass problem into multiple binary ones, and to each binary problem we apply feature subspace, an evolving version of random subspace that generates multiple diverse training subsets. Next, we introduce one of two correction techniques, decision threshold adjustment or random undersampling, into each training subset to alleviate the damage caused by class imbalance. A support vector machine is used as the base classifier, and a novel voting rule called counter voting is presented for making the final decision. Experimental results on eight skewed multiclass cancer microarray datasets indicate that, unlike many traditional classification approaches, our methods are insensitive to class imbalance.
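
    A skeletal sketch of the one-against-all plus random-undersampling part of the pipeline; the feature-subspace generation and the counter-voting rule are not reproduced, and the final decision here simply takes the binary SVM with the highest positive-class probability. Data and class skew are synthetic placeholders.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def undersample(X, y, rng):
        """Randomly drop majority-class rows so both classes of a binary problem are equal-sized."""
        pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
        n = min(len(pos), len(neg))
        keep = np.concatenate([rng.choice(pos, n, replace=False), rng.choice(neg, n, replace=False)])
        return X[keep], y[keep]

    def fit_one_vs_all(X, y, rng):
        """Train one undersampled SVM per class (one-against-all coding)."""
        models = {}
        for c in np.unique(y):
            Xb, yb = undersample(X, (y == c).astype(int), rng)
            models[c] = SVC(probability=True).fit(Xb, yb)
        return models

    def predict(models, x):
        # Pick the class whose binary SVM gives the highest positive-class probability
        # (the paper's counter-voting rule is not reproduced here).
        return max(models, key=lambda c: models[c].predict_proba([x])[0, 1])

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 20))
    y = rng.choice([0, 1, 2], size=300, p=[0.7, 0.2, 0.1])   # skewed class distribution
    models = fit_one_vs_all(X, y, rng)
    print(predict(models, X[0]))
    ```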

  15. How Is Acute Lymphocytic Leukemia Classified?

    MedlinePlus

    ... Adults Early Detection, Diagnosis, and Types How Is Acute Lymphocytic Leukemia Classified? Most types of cancers are assigned numbered ... ALL are now named as follows: B-cell ALL Early pre-B ALL (also called pro-B ...

  16. 14 CFR 1216.310 - Classified actions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... actions. (a) Classification does not relieve NASA of the requirement to assess, document, and consider the environmental impacts of a proposed action. (b) When classified information can reasonably be separated...

  17. Cascaded multiple classifiers for secondary structure prediction.

    PubMed Central

    Ouali, M.; King, R. D.

    2000-01-01

    We describe a new classifier for protein secondary structure prediction that is formed by cascading together different types of classifiers using neural networks and linear discrimination. The new classifier achieves an accuracy of 76.7% (assessed by a rigorous full jack-knife procedure) on a new nonredundant dataset of 496 nonhomologous sequences (obtained from G.J. Barton and J.A. Cuff). This database was especially designed to train and test protein secondary structure prediction methods, and it uses a more stringent definition of homologous sequence than previous studies. We show that it is possible to design classifiers that discriminate well among the three classes (H, E, C), with an accuracy of up to 78% for beta-strands, using only a local window and resampling techniques. This indicates that the importance of long-range interactions for the prediction of beta-strands has probably been overestimated in previous work. PMID:10892809

  18. 32 CFR 1633.1 - Classifying authority.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... reclassify a registrant other than a volunteer for induction, into Class 1-A out of another class prior to... issuing an induction order to a registrant, appropriately classify him if the Secretary of Defense...

  19. Design of partially supervised classifiers for multispectral image data

    NASA Technical Reports Server (NTRS)

    Jeon, Byeungwoo; Landgrebe, David

    1993-01-01

    A partially supervised classification problem is addressed, especially when the class definition and corresponding training samples are provided a priori only for just one particular class. In practical applications of pattern classification techniques, a frequently observed characteristic is the heavy, often nearly impossible requirements on representative prior statistical class characteristics of all classes in a given data set. Considering the effort in both time and man-power required to have a well-defined, exhaustive list of classes with a corresponding representative set of training samples, this 'partially' supervised capability would be very desirable, assuming adequate classifier performance can be obtained. Two different classification algorithms are developed to achieve simplicity in classifier design by reducing the requirement of prior statistical information without sacrificing significant classifying capability. The first one is based on optimal significance testing, where the optimal acceptance probability is estimated directly from the data set. In the second approach, the partially supervised classification is considered as a problem of unsupervised clustering with initially one known cluster or class. A weighted unsupervised clustering procedure is developed to automatically define other classes and estimate their class statistics. The operational simplicity thus realized should make these partially supervised classification schemes very viable tools in pattern classification.

  20. PPCM: Combining Multiple Classifiers to Improve Protein-Protein Interaction Prediction

    PubMed Central

    Yao, Jianzhuang; Guo, Hong; Yang, Xiaohan

    2015-01-01

    Determining protein-protein interaction (PPI) in biological systems is of considerable importance, and prediction of PPI has become a popular research area. Although different classifiers have been developed for PPI prediction, no single classifier seems to be able to predict PPI with high confidence. We postulated that by combining individual classifiers the accuracy of PPI prediction could be improved. We developed a method called protein-protein interaction prediction classifiers merger (PPCM), and this method combines output from two PPI prediction tools, GO2PPI and Phyloprof, using Random Forests algorithm. The performance of PPCM was tested by area under the curve (AUC) using an assembled Gold Standard database that contains both positive and negative PPI pairs. Our AUC test showed that PPCM significantly improved the PPI prediction accuracy over the corresponding individual classifiers. We found that additional classifiers incorporated into PPCM could lead to further improvement in the PPI prediction accuracy. Furthermore, cross species PPCM could achieve competitive and even better prediction accuracy compared to the single species PPCM. This study established a robust pipeline for PPI prediction by integrating multiple classifiers using Random Forests algorithm. This pipeline will be useful for predicting PPI in nonmodel species. PMID:26539460
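
    A skeletal version of the merger step, assuming the two component tools each reduce to a numeric score per protein pair (the real PPCM features and pipeline are not reproduced, and the scores below are synthetic stand-ins for GO2PPI and Phyloprof output): a random forest is trained on the paired scores and evaluated by AUC against the individual scores.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 1000
    y = rng.integers(0, 2, n)                        # gold-standard interacting / non-interacting pairs
    score_a = y * 0.4 + rng.normal(0, 0.5, n)        # stand-in for GO2PPI output
    score_b = y * 0.3 + rng.normal(0, 0.6, n)        # stand-in for Phyloprof output
    X = np.column_stack([score_a, score_b])

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    merger = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

    print("AUC, score A alone:", round(roc_auc_score(y_te, X_te[:, 0]), 3))
    print("AUC, score B alone:", round(roc_auc_score(y_te, X_te[:, 1]), 3))
    print("AUC, merged (PPCM-style):", round(roc_auc_score(y_te, merger.predict_proba(X_te)[:, 1]), 3))
    ```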

  1. PPCM: Combining Multiple Classifiers to Improve Protein-Protein Interaction Prediction

    DOE PAGES

    Yao, Jianzhuang; Guo, Hong; Yang, Xiaohan

    2015-01-01

    Determining protein-protein interaction (PPI) in biological systems is of considerable importance, and prediction of PPI has become a popular research area. Although different classifiers have been developed for PPI prediction, no single classifier seems to be able to predict PPI with high confidence. We postulated that by combining individual classifiers the accuracy of PPI prediction could be improved. We developed a method called protein-protein interaction prediction classifiers merger (PPCM), and this method combines output from two PPI prediction tools, GO2PPI and Phyloprof, using Random Forests algorithm. The performance of PPCM was tested by area under the curve (AUC) using an assembled Gold Standard database that contains both positive and negative PPI pairs. Our AUC test showed that PPCM significantly improved the PPI prediction accuracy over the corresponding individual classifiers. We found that additional classifiers incorporated into PPCM could lead to further improvement in the PPI prediction accuracy. Furthermore, cross species PPCM could achieve competitive and even better prediction accuracy compared to the single species PPCM. This study established a robust pipeline for PPI prediction by integrating multiple classifiers using Random Forests algorithm. This pipeline will be useful for predicting PPI in nonmodel species.

  2. Influence of atmospheric correction on image classification for irrigated agriculture in the Lower Colorado River Basin

    NASA Astrophysics Data System (ADS)

    Wei, X.

    2012-12-01

    Atmospheric correction is essential for accurate quantitative information retrieval from satellite imagery. In this paper, we applied the atmospheric correction algorithm of the Second Simulation of a Satellite Signal in the Solar Spectrum (6S) radiative transfer code to retrieve surface reflectance from Landsat 5 Thematic Mapper (TM) imagery for the Palo Verde Irrigation District (PVID) within the lower Colorado River basin. The 6S code was run with input data of visibility, aerosol optical depth, pressure, temperature, water vapour, and ozone from local measurements. The 6S-corrected image of PVID was classified into the irrigated agricultural classes of alfalfa, cotton, melons, corn, grass, and vegetables. We applied multiple classification methods: maximum likelihood, fuzzy means, and object-oriented classification. Using field crop type data, we conducted an accuracy assessment of the results from the 6S-corrected image and the uncorrected image and found a consistent improvement in classification accuracy for the 6S-corrected image. The study shows that the 6S code is a robust atmospheric correction method, providing a better estimate of surface reflectance and improving image classification accuracy.

  3. Extending the Error Correction Capability of Linear Codes,

    DTIC Science & Technology

    be made to tolerate and correct up to (k-1) bit failures. Thus, if the classical error correction bounds are assumed, a linear transmission code used in digital circuitry is under-utilized. For example, the single-error-correction, double-error-detection Hamming code could be used to correct up to two bit failures with some additional error correction circuitry. A simple algorithm for correcting these extra errors in linear codes is presented. (Author)

  4. Optimal two-stage enrichment design correcting for biomarker misclassification.

    PubMed

    Zang, Yong; Guo, Beibei

    2015-11-26

    The enrichment design is an important clinical trial design to detect the treatment effect of the molecularly targeted agent (MTA) in personalized medicine. Under this design, patients are stratified into marker-positive and marker-negative subgroups based on their biomarker statuses and only the marker-positive patients are enrolled into the trial and randomized to receive either the MTA or a standard treatment. As the biomarker plays a key role in determining the enrollment of the trial, a misclassification of the biomarker can induce substantial bias, undermine the integrity of the trial, and seriously affect the treatment evaluation. In this paper, we propose a two-stage optimal enrichment design that utilizes the surrogate marker to correct for the biomarker misclassification. The proposed design is optimal in the sense that it maximizes the probability of correctly classifying each patient's biomarker status based on the surrogate marker information. In addition, after analytically deriving the bias caused by the biomarker misclassification, we develop a likelihood ratio test based on the EM algorithm to correct for such bias. We conduct comprehensive simulation studies to investigate the operating characteristics of the optimal design and the results confirm the desirable performance of the proposed design.

  5. A CORRECTION.

    PubMed

    Johnson, D

    1940-03-22

    In a recently published volume on "The Origin of Submarine Canyons" the writer inadvertently credited to A. C. Veatch an excerpt from a submarine chart actually contoured by P. A. Smith, of the U. S. Coast and Geodetic Survey. The chart in question is Chart IVB of Special Paper No. 7 of the Geological Society of America entitled "Atlantic Submarine Valleys of the United States and the Congo Submarine Valley," by A. C. Veatch and P. A. Smith, and the excerpt appears as Plate III of the volume first cited above. In view of the heavy labor involved in contouring the charts accompanying the paper by Veatch and Smith and the beauty of the finished product, it would be unfair to Mr. Smith to permit the error to go uncorrected. Excerpts from two other charts are correctly ascribed to Dr. Veatch.

  6. Toward Improving Electrocardiogram (ECG) Biometric Verification using Mobile Sensors: A Two-Stage Classifier Approach.

    PubMed

    Tan, Robin; Perkowski, Marek

    2017-02-20

    Electrocardiogram (ECG) signals sensed from mobile devices have the potential for biometric identity recognition applicable in remote access control systems where enhanced data security is in demand. In this study, we propose a new algorithm that consists of a two-stage classifier combining random forest and a wavelet distance measure through a probabilistic threshold schema, to improve the effectiveness and robustness of a biometric recognition system using ECG data acquired from a biosensor integrated into mobile devices. The proposed algorithm is evaluated using a mixed dataset from 184 subjects under different health conditions. The proposed two-stage classifier achieves a total of 99.52% subject verification accuracy, better than the 98.33% accuracy from random forest alone and 96.31% accuracy from the wavelet distance measure algorithm alone. These results demonstrate the superiority of the proposed algorithm for biometric identification, hence supporting its practicality in areas such as cloud data security, cyber-security or remote healthcare systems.
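
    The probabilistic threshold schema is described only at a high level; one plausible reading, sketched here as an assumption, is that the random forest's confidence decides whether its verdict is accepted directly or handed to a distance-based second stage. A plain Euclidean distance to per-subject templates stands in for the wavelet distance measure, and the data are synthetic.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    class TwoStageVerifier:
        def __init__(self, threshold=0.8):
            self.rf = RandomForestClassifier(n_estimators=200, random_state=0)
            self.threshold = threshold
            self.templates = {}

        def fit(self, X, y):
            self.rf.fit(X, y)
            for c in np.unique(y):
                self.templates[c] = X[y == c].mean(axis=0)    # per-subject template feature vector
            return self

        def predict(self, x):
            proba = self.rf.predict_proba([x])[0]
            if proba.max() >= self.threshold:                 # stage 1: confident forest decision
                return self.rf.classes_[proba.argmax()]
            # stage 2: nearest template (stand-in for the wavelet distance measure)
            return min(self.templates, key=lambda c: np.linalg.norm(x - self.templates[c]))

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(loc=i, size=(50, 60)) for i in range(3)])   # 3 subjects' ECG features
    y = np.repeat([0, 1, 2], 50)
    model = TwoStageVerifier().fit(X, y)
    print(model.predict(X[0]))
    ```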

  7. Toward Improving Electrocardiogram (ECG) Biometric Verification using Mobile Sensors: A Two-Stage Classifier Approach

    PubMed Central

    Tan, Robin; Perkowski, Marek

    2017-01-01

    Electrocardiogram (ECG) signals sensed from mobile devices have the potential for biometric identity recognition applicable in remote access control systems where enhanced data security is in demand. In this study, we propose a new algorithm that consists of a two-stage classifier combining random forest and a wavelet distance measure through a probabilistic threshold schema, to improve the effectiveness and robustness of a biometric recognition system using ECG data acquired from a biosensor integrated into mobile devices. The proposed algorithm is evaluated using a mixed dataset from 184 subjects under different health conditions. The proposed two-stage classifier achieves a total of 99.52% subject verification accuracy, better than the 98.33% accuracy from random forest alone and 96.31% accuracy from the wavelet distance measure algorithm alone. These results demonstrate the superiority of the proposed algorithm for biometric identification, hence supporting its practicality in areas such as cloud data security, cyber-security or remote healthcare systems. PMID:28230745

  8. 77 FR 72199 - Technical Corrections; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-05

    ... COMMISSION 10 CFR Part 171 RIN 3150-AJ16 Technical Corrections; Correction AGENCY: Nuclear Regulatory... corrections, including updating the street address for the Region I office, correcting authority citations and... rule. DATES: The correction is effective on December 5, 2012. FOR FURTHER INFORMATION CONTACT:...

  9. A web-based neurological pain classifier tool utilizing Bayesian decision theory for pain classification in spinal cord injury patients

    NASA Astrophysics Data System (ADS)

    Verma, Sneha K.; Chun, Sophia; Liu, Brent J.

    2014-03-01

    Pain is a common complication after spinal cord injury, with prevalence estimates ranging from 77% to 81%, and it strongly affects a patient's lifestyle and well-being. In the current clinical setting, paper-based forms are used to classify pain; however, the accuracy of diagnosis and optimal management of pain largely depend on the expert reviewer, which in many cases is not possible because there are very few experts in this field. The need for a clinical decision support system that can be used by expert and non-expert clinicians has been cited in the literature, but such a system has not been developed. We have designed and developed a stand-alone tool for correctly classifying pain type in spinal cord injury (SCI) patients, using Bayesian decision theory. Various machine learning simulation methods are used to verify the algorithm on a pilot study data set of 48 patients. The data set consists of paper-based forms collected at the Long Beach VA clinic, with pain classification performed by an expert in the field. Using WEKA as the machine learning tool, we have tested on the 48-patient data set the hypothesis that the attributes collected on the forms and the pain locations marked by patients have a very significant impact on pain type classification. This tool will be integrated with an imaging informatics system to support a clinical study that will test the effectiveness of using Proton Beam radiotherapy for treating spinal cord injury (SCI) related neuropathic pain as an alternative to invasive surgical lesioning.
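
    The tool itself was built with WEKA; purely as a hedged illustration of Bayesian decision theory applied to form attributes, the sketch below fits a Gaussian naive Bayes model to hypothetical form features and reads off the posterior used for the decision. The attribute columns, pain-type labels, and toy values are invented for the example.

      # Minimal sketch of Bayesian pain-type classification from form attributes.
      import numpy as np
      from sklearn.naive_bayes import GaussianNB

      # hypothetical columns: pain intensity, number of marked locations, burning-quality flag
      X = np.array([[7, 3, 1], [2, 1, 0], [8, 4, 1],
                    [3, 1, 0], [6, 2, 1], [1, 1, 0]], dtype=float)
      y = np.array(["neuropathic", "nociceptive", "neuropathic",
                    "nociceptive", "neuropathic", "nociceptive"])

      model = GaussianNB().fit(X, y)
      print(model.predict([[5, 2, 1]]))        # classify a new patient record
      print(model.predict_proba([[5, 2, 1]]))  # posterior probabilities behind the decision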

  10. A Modified Decision Tree Algorithm Based on Genetic Algorithm for Mobile User Classification Problem

    PubMed Central

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, we should first classify the mobile users. To address the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as classification attributes for the mobile user, classifying the context into public and private context classes. We then analyze the processes and operators of the algorithm. Finally, we run an experiment on mobile user data with the algorithm; we can classify mobile users into Basic service, E-service, Plus service, and Total service user classes, and we can also derive some rules about the mobile users. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper is more accurate and simpler. PMID:24688389
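
    As a hedged sketch of coupling a genetic algorithm with a decision tree, the code below evolves a binary feature-subset mask whose fitness is the cross-validated accuracy of a decision tree trained on the selected columns. The GA settings, fitness function, and synthetic data are assumptions; the paper itself applies the genetic algorithm to refine the tree's classification results rather than its inputs.

      # Minimal sketch of a GA-refined decision tree via feature-subset evolution.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(0)
      X, y = make_classification(n_samples=300, n_features=12, n_informative=5, random_state=0)

      def fitness(mask):
          if not mask.any():
              return 0.0
          tree = DecisionTreeClassifier(random_state=0)
          return cross_val_score(tree, X[:, mask], y, cv=3).mean()

      pop = rng.integers(0, 2, size=(20, X.shape[1])).astype(bool)
      for _ in range(15):                                   # generations
          scores = np.array([fitness(ind) for ind in pop])
          parents = pop[np.argsort(scores)[-10:]]           # truncation selection
          children = []
          for _ in range(len(pop)):
              a, b = parents[rng.integers(10)], parents[rng.integers(10)]
              cut = rng.integers(1, X.shape[1])
              child = np.concatenate([a[:cut], b[cut:]])    # one-point crossover
              flip = rng.random(X.shape[1]) < 0.05          # bit-flip mutation
              children.append(np.where(flip, ~child, child))
          pop = np.array(children)

      best = pop[np.argmax([fitness(ind) for ind in pop])]
      print("selected features:", np.flatnonzero(best), "accuracy:", fitness(best))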

  11. A modified decision tree algorithm based on genetic algorithm for mobile user classification problem.

    PubMed

    Liu, Dong-sheng; Fan, Shu-jiang

    2014-01-01

    In order to offer mobile customers better service, we should first classify the mobile users. To address the limitations of previous classification methods, this paper puts forward a modified decision tree algorithm for mobile user classification, which introduces a genetic algorithm to optimize the results of the decision tree algorithm. We also take context information as classification attributes for the mobile user, classifying the context into public and private context classes. We then analyze the processes and operators of the algorithm. Finally, we run an experiment on mobile user data with the algorithm; we can classify mobile users into Basic service, E-service, Plus service, and Total service user classes, and we can also derive some rules about the mobile users. Compared to the C4.5 decision tree algorithm and the SVM algorithm, the algorithm proposed in this paper is more accurate and simpler.

  12. Pairwise Classifier Ensemble with Adaptive Sub-Classifiers for fMRI Pattern Analysis.

    PubMed

    Kim, Eunwoo; Park, HyunWook

    2017-02-01

    The multi-voxel pattern analysis technique is applied to fMRI data for classification of high-level brain functions using pattern information distributed over multiple voxels. In this paper, we propose a classifier ensemble for multiclass classification in fMRI analysis, exploiting the fact that specific neighboring voxels can contain spatial pattern information. The proposed method converts the multiclass classification to a pairwise classifier ensemble, and each pairwise classifier consists of multiple sub-classifiers using an adaptive feature set for each class-pair. Simulated and real fMRI data were used to verify the proposed method. Intra- and inter-subject analyses were performed to compare the proposed method with several well-known classifiers, including single and ensemble classifiers. The comparison results showed that the proposed method can be generally applied to multiclass classification in both simulations and real fMRI analyses.
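
    A minimal sketch of the pairwise decomposition follows: the multiclass problem is split into one classifier per class-pair, and each pairwise classifier selects its own feature subset before fitting. The univariate selector, linear SVM base learner, and iris data are stand-ins for the paper's adaptive sub-classifiers over neighbouring voxels.

      # Minimal sketch of a pairwise (one-vs-one) ensemble with per-pair feature selection.
      from sklearn.datasets import load_iris
      from sklearn.feature_selection import SelectKBest, f_classif
      from sklearn.model_selection import train_test_split
      from sklearn.multiclass import OneVsOneClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC

      X, y = load_iris(return_X_y=True)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      # OneVsOneClassifier clones the pipeline per class-pair, so each pairwise
      # classifier fits its own SelectKBest mask on that pair's training samples.
      pairwise = OneVsOneClassifier(make_pipeline(SelectKBest(f_classif, k=2),
                                                  SVC(kernel="linear")))
      print(pairwise.fit(X_tr, y_tr).score(X_te, y_te))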

  13. Role of classifiers in multimedia content management

    NASA Astrophysics Data System (ADS)

    Naphade, Milind R.; Smith, John R.

    2003-01-01

    Enabling semantic detection and indexing is an important task in multimedia content management. Learning and classification techniques are increasingly relevant to state-of-the-art content management systems. From relevance feedback to semantic detection, there is a shift in the amount of supervision that precedes retrieval, from lightweight classifiers to heavyweight classifiers. In this paper we compare the performance of some popular classifiers for semantic video indexing. We mainly compare, among other techniques, one technique for generative modeling and one for discriminant learning, and show how they behave depending on the number of examples that the user is willing to provide to the system. We report results using the NIST TREC Video Corpus.

  14. Reinforcement learning based artificial immune classifier.

    PubMed

    Karakose, Mehmet

    2013-01-01

    One of the widely used methods for classification, which is a decision-making process, is artificial immune systems. Artificial immune systems, based on the natural immune system, can be successfully applied to classification, optimization, recognition, and learning in real-world problems. In this study, a reinforcement learning based artificial immune classifier is proposed as a new approach. This approach uses reinforcement learning to find better antibodies with immune operators. The proposed approach offers several advantages over other methods in the literature, such as effectiveness, fewer memory cells, high accuracy, speed, and data adaptability. The performance of the proposed approach is demonstrated by simulation and experimental results using real data in Matlab and on an FPGA. Some benchmark data and remote image data are used for the experimental results. Comparative results with a supervised/unsupervised artificial immune system, a negative selection classifier, and a resource limited artificial immune classifier are given to demonstrate the effectiveness of the proposed method.

  15. Reinforcement Learning Based Artificial Immune Classifier

    PubMed Central

    Karakose, Mehmet

    2013-01-01

    One of the widely used methods for classification, which is a decision-making process, is artificial immune systems. Artificial immune systems, based on the natural immune system, can be successfully applied to classification, optimization, recognition, and learning in real-world problems. In this study, a reinforcement learning based artificial immune classifier is proposed as a new approach. This approach uses reinforcement learning to find better antibodies with immune operators. The proposed approach offers several advantages over other methods in the literature, such as effectiveness, fewer memory cells, high accuracy, speed, and data adaptability. The performance of the proposed approach is demonstrated by simulation and experimental results using real data in Matlab and on an FPGA. Some benchmark data and remote image data are used for the experimental results. Comparative results with a supervised/unsupervised artificial immune system, a negative selection classifier, and a resource limited artificial immune classifier are given to demonstrate the effectiveness of the proposed method. PMID:23935424

  16. Renal Fibrosis mRNA Classifier: Validation in Experimental Lithium-Induced Interstitial Fibrosis in the Rat Kidney

    PubMed Central

    Marti, Hans-Peter; Leader, John; Leader, Catherine; Bedford, Jennifer

    2016-01-01

    Accurate diagnosis of fibrosis is of paramount clinical importance. A human fibrosis classifier based on metzincins and related genes (MARGS) was described previously. In this investigation, expression changes of MARGS genes were explored and evaluated to examine whether the MARGS-based algorithm has any diagnostic value in a rat model of lithium nephropathy. Male Wistar rats (n = 12) were divided into 2 groups (n = 6). One group was given a diet containing lithium (40 mmol/kg food for 7 days, followed by 60mmol/kg food for the rest of the experimental period), while a control group (n = 6) was fed a normal diet. After six months, animals were sacrificed and the renal cortex and medulla of both kidneys removed for analysis. Gene expression changes were analysed using 24 GeneChip® Affymetrix Rat Exon 1.0 ST arrays. Statistically relevant genes (p-value<0.05, fold change>1.5, t-test) were further examined. Matrix metalloproteinase-2 (MMP2), CD44, and nephroblastoma overexpressed gene (NOV) were overexpressed in the medulla and cortex of lithium-fed rats compared to the control group. TGFβ2 was overrepresented in the cortex of lithium-fed animals 1.5-fold, and 1.3-fold in the medulla of the same animals. In Gene Set Enrichment Analysis (GSEA), both the medulla and cortex of lithium-fed animals showed an enrichment of the MARGS, TGFβ network, and extracellular matrix (ECM) gene sets, while the cortex expression signature was enriched in additional fibrosis-related-genes and the medulla was also enriched in immune response pathways. Importantly, the MARGS-based fibrosis classifier was able to classify all samples correctly. Immunohistochemistry and qPCR confirmed the up-regulation of NOV, CD44, and TGFβ2. The MARGS classifier represents a cross-organ and cross-species classifier of fibrotic conditions and may help to design a test to diagnose and to monitor fibrosis. The results also provide evidence for a common pathway in the pathogenesis of fibrosis. PMID

  17. A survey of decision tree classifier methodology

    NASA Technical Reports Server (NTRS)

    Safavian, S. Rasoul; Landgrebe, David

    1990-01-01

    Decision Tree Classifiers (DTC's) are used successfully in many diverse areas such as radar signal classification, character recognition, remote sensing, medical diagnosis, expert systems, and speech recognition. Perhaps, the most important feature of DTC's is their capability to break down a complex decision-making process into a collection of simpler decisions, thus providing a solution which is often easier to interpret. A survey of current methods is presented for DTC designs and the various existing issues. After considering potential advantages of DTC's over single stage classifiers, subjects of tree structure design, feature selection at each internal node, and decision and search strategies are discussed.

  18. A survey of decision tree classifier methodology

    NASA Technical Reports Server (NTRS)

    Safavian, S. R.; Landgrebe, David

    1991-01-01

    Decision tree classifiers (DTCs) are used successfully in many diverse areas such as radar signal classification, character recognition, remote sensing, medical diagnosis, expert systems, and speech recognition. Perhaps the most important feature of DTCs is their capability to break down a complex decision-making process into a collection of simpler decisions, thus providing a solution which is often easier to interpret. A survey of current methods is presented for DTC designs and the various existing issues. After considering potential advantages of DTCs over single-stage classifiers, subjects of tree structure design, feature selection at each internal node, and decision and search strategies are discussed.

  19. Use of robust estimators in parametric classifiers

    NASA Technical Reports Server (NTRS)

    Safavian, S. Rasoul; Landgrebe, David A.

    1989-01-01

    The parametric approach to density estimation and classifier design is a well studied subject. The parametric approach is desirable because basically it reduces the problem of classifier design to that of estimating a few parameters for each of the pattern classes. The class parameters are usually estimated using maximum-likelihood (ML) estimators. ML estimators are, however, very sensitive to the presence of outliers. Several robust estimators of mean and covariance matrix and their effect on the probability of error in classification are examined. Comments are made about alpha-ranked (alpha-trimmed) estimators.
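
    The contrast between the maximum-likelihood estimator and a trimmed alternative can be illustrated with a short sketch: under contamination, the sample mean drifts toward the outliers while an alpha-trimmed mean stays close to the bulk of the data. The 10% contamination level and trimming fraction are illustrative assumptions.

      # Minimal sketch: ML (sample) mean vs. alpha-trimmed mean under contamination.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      clean = rng.normal(loc=0.0, scale=1.0, size=900)
      outliers = rng.normal(loc=8.0, scale=1.0, size=100)   # 10% contamination
      sample = np.concatenate([clean, outliers])

      print("ML (sample) mean:   ", sample.mean())                  # pulled toward the outliers
      print("alpha-trimmed mean: ", stats.trim_mean(sample, 0.1))   # trims 10% from each tail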

  20. Pulmonary nodule detection using a cascaded SVM classifier

    NASA Astrophysics Data System (ADS)

    Bergtholdt, Martin; Wiemker, Rafael; Klinder, Tobias

    2016-03-01

    Automatic detection of lung nodules from chest CT has been researched intensively over the last decades, resulting in several commercial products. However, solutions are adopted only slowly into daily clinical routine as many current CAD systems still potentially miss true nodules while at the same time generating too many false positives (FP). While many earlier approaches had to rely on rather few cases for development, larger databases are now becoming available and can be used for algorithmic development. In this paper, we address the problem of lung nodule detection via a cascaded SVM classifier. The idea is to sequentially perform two classification tasks in order to select from an extremely large pool of potential candidates the few most likely ones. As the initial pool is allowed to contain thousands of candidates, very loose criteria could be applied during this pre-selection. In this way, the chances that a true nodule is falsely rejected as a candidate are reduced significantly. The final algorithm is trained and tested on the full LIDC/IDRI database. Comparison is done against two previously published CAD systems. Overall, the algorithm achieved a sensitivity of 0.859 at 2.5 FP/volume, where the other two achieved sensitivity values of 0.321 and 0.625, respectively. On low dose data sets, only a slight increase in the number of FP/volume was observed, while the sensitivity was not affected.
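
    The cascade idea can be sketched as two SVM stages: the first applies a deliberately loose decision threshold so that very few true positives are rejected, and the second re-scores only the surviving candidates. The thresholds, synthetic candidate features, and class imbalance below are assumptions, not the configuration used on the LIDC/IDRI data.

      # Minimal sketch of a cascaded (two-stage) SVM detector.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=2000, n_features=20, weights=[0.95, 0.05], random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

      stage1 = SVC(kernel="linear", class_weight="balanced").fit(X_tr, y_tr)
      stage2 = SVC(kernel="rbf", class_weight="balanced").fit(X_tr, y_tr)

      keep = stage1.decision_function(X_te) > -1.0       # loose pre-selection of candidates
      pred = np.zeros_like(y_te)
      pred[keep] = (stage2.decision_function(X_te[keep]) > 0.0).astype(int)

      tp = np.sum((pred == 1) & (y_te == 1))
      fp = np.sum((pred == 1) & (y_te == 0))
      print(f"sensitivity={tp / max(y_te.sum(), 1):.2f}, false positives={fp}")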

  1. Spectral classifier design with ensemble classifiers and misclassification-rejection: application to elastic-scattering spectroscopy for detection of colonic neoplasia

    NASA Astrophysics Data System (ADS)

    Rodriguez-Diaz, Eladio; Castanon, David A.; Singh, Satish K.; Bigio, Irving J.

    2011-06-01

    Optical spectroscopy has shown potential as a real-time, in vivo, diagnostic tool for identifying neoplasia during endoscopy. We present the development of a diagnostic algorithm to classify elastic-scattering spectroscopy (ESS) spectra as either neoplastic or non-neoplastic. The algorithm is based on pattern recognition methods, including ensemble classifiers, in which members of the ensemble are trained on different regions of the ESS spectrum, and misclassification-rejection, where the algorithm identifies and refrains from classifying samples that are at higher risk of being misclassified. These "rejected" samples can be reexamined by simply repositioning the probe to obtain additional optical readings or ultimately by sending the polyp for histopathological assessment, as per standard practice. Prospective validation using separate training and testing sets results in a baseline performance of sensitivity = 0.83, specificity = 0.79, using the standard framework of feature extraction (principal component analysis) followed by classification (with linear support vector machines). With the developed algorithm, performance improves to Se ~ 0.90, Sp ~ 0.90, at a cost of rejecting 20-33% of the samples. These results are on par with a panel of expert pathologists. For colonoscopic prevention of colorectal cancer, our system could reduce biopsy risk and cost, obviate retrieval of non-neoplastic polyps, decrease procedure time, and improve assessment of cancer risk.
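
    A hedged sketch of the feature-extraction / classification / rejection chain is given below: PCA followed by a linear SVM, with samples falling inside a band around the decision boundary withheld from classification. The width of the rejection band and the synthetic data standing in for ESS spectra are assumptions.

      # Minimal sketch of classification with misclassification-rejection.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.decomposition import PCA
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=600, n_features=40, n_informative=8, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      clf = make_pipeline(PCA(n_components=8), SVC(kernel="linear")).fit(X_tr, y_tr)

      margin = clf.decision_function(X_te)
      accept = np.abs(margin) > 0.5                     # reject samples close to the boundary
      accuracy = (clf.predict(X_te)[accept] == y_te[accept]).mean()
      print(f"accepted {accept.mean():.0%} of samples, accuracy on accepted = {accuracy:.2f}")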

  2. Feature selection and classifier performance on diverse biological datasets

    PubMed Central

    2014-01-01

    Background There is an ever-expanding range of technologies that generate very large numbers of biomarkers for research and clinical applications. Choosing the most informative biomarkers from a high-dimensional data set, combined with identifying the most reliable and accurate classification algorithms to use with that biomarker set, can be a daunting task. Existing surveys of feature selection and classification algorithms typically focus on a single data type, such as gene expression microarrays, and rarely explore the model's performance across multiple biological data types. Results This paper presents the results of a large scale empirical study whereby a large number of popular feature selection and classification algorithms are used to identify the tissue of origin for the NCI-60 cancer cell lines. A computational pipeline was implemented to maximize predictive accuracy of all models at all parameters on five different data types available for the NCI-60 cell lines. A validation experiment was conducted using external data in order to demonstrate robustness. Conclusions As expected, the data type and number of biomarkers have a significant effect on the performance of the predictive models. Although no model or data type uniformly outperforms the others across the entire range of tested numbers of markers, several clear trends are visible. At low numbers of biomarkers gene and protein expression data types are able to differentiate between cancer cell lines significantly better than the other three data types, namely SNP, array comparative genome hybridization (aCGH), and microRNA data. Interestingly, as the number of selected biomarkers increases best performing classifiers based on SNP data match or slightly outperform those based on gene and protein expression, while those based on aCGH and microRNA data continue to perform the worst. It is observed that one class of feature selection and classifier are consistently top performers across data types and

  3. A Hybrid Classifier for Automated Radiologic Diagnosis: Preliminary Results and Clinical Applications

    PubMed Central

    Herskovits, Edward

    1989-01-01

    We describe the design, implementation, and preliminary evaluation of a computer system to aid clinicians in the interpretation of cranial magnetic-resonance (MR) images. The system classifies normal and pathologic tissues in a test set of MR scans with high accuracy. It also provides a simple, rapid means whereby an unassisted expert may reliably label an image with his best judgment of its histologic composition, yielding a gold-standard image; this step facilitates objective evaluation of classifier performance. This system consists of a preprocessing module; a semiautomatic, reliable procedure for obtaining objective estimates of an expert's opinion of an image's tissue composition; a classification module based on a combination of the maximum-likelihood (ML) classifier and the ISODATA unsupervised-clustering algorithm; and an evaluation module based on confusion-matrix generation. The algorithms for classifier evaluation and gold-standard acquisition are advances over previous methods. Furthermore, the combination of a clustering algorithm and a statistical classifier provides advantages not found in systems using either method alone.

  4. Performance evaluation of various classifiers for color prediction of rice paddy plant leaf

    NASA Astrophysics Data System (ADS)

    Singh, Amandeep; Singh, Maninder Lal

    2016-11-01

    The food industry is one of the industries that use machine vision for nondestructive quality evaluation of produce. These quality-measuring systems and software are precalculated on the basis of various image-processing algorithms, which generally use a particular type of classifier. These classifiers play a vital role in making the algorithms intelligent enough to contribute their best to the said quality evaluations by translating human perception into machine vision and, hence, machine learning. The crop of interest is rice, and the color of this crop indicates the health status of the plant. An enormous number of classifiers are available for color prediction, but choosing the best among them is the focus of this paper. The performance of a total of 60 classifiers has been analyzed from the application point of view, and the results are discussed. The motivation comes from the idea of providing a set of classifiers with excellent performance and implementing them in a single algorithm for the improvement of machine vision learning and, hence, associated applications.

  5. The classifier problem in Chinese aphasia.

    PubMed

    Tzeng, O J; Chen, S; Hung, D L

    1991-08-01

    In recent years, research on the relationship between brain organization and language processing has benefited tremendously from cross-linguistic comparisons of language disorders among different types of aphasic patients. Results from these cross-linguistic studies have shown that the same aphasic syndromes often look very different from one language to another, suggesting that language-specific knowledge is largely preserved in Broca's and Wernicke's aphasics. In this paper, Chinese aphasic patients were examined with respect to their (in)ability to use classifiers in a noun phrase. The Chinese language, in addition to its lack of verb conjugation and an absence of noun declension, is exceptional in yet another respect: articles, numerals, and other such modifiers cannot directly precede their associated nouns; there has to be an intervening morpheme called a classifier. The appropriate usage of nominal classifiers is considered to be one of the most difficult aspects of Chinese grammar. Our examination of Chinese aphasic patients revealed two essential points. First, Chinese aphasic patients experience difficulty in the production of nominal classifiers, committing a significant number of errors of omission and/or substitution. Second, two different kinds of substitution errors are observed in Broca's and Wernicke's patients, and the detailed analysis of the difference demands a rethinking of the distinction between agrammatism and paragrammatism. The result adds to a growing body of evidence suggesting that grammar is impaired in fluent as well as nonfluent aphasia.

  6. Large margin classifier-based ensemble tracking

    NASA Astrophysics Data System (ADS)

    Wang, Yuru; Liu, Qiaoyuan; Yin, Minghao; Wang, ShengSheng

    2016-07-01

    In recent years, many studies have considered visual tracking as a two-class classification problem. The key problem is to construct a classifier with sufficient accuracy in distinguishing the target from its background and sufficient generalization ability in handling new frames. However, variable tracking conditions challenge the existing methods. The difficulty mainly comes from the confused boundary between the foreground and background. This paper handles this difficulty by generalizing the classifier's learning step. By introducing the distribution data of samples, the classifier learns more essential characteristics in discriminating the two classes. Specifically, the samples are represented in a multiscale visual model. For features with different scales, several large margin distribution machines (LDMs) with adaptive kernels are combined in a Bayesian way as a strong classifier, where, in order to improve the accuracy and generalization ability, not only the margin distance but also the sample distribution is optimized in the learning step. Comprehensive experiments are performed on several challenging video sequences; through parameter analysis and field comparison, the proposed LDM-combined ensemble tracker is demonstrated to perform with sufficient accuracy and generalization ability in handling various typical tracking difficulties.

  7. Performance of a 20-target MSE classifier

    NASA Astrophysics Data System (ADS)

    Novak, Leslie M.; Owirka, Gregory J.; Brower, William S.

    1998-08-01

    MIT Lincoln Laboratory is responsible for developing the ATR system for the DARPA/DARO/NIMA/OSD-sponsored SAIP program; the baseline ATR system recognizes 10 GOB targets; the enhanced version of SAIP requires the ATR system to recognize 20 GOB targets. This paper compares ATR performance results for 10- and 20-target MSE classifiers using high-resolution SAR imagery.

  8. Classifying and quantifying basins of attraction

    SciTech Connect

    Sprott, J. C.; Xiong, Anda

    2015-08-15

    A scheme is proposed to classify the basins for attractors of dynamical systems in arbitrary dimensions. There are four basic classes depending on their size and extent, and each class can be further quantified to facilitate comparisons. The calculation uses a Monte Carlo method and is applied to numerous common dissipative chaotic maps and flows in various dimensions.

  9. Performance Evaluation of a Semantic Perception Classifier

    DTIC Science & Technology

    2013-09-01

    Performance Evaluation of a Semantic Perception Classifier, by Craig Lennon, Barry Bodt, Marshal Childers, Rick Camden and Nicoleta Florea (Engility Corporation), and Luis Navarro-Serment and Arne Suppe (Carnegie Mellon University).

  10. 32 CFR 148.2 - Classified programs.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 1 2010-07-01 2010-07-01 false Classified programs. 148.2 Section 148.2 National Defense Department of Defense OFFICE OF THE SECRETARY OF DEFENSE PERSONNEL, MILITARY AND CIVILIAN NATIONAL POLICY AND IMPLEMENTATION OF RECIPROCITY OF FACILITIES National Policy on Reciprocity of Use...

  11. 32 CFR 148.2 - Classified programs.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 32 National Defense 1 2011-07-01 2011-07-01 false Classified programs. 148.2 Section 148.2 National Defense Department of Defense OFFICE OF THE SECRETARY OF DEFENSE PERSONNEL, MILITARY AND CIVILIAN NATIONAL POLICY AND IMPLEMENTATION OF RECIPROCITY OF FACILITIES National Policy on Reciprocity of Use...

  12. Shape and Function in Hmong Classifier Choices

    ERIC Educational Resources Information Center

    Sakuragi, Toshiyuki; Fuller, Judith W.

    2013-01-01

    This study examined classifiers in the Hmong language with a particular focus on gaining insights into the underlying cognitive process of categorization. Forty-three Hmong speakers participated in three experiments. In the first experiment, designed to verify the previously postulated configurational (saliently one-dimensional, saliently…

  13. 5 CFR 1312.4 - Classified designations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., DOWNGRADING, DECLASSIFICATION AND SAFEGUARDING OF NATIONAL SECURITY INFORMATION Classification and Declassification of National Security Information § 1312.4 Classified designations. (a) Except as provided by the Atomic Energy Act of 1954, as amended, (42 U.S.C. 2011) or the National Security Act of 1947, as...

  14. 5 CFR 1312.4 - Classified designations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., DOWNGRADING, DECLASSIFICATION AND SAFEGUARDING OF NATIONAL SECURITY INFORMATION Classification and Declassification of National Security Information § 1312.4 Classified designations. (a) Except as provided by the Atomic Energy Act of 1954, as amended, (42 U.S.C. 2011) or the National Security Act of 1947, as...

  15. 5 CFR 1312.4 - Classified designations.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., DOWNGRADING, DECLASSIFICATION AND SAFEGUARDING OF NATIONAL SECURITY INFORMATION Classification and Declassification of National Security Information § 1312.4 Classified designations. (a) Except as provided by the Atomic Energy Act of 1954, as amended, (42 U.S.C. 2011) or the National Security Act of 1947, as...

  16. 5 CFR 1312.4 - Classified designations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ..., DOWNGRADING, DECLASSIFICATION AND SAFEGUARDING OF NATIONAL SECURITY INFORMATION Classification and Declassification of National Security Information § 1312.4 Classified designations. (a) Except as provided by the Atomic Energy Act of 1954, as amended, (42 U.S.C. 2011) or the National Security Act of 1947, as...

  17. 32 CFR 651.13 - Classified actions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... ENVIRONMENTAL ANALYSIS OF ARMY ACTIONS (AR 200-2) National Environmental Policy Act and the Decision Process..., AR 380-5 (Department of the Army Information Security Program) will be followed. (b) Classification... makers in accordance with AR 380-5. (d) When classified information is such an integral part of...

  18. 32 CFR 651.13 - Classified actions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... ENVIRONMENTAL ANALYSIS OF ARMY ACTIONS (AR 200-2) National Environmental Policy Act and the Decision Process..., AR 380-5 (Department of the Army Information Security Program) will be followed. (b) Classification... makers in accordance with AR 380-5. (d) When classified information is such an integral part of...

  19. 32 CFR 651.13 - Classified actions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... ENVIRONMENTAL ANALYSIS OF ARMY ACTIONS (AR 200-2) National Environmental Policy Act and the Decision Process..., AR 380-5 (Department of the Army Information Security Program) will be followed. (b) Classification... makers in accordance with AR 380-5. (d) When classified information is such an integral part of...

  20. 32 CFR 651.13 - Classified actions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... ENVIRONMENTAL ANALYSIS OF ARMY ACTIONS (AR 200-2) National Environmental Policy Act and the Decision Process..., AR 380-5 (Department of the Army Information Security Program) will be followed. (b) Classification... makers in accordance with AR 380-5. (d) When classified information is such an integral part of...

  1. 32 CFR 651.13 - Classified actions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... ENVIRONMENTAL ANALYSIS OF ARMY ACTIONS (AR 200-2) National Environmental Policy Act and the Decision Process..., AR 380-5 (Department of the Army Information Security Program) will be followed. (b) Classification... makers in accordance with AR 380-5. (d) When classified information is such an integral part of...

  2. MScanner: a classifier for retrieving Medline citations

    PubMed Central

    Poulter, Graham L; Rubin, Daniel L; Altman, Russ B; Seoighe, Cathal

    2008-01-01

    Background Keyword searching through PubMed and other systems is the standard means of retrieving information from Medline. However, ad-hoc retrieval systems do not meet all of the needs of databases that curate information from literature, or of text miners developing a corpus on a topic that has many terms indicative of relevance. Several databases have developed supervised learning methods that operate on a filtered subset of Medline, to classify Medline records so that fewer articles have to be manually reviewed for relevance. A few studies have considered generalisation of Medline classification to operate on the entire Medline database in a non-domain-specific manner, but existing applications lack speed, available implementations, or a means to measure performance in new domains. Results MScanner is an implementation of a Bayesian classifier that provides a simple web interface for submitting a corpus of relevant training examples in the form of PubMed IDs and returning results ranked by decreasing probability of relevance. For maximum speed it uses the Medical Subject Headings (MeSH) and journal of publication as a concise document representation, and takes roughly 90 seconds to return results against the 16 million records in Medline. The web interface provides interactive exploration of the results, and cross validated performance evaluation on the relevant input against a random subset of Medline. We describe the classifier implementation, cross validate it on three domain-specific topics, and compare its performance to that of an expert PubMed query for a complex topic. In cross validation on the three sample topics against 100,000 random articles, the classifier achieved excellent separation of relevant and irrelevant article score distributions, ROC areas between 0.97 and 0.99, and averaged precision between 0.69 and 0.92. Conclusion MScanner is an effective non-domain-specific classifier that operates on the entire Medline database, and is suited to

  3. Comparison of wheat classification accuracy using different classifiers of the image-100 system

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Chen, S. C.; Moreira, M. A.; Delima, A. M.

    1981-01-01

    Classification results using single-cell and multi-cell signature acquisition options, a point-by-point Gaussian maximum-likelihood classifier, and K-means clustering of the Image-100 system are presented. Conclusions reached are that: a better indication of correct classification can be provided by using a test area which contains various cover types of the study area; classification accuracy should be evaluated considering both the percentages of correct classification and error of commission; supervised classification approaches are better than K-means clustering; Gaussian distribution maximum likelihood classifier is better than Single-cell and Multi-cell Signature Acquisition Options of the Image-100 system; and in order to obtain a high classification accuracy in a large and heterogeneous crop area, using Gaussian maximum-likelihood classifier, homogeneous spectral subclasses of the study crop should be created to derive training statistics.
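
    For reference, a per-class Gaussian maximum-likelihood classifier of the kind compared in this study can be sketched in a few lines: each class is modelled by its own mean vector and covariance matrix, and a pixel is assigned to the class whose Gaussian gives it the highest likelihood. The two-band synthetic "pixels" are an assumption for illustration.

      # Minimal sketch of a Gaussian maximum-likelihood classifier.
      import numpy as np
      from scipy.stats import multivariate_normal

      rng = np.random.default_rng(0)
      wheat = rng.multivariate_normal([0.35, 0.60], 0.002 * np.eye(2), size=200)
      other = rng.multivariate_normal([0.25, 0.40], 0.002 * np.eye(2), size=200)
      X = np.vstack([wheat, other])
      y = np.array([0] * 200 + [1] * 200)

      # one Gaussian model (mean vector + covariance matrix) per class
      models = [multivariate_normal(X[y == c].mean(axis=0), np.cov(X[y == c].T)) for c in (0, 1)]

      def classify(pixel):
          return int(np.argmax([m.pdf(pixel) for m in models]))   # maximum likelihood decision

      print(classify([0.33, 0.58]))   # expected: 0 (wheat)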

  4. A multiple classifier system based on Ant-Colony Optimization for Hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Tang, Ke; Xie, Li; Li, Guangyao

    2017-01-01

    Hyperspectral images, which hold a large quantity of land information, enable image classification. Traditional classification methods usually work on multispectral images. However, the high dimensionality of the feature space affects the accuracy of these classification algorithms, such as statistical classifiers or decision trees. This paper proposes a multiple classifier system (MCS) based on the ant colony optimization (ACO) algorithm to improve classification ability. The ACO method has been applied to multispectral images in previous research, but seldom to hyperspectral images. In order to overcome the limitation of the ACO method in dealing with high dimensionality, an MCS is introduced to combine the outputs of each single ACO classifier based on the credibility of its rules. Mutual information is applied to discretize features from the data set and provides the criterion for the band selection and band grouping algorithms. The performance of the proposed method is validated on the ROSIS Pavia data set and compared to the k-nearest neighbour (KNN) algorithm. Experimental results show that the proposed method is feasible for classifying hyperspectral images.

  5. Supervised segmentation of MRI brain images using combination of multiple classifiers.

    PubMed

    Ahmadvand, Ali; Sharififar, Mohammad; Daliri, Mohammad Reza

    2015-06-01

    Segmentation of different tissues is one of the initial and most critical tasks in many aspects of medical image processing. Manual segmentation of brain images obtained from magnetic resonance imaging is time consuming, so automatic image segmentation is widely used in this area. Ensemble based algorithms are very reliable and generalized methods for classification. In this paper, a supervised method named dynamic classifier selection-dynamic local training local Tanimoto index, which is a member of the combination of multiple classifiers (CMC) family of methods, is proposed. The proposed method uses dynamic local training sets instead of a fully static one, and it adapts the classifier ranking criterion for brain tissue classification. A selection policy for combining the different decisions is implemented, and the K-nearest neighbor algorithm is used to find the best local classifier. Experimental results show that the proposed method classifies the real data sets of the Internet Brain Segmentation Repository better than all single classifiers in the ensemble and produces a significant improvement over other CMC methods.
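
    A simplified sketch of dynamic classifier selection follows: for each test sample, a K-nearest-neighbour search finds its neighbourhood in the training set, the local accuracy of each ensemble member is computed there, and the locally best classifier makes the prediction. The base learners, neighbourhood size, and toy data are assumptions rather than the paper's exact local Tanimoto-index criterion.

      # Minimal sketch of dynamic classifier selection via local accuracy.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import train_test_split
      from sklearn.neighbors import NearestNeighbors
      from sklearn.svm import SVC
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.naive_bayes import GaussianNB

      X, y = make_classification(n_samples=500, n_features=10, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      ensemble = [SVC().fit(X_tr, y_tr),
                  DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr),
                  GaussianNB().fit(X_tr, y_tr)]
      neighbours = NearestNeighbors(n_neighbors=7).fit(X_tr)

      preds = []
      for x in X_te:
          idx = neighbours.kneighbors(x.reshape(1, -1), return_distance=False)[0]
          local_acc = [clf.score(X_tr[idx], y_tr[idx]) for clf in ensemble]   # rank classifiers locally
          preds.append(ensemble[int(np.argmax(local_acc))].predict(x.reshape(1, -1))[0])

      print("DCS accuracy:", np.mean(np.array(preds) == y_te))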

  6. Bayes Error Rate Estimation Using Classifier Ensembles

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep

    2003-01-01

    The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error, in general, yield rather weak results for small sample sizes, unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that just looks at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two problems from the Problem benchmarks. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.

  7. Evolving Coevolutionary Classifiers Under Large Attribute Spaces

    NASA Astrophysics Data System (ADS)

    Doucette, John; Lichodzijewski, Peter; Heywood, Malcolm

    Model-building under the supervised learning domain potentially faces a dual learning problem of identifying both the parameters of the model and the subset of (domain) attributes necessary to support the model, thus using an embedded as opposed to wrapper or filter based design. Genetic Programming (GP) has always addressed this dual problem; however, further implicit assumptions are made which potentially increase the complexity of the resulting solutions. In this work we are specifically interested in the case of classification under very large attribute spaces. As such it might be expected that multiple independent/overlapping attribute subspaces support the mapping to class labels; whereas GP approaches to classification generally assume a single binary classifier per class, forcing the model to provide a solution in terms of a single attribute subspace and single mapping to class labels. Supporting the more general goal is considered as a requirement for identifying a 'team' of classifiers with non-overlapping classifier behaviors, in which each classifier responds to different subsets of exemplars. Moreover, the subsets of attributes associated with each team member might utilize a unique 'subspace' of attributes. This work investigates the utility of coevolutionary model building for the case of classification problems with attribute vectors consisting of 650 to 100,000 dimensions. The resulting team-based coevolutionary method, Symbiotic Bid-based (SBB) GP, is compared to alternative embedded classifier approaches of C4.5 and Maximum Entropy Classification (MaxEnt). SBB solutions demonstrate up to an order of magnitude lower attribute count relative to C4.5 and up to two orders of magnitude lower attribute count than MaxEnt while retaining comparable or better classification performance. Moreover, relative to the attribute count of individual models participating within a team, no more than six attributes are ever utilized; adding a further

  8. Learning with the ratchet algorithm.

    SciTech Connect

    Hush, D. R.; Scovel, James C.

    2003-01-01

    This paper presents a randomized algorithm called Ratchet that asymptotically minimizes (with probability 1) functions that satisfy a positive-linear-dependent (PLD) property. We establish the PLD property and a corresponding realization of Ratchet for a generalized loss criterion for both linear machines and linear classifiers. We describe several learning criteria that can be obtained as special cases of this generalized loss criterion, e.g. classification error, classification loss and weighted classification error. We also establish the PLD property and a corresponding realization of Ratchet for the Neyman-Pearson criterion for linear classifiers. Finally we show how, for linear classifiers, the Ratchet algorithm can be derived as a modification of the Pocket algorithm.

  9. Automated Confocal Microscope Bias Correction

    NASA Astrophysics Data System (ADS)

    Dorval, Thierry; Genovesio, Auguste

    2006-10-01

    Illumination artifacts systematically occur in 2D cross-section confocal microscopy imaging. These biases can strongly corrupt higher-level image processing such as segmentation, fluorescence evaluation, or even pattern extraction/recognition. This paper presents a new, fully automated bias correction methodology based on preprocessing of a large image database. This method is well suited to High Content Screening (HCS), a method dedicated to drug discovery. Our method assumes that the number of images available is large enough to allow a reliable statistical computation of an average bias image. A relevant segmentation evaluation protocol and experimental results validate our correction algorithm by outperforming object extraction on non-corrupted images.
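
    The database-level correction can be sketched as flat-field-style normalisation: an average bias field is estimated from many acquisitions and each image is divided by it. The synthetic vignetting field and image stack below are assumptions standing in for a real HCS image database.

      # Minimal sketch of bias estimation from an image database and flat-field correction.
      import numpy as np

      rng = np.random.default_rng(0)
      h, w = 128, 128
      yy, xx = np.mgrid[0:h, 0:w]
      true_bias = 1.0 - 0.5 * (((xx - w / 2) ** 2 + (yy - h / 2) ** 2) / (w * h))  # vignetting-like field

      stack = np.array([true_bias * rng.uniform(0.8, 1.2) + rng.normal(0, 0.01, (h, w))
                        for _ in range(200)])                   # many independent acquisitions

      bias_estimate = stack.mean(axis=0)                        # average bias image
      bias_estimate /= bias_estimate.mean()                     # normalise to unit gain

      corrected = stack[0] / bias_estimate                      # apply the correction to one image
      print("residual shading (std):", corrected.std())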

  10. Motion correction in MRI of the brain

    PubMed Central

    Godenschweger, F; Kägebein, U; Stucht, D; Yarach, U; Sciarra, A; Yakupov, R; Lüsebrink, F; Schulze, P; Speck, O

    2016-01-01

    Subject motion in MRI is a relevant problem in the daily clinical routine as well as in scientific studies. Since the beginning of clinical use of MRI, many research groups have developed methods to suppress or correct motion artefacts. This review focuses on rigid body motion correction of head and brain MRI and its application in diagnosis and research. It explains the sources and types of motion and related artefacts, classifies and describes existing techniques for motion detection, compensation and correction and lists established and experimental approaches. Retrospective motion correction modifies the MR image data during the reconstruction, while prospective motion correction performs an adaptive update of the data acquisition. Differences, benefits and drawbacks of different motion correction methods are discussed. PMID:26864183

  11. Motion correction in MRI of the brain

    NASA Astrophysics Data System (ADS)

    Godenschweger, F.; Kägebein, U.; Stucht, D.; Yarach, U.; Sciarra, A.; Yakupov, R.; Lüsebrink, F.; Schulze, P.; Speck, O.

    2016-03-01

    Subject motion in MRI is a relevant problem in the daily clinical routine as well as in scientific studies. Since the beginning of clinical use of MRI, many research groups have developed methods to suppress or correct motion artefacts. This review focuses on rigid body motion correction of head and brain MRI and its application in diagnosis and research. It explains the sources and types of motion and related artefacts, classifies and describes existing techniques for motion detection, compensation and correction and lists established and experimental approaches. Retrospective motion correction modifies the MR image data during the reconstruction, while prospective motion correction performs an adaptive update of the data acquisition. Differences, benefits and drawbacks of different motion correction methods are discussed.

  12. 78 FR 75449 - Miscellaneous Corrections; Corrections

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-12

    ..., 50, 52, and 70 RIN 3150-AJ23 Miscellaneous Corrections; Corrections AGENCY: Nuclear Regulatory... final rule in the Federal Register on June 7, 2013, to make miscellaneous corrections to its regulations... miscellaneous corrections to its regulations in chapter I of Title 10 of the Code of Federal Regulations (10...

  13. A New Qualitative Typology to Classify Treading Water Movement Patterns

    PubMed Central

    Schnitzler, Christophe; Button, Chris; Croft, James L.

    2015-01-01

    This study proposes a new qualitative typology that can be used to classify learners treading water into different skill-based categories. To establish the typology, 38 participants were videotaped while treading water and their movement patterns were qualitatively analyzed by two experienced biomechanists. Thirteen sport science students were then asked to classify eight of the original participants after watching a brief tutorial video about how to use the typology. To examine intra-rater consistency, each participant was presented in a random order three times. Generalizability (G) and Decision (D) studies were performed to estimate the variance attributable to rater, occasion, and video, and the interactions between them, and to determine the reliability of the raters’ answers. A typology of five general classes of coordination was defined amongst the original 38 participants. The G-study showed an accurate and reliable assessment of different pattern types, with a percentage of correct classification of 80.1%, an overall Fleiss’ Kappa coefficient K = 0.6, and an overall generalizability φ coefficient of 0.99. This study showed that the new typology proposed to characterize the behaviour of individuals treading water was both accurate and highly reliable. Movement pattern classification using the typology might help practitioners distinguish between different skill-based behaviours and potentially guide instruction of key aquatic survival skills. Key points: Treading water behavioral adaptation can be classified along two dimensions: the type of force created (drag vs lift) and the frequency of the force impulses. Based on these concepts, 9 behavioral types can be identified, providing the basis for a typology. Provided with macroscopic descriptors (movements of the limb relative to the water, and synchronous vs asynchronous movements), analysts can characterize behavioral type accurately and reliably. PMID:26336339

  14. Use of genetic algorithm for the selection of EEG features

    NASA Astrophysics Data System (ADS)

    Asvestas, P.; Korda, A.; Kostopoulos, S.; Karanasiou, I.; Ouzounoglou, A.; Sidiropoulos, K.; Ventouras, E.; Matsopoulos, G.

    2015-09-01

    Genetic Algorithm (GA) is a popular optimization technique that can detect the global optimum of a multivariable function containing several local optima. GA has been widely used in the field of biomedical informatics, especially in the context of designing decision support systems that classify biomedical signals or images into classes of interest. The aim of this paper is to present a methodology, based on GA, for the selection of the optimal subset of features that can be used for the efficient classification of Event Related Potentials (ERPs), which are recorded during the observation of correct or incorrect actions. In our experiment, ERP recordings were acquired from sixteen (16) healthy volunteers who observed correct or incorrect actions of other subjects. The brain electrical activity was recorded at 47 locations on the scalp. The GA was formulated as a combinatorial optimizer for the selection of the combination of electrodes that maximizes the performance of the Fuzzy C Means (FCM) classification algorithm. In particular, during the evolution of the GA, for each candidate combination of electrodes, the well-known (Σ, Φ, Ω) features were calculated and were evaluated by means of the FCM method. The proposed methodology provided a combination of 8 electrodes, with classification accuracy 93.8%. Thus, GA can be the basis for the selection of features that discriminate ERP recordings of observations of correct or incorrect actions.
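
    As a hedged sketch of the GA-based electrode selection, the code below evolves a binary channel mask whose fitness is the agreement between an unsupervised clustering of the selected channels and the known labels. KMeans stands in for the Fuzzy C-Means used in the study, and the GA settings and synthetic 47-channel features are assumptions.

      # Minimal sketch of GA-driven electrode (feature) subset selection with a clustering fitness.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.datasets import make_classification
      from sklearn.metrics import adjusted_rand_score

      rng = np.random.default_rng(0)
      X, y = make_classification(n_samples=160, n_features=47, n_informative=8, random_state=0)

      def fitness(mask):
          if mask.sum() == 0:
              return -1.0
          labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X[:, mask])
          return adjusted_rand_score(y, labels)        # agreement with the known two-class labels

      pop = rng.integers(0, 2, size=(24, X.shape[1])).astype(bool)
      for _ in range(10):
          scores = np.array([fitness(ind) for ind in pop])
          parents = pop[np.argsort(scores)[-8:]]               # keep the 8 fittest masks
          children = []
          for _ in range(24):
              child = parents[rng.integers(8)].copy()
              flip = rng.random(X.shape[1]) < 0.05             # bit-flip mutation
              children.append(np.logical_xor(child, flip))
          pop = np.array(children)

      best = max(pop, key=fitness)
      print("electrodes kept:", int(best.sum()), "fitness:", round(fitness(best), 3))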

  15. An Ocular Protein Triad Can Classify Four Complex Retinal Diseases

    PubMed Central

    Kuiper, J. J. W.; Beretta, L.; Nierkens, S.; van Leeuwen, R.; ten Dam-van Loon, N. H.; Ossewaarde-van Norel, J.; Bartels, M. C.; de Groot-Mijnes, J. D. F.; Schellekens, P.; de Boer, J. H.; Radstake, T. R. D. J.

    2017-01-01

    Retinal diseases generally are vision-threatening conditions that warrant appropriate clinical decision-making, which currently depends solely upon extensive clinical screening by specialized ophthalmologists. In the era where molecular assessment has improved dramatically, we aimed at the identification of biomarkers in 175 ocular fluids to classify four archetypical ocular conditions affecting the retina (age-related macular degeneration, idiopathic non-infectious uveitis, primary vitreoretinal lymphoma, and rhegmatogenous retinal detachment) with one single test. Unsupervised clustering of ocular proteins revealed a classification strikingly similar to the clinical phenotypes of each disease group studied. We developed and independently validated a parsimonious model based merely on three proteins (interleukin (IL)-10, IL-21, and angiotensin converting enzyme (ACE)) that could correctly classify patients with an overall accuracy, sensitivity and specificity of 86.7%, 79.4% and 92.5%, respectively. Here, we provide proof-of-concept for molecular profiling as a diagnostic aid for ophthalmologists in the care of patients with retinal conditions. PMID:28128370

  16. An Ocular Protein Triad Can Classify Four Complex Retinal Diseases

    NASA Astrophysics Data System (ADS)

    Kuiper, J. J. W.; Beretta, L.; Nierkens, S.; van Leeuwen, R.; Ten Dam-van Loon, N. H.; Ossewaarde-van Norel, J.; Bartels, M. C.; de Groot-Mijnes, J. D. F.; Schellekens, P.; de Boer, J. H.; Radstake, T. R. D. J.

    2017-01-01

    Retinal diseases generally are vision-threatening conditions that warrant appropriate clinical decision-making, which currently depends solely upon extensive clinical screening by specialized ophthalmologists. In the era where molecular assessment has improved dramatically, we aimed at the identification of biomarkers in 175 ocular fluids to classify four archetypical ocular conditions affecting the retina (age-related macular degeneration, idiopathic non-infectious uveitis, primary vitreoretinal lymphoma, and rhegmatogenous retinal detachment) with one single test. Unsupervised clustering of ocular proteins revealed a classification strikingly similar to the clinical phenotypes of each disease group studied. We developed and independently validated a parsimonious model based merely on three proteins (interleukin (IL)-10, IL-21, and angiotensin converting enzyme (ACE)) that could correctly classify patients with an overall accuracy, sensitivity and specificity of 86.7%, 79.4% and 92.5%, respectively. Here, we provide proof-of-concept for molecular profiling as a diagnostic aid for ophthalmologists in the care of patients with retinal conditions.

  17. Integrating language models into classifiers for BCI communication: a review

    NASA Astrophysics Data System (ADS)

    Speier, W.; Arnold, C.; Pouratian, N.

    2016-06-01

    Objective. The present review systematically examines the integration of language models to improve classifier performance in brain-computer interface (BCI) communication systems. Approach. The domain of natural language has been studied extensively in linguistics and has been used in the natural language processing field in applications including information extraction, machine translation, and speech recognition. While these methods have been used for years in traditional augmentative and assistive communication devices, information about the output domain has largely been ignored in BCI communication systems. Over the last few years, BCI communication systems have started to leverage this information through the inclusion of language models. Main results. Although this movement began only recently, studies have already shown the potential of language integration in BCI communication and it has become a growing field in BCI research. BCI communication systems using language models in their classifiers have progressed down several parallel paths, including: word completion; signal classification; integration of process models; dynamic stopping; unsupervised learning; error correction; and evaluation. Significance. Each of these methods has shown significant progress, but they have largely been addressed separately. Combining these methods could exploit the full potential of language models, yielding further performance improvements. This integration should be a priority as the field works to create a BCI system that meets the needs of the amyotrophic lateral sclerosis population.

  18. Conformational Features of Topologically Classified RNA Secondary Structures

    PubMed Central

    Chiu, Jimmy Ka Ho; Chen, Yi-Ping Phoebe

    2012-01-01

    Background Current RNA secondary structure prediction approaches predict prevalent pseudoknots such as the H-pseudoknot and kissing hairpin. The number of possible structures increases drastically when more complex pseudoknots are considered, thus leading to computational limitations. On the other hand, the enormous population of possible structures means not all of them appear in real RNA molecules. Therefore, it is of interest to understand how many of them really exist and the reasons for their preferred existence over the others, as any new findings revealed by this study might enhance the capability of future structure prediction algorithms for more accurate prediction of complex pseudoknots. Methodology/Principal Findings A novel algorithm was devised to estimate the exact number of structural possibilities for a pseudoknot constructed with a specified number of base pair stems. Then, topological classification was applied to classify RNA pseudoknotted structures from data in the RNA STRAND database. By showing the vast possibilities and the real population, it is clear that most of these plausible complex pseudoknots are not observed. Moreover, from these classified motifs that exist in nature, some features were identified for further investigation. It was found that some features are related to helical stacking. Other features are still left open to discover underlying tertiary interactions. Conclusions Results from topological classification suggest that complex pseudoknots are usually some well-known motifs that are themselves complex or the interaction results of some special motifs. Heuristics can be proposed to predict the essential parts of these complex motifs, even if the required thermodynamic parameters are currently unknown. PMID:22792195

  19. Learning accurate and concise naïve Bayes classifiers from attribute value taxonomies and data

    PubMed Central

    Kang, D.-K.; Silvescu, A.; Honavar, V.

    2009-01-01

    In many application domains, there is a need for learning algorithms that can effectively exploit attribute value taxonomies (AVT)—hierarchical groupings of attribute values—to learn compact, comprehensible and accurate classifiers from data—including data that are partially specified. This paper describes AVT-NBL, a natural generalization of the naïve Bayes learner (NBL), for learning classifiers from AVT and data. Our experimental results show that AVT-NBL is able to generate classifiers that are substantially more compact and more accurate than those produced by NBL on a broad range of data sets with different percentages of partially specified values. We also show that AVT-NBL is more efficient in its use of training data: AVT-NBL produces classifiers that outperform those produced by NBL using substantially fewer training examples. PMID:20351793

  20. Development of an underwater target classifier using target specific features

    NASA Astrophysics Data System (ADS)

    Supriya, M. H.; Pillai, P. R. Saseendran

    2003-04-01

    In sonar, the detection and estimation functions are performed by signal processors, which involve the computation of various statistics for enhancing the overall performance of the system. This also takes into account all the undesirable propagation effects caused by the underwater channel. Underwater targets can be classified by using certain target-specific features such as target strength, target dynamics, and the signatures of the noise generated by the targets. Rough identification of the targets is carried out with target strength values at known aspects, while for precise identification, classification clues from target dynamics and target signatures are generated. Databases for the engine noise spectra of various underwater targets, propeller noises, machinery noises and cavitation noises, speed-noise characteristics, etc., have been developed. The signal energy estimated within a finite time interval is compared with the earlier detection/estimation decisions, which are stored in the target data record, and the relevant target data are updated. The algorithm for identifying a target from the best-matching signature patterns in the database generates classification clues, which help in target identification. Salient highlights of an underwater target classifier using the above-discussed target-specific features are presented in this paper.

  1. The Cartan algorithm in five dimensions

    NASA Astrophysics Data System (ADS)

    McNutt, D. D.; Coley, A. A.; Forget, A.

    2017-03-01

    In this paper, we introduce an algorithm to determine the equivalence of five dimensional spacetimes, which generalizes the Karlhede algorithm for four dimensional general relativity. As an alternative to the Petrov type classification, we employ the alignment classification to algebraically classify the Weyl tensor. To illustrate the algorithm, we discuss three examples: the singly rotating Myers-Perry solution, the Kerr (Anti-) de Sitter solution, and the rotating black ring solution. We briefly discuss some applications of the Cartan algorithm in five dimensions.

  2. Letter identification and the neural image classifier.

    PubMed

    Watson, Andrew B; Ahumada, Albert J

    2015-02-12

    Letter identification is an important visual task for both practical and theoretical reasons. To extend and test existing models, we have reviewed published data for contrast sensitivity for letter identification as a function of size and have also collected new data. Contrast sensitivity increases rapidly from the acuity limit but slows and asymptotes at a symbol size of about 1 degree. We recast these data in terms of contrast difference energy: the average of the squared distances between the letter images and the average letter image. In terms of sensitivity to contrast difference energy, and thus visual efficiency, there is a peak around ¼ degree, followed by a marked decline at larger sizes. These results are explained by a Neural Image Classifier model that includes optical filtering and retinal neural filtering, sampling, and noise, followed by an optimal classifier. As letters are enlarged, sensitivity declines because of the increasing size and spacing of the midget retinal ganglion cell receptive fields in the periphery.
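
    As a point of reference for the quantity defined above, contrast difference energy is simply the average squared distance between each letter image and the mean letter image. The short numpy sketch below illustrates that computation; the array shapes, normalization and function name are illustrative assumptions, not the authors' implementation.

      import numpy as np

      def contrast_difference_energy(letters):
          """letters: array of shape (n_letters, height, width) holding contrast images.
          Returns the average squared distance from the average letter image."""
          letters = np.asarray(letters, dtype=float)
          mean_image = letters.mean(axis=0)                          # average letter image
          sq_dist = ((letters - mean_image) ** 2).sum(axis=(1, 2))   # squared distance per letter
          return sq_dist.mean()                                      # average over the letter set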

  3. Comparing cosmic web classifiers using information theory

    NASA Astrophysics Data System (ADS)

    Leclercq, Florent; Lavaux, Guilhem; Jasche, Jens; Wandelt, Benjamin

    2016-08-01

    We introduce a decision scheme for optimally choosing a classifier, which segments the cosmic web into different structure types (voids, sheets, filaments, and clusters). Our framework, based on information theory, accounts for the design aims of different classes of possible applications: (i) parameter inference, (ii) model selection, and (iii) prediction of new observations. As an illustration, we use cosmographic maps of web-types in the Sloan Digital Sky Survey to assess the relative performance of the classifiers T-WEB, DIVA and ORIGAMI for: (i) analyzing the morphology of the cosmic web, (ii) discriminating dark energy models, and (iii) predicting galaxy colors. Our study substantiates a data-supported connection between cosmic web analysis and information theory, and paves the path towards principled design of analysis procedures for the next generation of galaxy surveys. We have made the cosmic web maps, galaxy catalog, and analysis scripts used in this work publicly available.

  4. Classifying objects in LWIR imagery via CNNs

    NASA Astrophysics Data System (ADS)

    Rodger, Iain; Connor, Barry; Robertson, Neil M.

    2016-10-01

    The aim of the presented work is to demonstrate enhanced target recognition and improved false alarm rates for a mid to long range detection system, utilising a Long Wave Infrared (LWIR) sensor. By exploiting high quality thermal image data and recent techniques in machine learning, the system can provide automatic target recognition capabilities. A Convolutional Neural Network (CNN) is trained and the classifier achieves an overall accuracy of > 95% for 6 object classes related to land defence. Although the CNN struggles to recognise long range target classes due to low signal quality, robust target discrimination is still achieved for challenging candidates. The overall performance of the methodology presented is assessed using human ground truth information, generating classifier evaluation metrics for thermal image sequences.

  5. Classifying Land Cover Using Spectral Signature

    NASA Astrophysics Data System (ADS)

    Alawiye, F. S.

    2012-12-01

    Studying land cover has become increasingly important as countries try to counter the destruction of wetlands, its impact on local climate and the radiation balance, and deteriorating environmental quality. In this investigation, we have been studying the spectral signatures of the Jamaica Bay wetland area based on remotely sensed satellite input data from LANDSAT TM and ASTER. We applied various remote sensing techniques to generate classified land cover output maps. Our classifiers relied on input from both the remote sensing and in-situ spectral field data. Based upon spectral separability and data collected in the field, supervised and unsupervised classifications were carried out. First results suggest good agreement between the land cover units mapped and those observed in the field.

  6. Classification Studies in an Advanced Air Classifier

    NASA Astrophysics Data System (ADS)

    Routray, Sunita; Bhima Rao, R.

    2016-10-01

    In the present paper, experiments are carried out using a VSK separator, which is an advanced air classifier, to recover heavy minerals from beach sand. In the classification experiments the cage wheel speed and the feed rate are set and the material is fed to the air cyclone and split into fine and coarse particles, which are collected in separate bags. The size distribution of each fraction was measured by sieve analysis. A model is developed to predict the performance of the air classifier. The objective of the present model is to predict the grade efficiency curve for a given set of operating parameters such as cage wheel speed and feed rate. The overall experimental data, covering all variables studied in this investigation, are fitted to several models, and the logistic model is found to fit the data best.
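
    Since the grade efficiency data are reported to follow a logistic model, the sketch below shows how such a curve could be fitted with standard tools; the particular logistic form, the parameter names (d50, k) and the data values are illustrative assumptions, not the paper's measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      def logistic_grade_efficiency(d, d50, k):
          """Fraction of particles of size d reporting to the coarse stream."""
          return 1.0 / (1.0 + np.exp(-k * (d - d50)))

      # hypothetical sieve-analysis data: particle size (micron) vs. measured efficiency
      size = np.array([45, 63, 90, 125, 180, 250, 355], dtype=float)
      efficiency = np.array([0.05, 0.12, 0.35, 0.60, 0.82, 0.93, 0.98])

      params, _ = curve_fit(logistic_grade_efficiency, size, efficiency, p0=[120.0, 0.02])
      print("fitted cut size d50 = %.1f micron, sharpness k = %.4f" % (params[0], params[1]))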

  7. Semantic Features for Classifying Referring Search Terms

    SciTech Connect

    May, Chandler J.; Henry, Michael J.; McGrath, Liam R.; Bell, Eric B.; Marshall, Eric J.; Gregory, Michelle L.

    2012-05-11

    When an internet user clicks on a result in a search engine, a request is submitted to the destination web server that includes a referrer field containing the search terms given by the user. Using this information, website owners can analyze the search terms leading to their websites to better understand their visitors' needs. This work explores some of the features that can be used for classification-based analysis of such referring search terms. We present initial results for the example task of classifying HTTP requests by country of origin. A system that can accurately predict the country of origin from query text may be a valuable complement to IP lookup methods, which are susceptible to obfuscation by dereferrers or proxies. We suggest that the addition of semantic features improves classifier performance in this example application. We begin by looking at related work and presenting our approach. After describing initial experiments and results, we discuss paths forward for this work.

  8. Support vector machine as a binary classifier for automated object detection in remotely sensed data

    NASA Astrophysics Data System (ADS)

    Wardaya, P. D.

    2014-02-01

    In the present paper, the author proposes the application of the Support Vector Machine (SVM) for the analysis of satellite imagery. One of the advantages of the SVM is that, with limited training data, it may generate comparable or even better results than other methods. The SVM algorithm is used for automated object detection and characterization. Specifically, the SVM is applied in its basic form as a binary classifier that separates two classes, namely object and background. The algorithm aims at effectively detecting an object from its background with minimal training data. A synthetic image containing noise is used for algorithm testing. Furthermore, the method is applied to remote sensing image analysis tasks such as identification of island vegetation, water bodies, and oil spills from satellite imagery. The results indicate that the SVM provides fast and accurate analysis with acceptable results.
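
    The basic idea, an SVM trained as a binary object-versus-background classifier on per-pixel features, can be sketched as below using scikit-learn; the feature vectors, labels and kernel settings are placeholders, not the imagery or preprocessing used in the paper.

      import numpy as np
      from sklearn.svm import SVC

      # hypothetical training data: each row is a per-pixel feature vector
      # (e.g. band intensities), labeled 1 for "object" and 0 for "background"
      X_train = np.array([[0.82, 0.64, 0.31],
                          [0.79, 0.60, 0.28],
                          [0.12, 0.18, 0.90],
                          [0.15, 0.22, 0.85]])
      y_train = np.array([1, 1, 0, 0])

      clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # basic binary SVM
      clf.fit(X_train, y_train)

      # classify new pixels as object (1) or background (0)
      X_new = np.array([[0.80, 0.62, 0.30], [0.10, 0.20, 0.88]])
      print(clf.predict(X_new))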

  9. Double Ramp Loss Based Reject Option Classifier

    DTIC Science & Technology

    2015-05-22

    choose 10% of these points uniformly at random and flip their labels. 2. Ionosphere Dataset [2]: This dataset describes the problem of discriminating ... good versus bad radars based on whether they send some useful information about the Ionosphere. There are 34 variables and 351 observations. 3... Ionosphere dataset (nonlinear classifiers using RBF kernel for both the approaches) d LDR (C = 2, γ = 0.125) LDH (C = 16, γ = 0.125) Risk RR Acc(unrej

  10. Characterizing and Classifying Acoustical Ambient Sound Profiles

    DTIC Science & Technology

    2015-03-26

    of sound. The value for the speed of sound varies depending on the medium which the sound wave travels through as well as the temperature and... Characterizing and Classifying Acoustical Ambient Sound Profiles, THESIS, MARCH 2015, Paul T. Gaski, Second Lieutenant, USAF, AFIT-ENS-MS-15-M-122... SOUND PROFILES, THESIS, Presented to the Faculty, Department of Operational Sciences, Graduate School of Engineering and Management, Air Force Institute of

  11. Autofocus correction of excessive migration in synthetic aperture radar images.

    SciTech Connect

    Doerry, Armin Walter

    2004-09-01

    When residual range migration due to either real or apparent motion errors exceeds the range resolution, conventional autofocus algorithms fail. A new migration-correction autofocus algorithm has been developed that estimates the migration and applies phase and frequency corrections to properly focus the image.

  12. Unsupervised Pattern Classifier for Abnormality-Scaling of Vibration Features for Helicopter Gearbox Fault Diagnosis

    NASA Technical Reports Server (NTRS)

    Jammu, Vinay B.; Danai, Kourosh; Lewicki, David G.

    1996-01-01

    A new unsupervised pattern classifier is introduced for on-line detection of abnormality in features of vibration that are used for fault diagnosis of helicopter gearboxes. This classifier compares vibration features with their respective normal values and assigns them a value in (0, 1) to reflect their degree of abnormality. Therefore, the salient feature of this classifier is that it does not require feature values associated with faulty cases to identify abnormality. In order to cope with noise and changes in the operating conditions, an adaptation algorithm is incorporated that continually updates the normal values of the features. The proposed classifier is tested using experimental vibration features obtained from an OH-58A main rotor gearbox. The overall performance of this classifier is then evaluated by integrating the abnormality-scaled features for detection of faults. The fault detection results indicate that the performance of this classifier is comparable to the leading unsupervised neural networks: Kohonen's Feature Mapping and Adaptive Resonance Theory (ART2). This is significant considering that the independence of this classifier from fault-related features makes it uniquely suited to abnormality-scaling of vibration features for fault diagnosis.
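
    A rough sketch of the abnormality-scaling idea is given below: each vibration feature is compared with a running estimate of its normal value and mapped to a score in (0, 1), and the normal values are adapted slowly to track changing operating conditions. The exponential mapping, the adaptation rule and the parameter values are illustrative assumptions, not the classifier actually used in the paper.

      import numpy as np

      class AbnormalityScaler:
          def __init__(self, normal_values, scale, alpha=0.01):
              self.normal = np.asarray(normal_values, dtype=float)  # current normal feature values
              self.scale = np.asarray(scale, dtype=float)           # typical spread of each feature
              self.alpha = alpha                                    # adaptation rate

          def score(self, features):
              """Map each feature to a degree of abnormality in (0, 1)."""
              deviation = np.abs(features - self.normal) / self.scale
              return 1.0 - np.exp(-deviation)   # near 0 when normal, approaches 1 as deviation grows

          def adapt(self, features):
              """Slowly track changes in operating conditions (assumed healthy data)."""
              self.normal = (1 - self.alpha) * self.normal + self.alpha * np.asarray(features, dtype=float)

      scaler = AbnormalityScaler(normal_values=[1.0, 0.5], scale=[0.2, 0.1])
      print(scaler.score(np.array([1.6, 0.55])))      # first feature scores as clearly abnormal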

  13. Quantum error correction for beginners.

    PubMed

    Devitt, Simon J; Munro, William J; Nemoto, Kae

    2013-07-01

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future.

  14. Classifying lipoproteins based on their polar profiles.

    PubMed

    Polanco, Carlos; Castañón-González, Jorge Alberto; Buhse, Thomas; Uversky, Vladimir N; Amkie, Rafael Zonana

    2016-01-01

    The lipoproteins are an important group of cargo proteins known for their unique capability to transport lipids. By applying the Polarity index algorithm, which has a metric that only considers the polar profile of the linear sequences of the lipoprotein group, we obtained an analytical and structural differentiation of all the lipoproteins found in UniProt Database. Also, the functional groups of lipoproteins, and particularly of the set of lipoproteins relevant to atherosclerosis, were analyzed with the same method to reveal their structural preference, and the results of Polarity index analysis were verified by an alternate test, the Cumulative Distribution Function algorithm, applied to the same groups of lipoproteins.

  15. Biomarker Discovery Based on Hybrid Optimization Algorithm and Artificial Neural Networks on Microarray Data for Cancer Classification.

    PubMed

    Moteghaed, Niloofar Yousefi; Maghooli, Keivan; Pirhadi, Shiva; Garshasbi, Masoud

    2015-01-01

    The improvement of high-throughput, microarray-based gene profiling technology has made it possible to monitor the expression values of thousands of genes simultaneously. Detailed examination of changes in the expression levels of genes can help physicians diagnose efficiently, classify tumors and cancer types, and choose effective treatments. Finding genes that can correctly classify groups of cancers based on hybrid optimization algorithms is the main purpose of this paper. In this paper, a hybrid particle swarm optimization and genetic algorithm method is used for gene selection, and an artificial neural network (ANN) is adopted as the classifier. In this work, we have improved the ability of the algorithm for the classification problem by finding a small group of biomarkers and also the best parameters of the classifier. The proposed approach is tested on three benchmark gene expression data sets: blood (acute myeloid leukemia, acute lymphoblastic leukemia), colon and breast datasets. We used 10-fold cross-validation to estimate accuracy and a decision tree algorithm to find the relations between the biomarkers from a biological point of view. To test the ability of the trained ANN models to categorize the cancers, we analyzed additional blinded samples that were not previously used for the training procedure. Experimental results show that the proposed method can reduce the dimension of the data set, confirm the most informative gene subset, and improve classification accuracy with the best parameters based on the datasets.

  16. Wreck finding and classifying with a sonar filter

    NASA Astrophysics Data System (ADS)

    Agehed, Kenneth I.; Padgett, Mary Lou; Becanovic, Vlatko; Bornich, C.; Eide, Age J.; Engman, Per; Globoden, O.; Lindblad, Thomas; Lodgberg, K.; Waldemark, Karina E.

    1999-03-01

    Sonar detection and classification of sunken wrecks and other objects is of keen interest to many. This paper describes the use of neural networks (NN) for locating, classifying and determining the alignment of objects on a lakebed in Sweden. A complex program for data preprocessing and visualization was developed. Part of this program, The Sonar Viewer, facilitates training and testing of the NN using (1) the MATLAB Neural Networks Toolbox for multilayer perceptrons with backpropagation (BP) and (2) the neural network O-Algorithm (OA) developed by Age Eide and Thomas Lindblad. Comparison of the performance of the two neural network approaches indicates that, for these data, BP generalizes better than OA, but use of OA eliminates the need for training on non-target (lake bed) images. The OA algorithm does not work well with the smaller ships. Increasing the resolution to counteract this problem would slow down processing and require interpolation to suggest data values between the actual sonar measurements. In general, good results were obtained for recognizing large wrecks and determining their alignment. The programs developed provide a useful tool for further study of sonar signals in many environments. Recent developments in pulse-coupled neural network techniques provide an opportunity to extend their use to real-world applications where experimental data are difficult, expensive or time consuming to obtain.

  17. On a two-level multiclassifier system with error correction applied to the control of bioprosthetic hand.

    PubMed

    Kurzynski, Marek

    2013-01-01

    The paper presents an advanced method for recognizing a patient's intention to move a hand prosthesis. The proposed method is based on a two-level multiclassifier system (MCS) with homogeneous base classifiers dedicated to EEG, EMG and MMG biosignals and with a combining mechanism using a dynamic ensemble selection (DES) scheme and a probabilistic competence function. Additionally, the feedback signal derived from the prosthesis sensors is applied to the correction of the classification algorithm. The performance of the MCS with the proposed competence function and combining procedure was experimentally compared against three benchmark MCSs using real data concerning the recognition of six types of grasping movements. The systems developed achieved the highest classification accuracies, demonstrating the potential of multiple classifier systems with multimodal biosignals for the control of a bioprosthetic hand.

  18. Developing a computer algorithm to identify epilepsy cases in managed care organizations.

    PubMed

    Holden, E Wayne; Grossman, Elizabeth; Nguyen, Hoang Thanh; Gunter, Margaret J; Grebosky, Becky; Von Worley, Ann; Nelson, Leila; Robinson, Scott; Thurman, David J

    2005-02-01

    The goal of this study was to develop an algorithm for detecting epilepsy cases in managed care organizations (MCOs). A data set of potential epilepsy cases was constructed from an MCO's administrative data system for all health plan members continuously enrolled in the MCO for at least 1 year within the study period of July 1, 1996 through June 30, 1998. Epilepsy status was determined using medical record review for a sample of 617 cases. The best algorithm for detecting epilepsy cases was developed by examining combinations of diagnosis, diagnostic procedures, and medication use. The best algorithm derived in the exploratory phase was then applied to a new set of data from the same MCO covering the period of July 1, 1998 through June 30, 2000. A stratified sample based on ethnicity and age was drawn from the preliminary algorithm-identified epilepsy cases and non-cases. Medical record review was completed for 644 cases to determine the accuracy of the algorithm. Data from both phases were combined to permit refinement of logistic regression models and to provide more stable estimates of the parameters. The best model used diagnoses and antiepileptic drugs as predictors and had a positive predictive value of 84% (sensitivity 82%, specificity 94%). The best model correctly classified 90% of the cases. A stable algorithm that can be used to identify epilepsy patients within MCOs was developed. Implications for use of the algorithm in other health care settings are discussed.
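
    For reference, the reported figures follow directly from the confusion matrix of algorithm decisions against the medical-record review; the small sketch below shows the arithmetic using made-up counts chosen only for illustration, not the study's data.

      def screening_metrics(tp, fp, tn, fn):
          """Positive predictive value, sensitivity, specificity and accuracy of a case-finding algorithm."""
          ppv = tp / (tp + fp)            # of algorithm-flagged cases, fraction truly epilepsy
          sensitivity = tp / (tp + fn)    # of true cases, fraction the algorithm flags
          specificity = tn / (tn + fp)    # of non-cases, fraction the algorithm clears
          accuracy = (tp + tn) / (tp + fp + tn + fn)
          return ppv, sensitivity, specificity, accuracy

      # hypothetical counts, chosen only to illustrate the arithmetic
      print(screening_metrics(tp=164, fp=31, tn=413, fn=36))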

  19. Fast autodidactic adaptive equalization algorithms

    NASA Astrophysics Data System (ADS)

    Hilal, Katia

    Autodidactic equalization by adaptive filtering is addressed in a mobile radio communication context. A general method, using an adaptive stochastic gradient Bussgang-type algorithm, is given to derive two low computational cost algorithms: one equivalent to the initial algorithm and the other having improved convergence properties thanks to a block criterion minimization. Two starting algorithms are reworked: the Godard algorithm and the decision-directed algorithm. Using a normalization procedure, and block normalization, their performances are improved and their common points are evaluated. These common points are used to propose an algorithm retaining the advantages of the two initial algorithms; it thus inherits the robustness of the Godard algorithm and the precision and phase correction of the decision-directed algorithm. The work is completed by a study of the stable states of Bussgang-type algorithms and of the stability of the Godard algorithms, initial and normalized. The simulation of these algorithms, carried out in a mobile radio communications context and under severe propagation channel conditions, gave a 75% reduction in the number of samples required for the processing relative to the initial algorithms. The improvement in the residual error was much smaller. These performances come close to making autodidactic equalization usable in a mobile radio system.
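
    As a point of reference for the Godard-type algorithms discussed, a minimal constant-modulus (Godard, p = 2) stochastic-gradient equalizer update is sketched below; the tap count, step size and initialization are illustrative assumptions, and the normalized and block variants studied in the thesis are not reproduced.

      import numpy as np

      def cma_equalize(received, n_taps=11, mu=1e-3, r2=1.0):
          """Blind (autodidactic) equalization with the Godard / constant-modulus criterion.
          received: complex baseband samples; r2: dispersion constant of the constellation."""
          w = np.zeros(n_taps, dtype=complex)
          w[n_taps // 2] = 1.0                     # center-spike initialization
          out = np.zeros(len(received), dtype=complex)
          for n in range(n_taps, len(received)):
              x = received[n - n_taps:n][::-1]     # regressor, most recent sample first
              y = np.dot(w.conj(), x)              # equalizer output
              e = y * (np.abs(y) ** 2 - r2)        # Godard/CMA error term
              w -= mu * e.conj() * x               # stochastic-gradient tap update
              out[n] = y
          return w, out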

  20. Evaluating EMG Feature and Classifier Selection for Application to Partial-Hand Prosthesis Control.

    PubMed

    Adewuyi, Adenike A; Hargrove, Levi J; Kuiken, Todd A

    2016-01-01

    Pattern recognition-based myoelectric control of upper-limb prostheses has the potential to restore control of multiple degrees of freedom. Though this control method has been extensively studied in individuals with higher-level amputations, few studies have investigated its effectiveness for individuals with partial-hand amputations. Most partial-hand amputees retain a functional wrist and the ability of pattern recognition-based methods to correctly classify hand motions from different wrist positions is not well studied. In this study, focusing on partial-hand amputees, we evaluate (1) the performance of non-linear and linear pattern recognition algorithms and (2) the performance of optimal EMG feature subsets for classification of four hand motion classes in different wrist positions for 16 non-amputees and 4 amputees. Our results show that linear discriminant analysis and linear and non-linear artificial neural networks perform significantly better than the quadratic discriminant analysis for both non-amputees and partial-hand amputees. For amputees, including information from multiple wrist positions significantly decreased error (p < 0.001) but no further significant decrease in error occurred when more than 4, 2, or 3 positions were included for the extrinsic (p = 0.07), intrinsic (p = 0.06), or combined extrinsic and intrinsic muscle EMG (p = 0.08), respectively. Finally, we found that a feature set determined by selecting optimal features from each channel outperformed the commonly used time domain (p < 0.001) and time domain/autoregressive feature sets (p < 0.01). This method can be used as a screening filter to select the features from each channel that provide the best classification of hand postures across different wrist positions.

  1. Evaluating EMG Feature and Classifier Selection for Application to Partial-Hand Prosthesis Control

    PubMed Central

    Adewuyi, Adenike A.; Hargrove, Levi J.; Kuiken, Todd A.

    2016-01-01

    Pattern recognition-based myoelectric control of upper-limb prostheses has the potential to restore control of multiple degrees of freedom. Though this control method has been extensively studied in individuals with higher-level amputations, few studies have investigated its effectiveness for individuals with partial-hand amputations. Most partial-hand amputees retain a functional wrist and the ability of pattern recognition-based methods to correctly classify hand motions from different wrist positions is not well studied. In this study, focusing on partial-hand amputees, we evaluate (1) the performance of non-linear and linear pattern recognition algorithms and (2) the performance of optimal EMG feature subsets for classification of four hand motion classes in different wrist positions for 16 non-amputees and 4 amputees. Our results show that linear discriminant analysis and linear and non-linear artificial neural networks perform significantly better than the quadratic discriminant analysis for both non-amputees and partial-hand amputees. For amputees, including information from multiple wrist positions significantly decreased error (p < 0.001) but no further significant decrease in error occurred when more than 4, 2, or 3 positions were included for the extrinsic (p = 0.07), intrinsic (p = 0.06), or combined extrinsic and intrinsic muscle EMG (p = 0.08), respectively. Finally, we found that a feature set determined by selecting optimal features from each channel outperformed the commonly used time domain (p < 0.001) and time domain/autoregressive feature sets (p < 0.01). This method can be used as a screening filter to select the features from each channel that provide the best classification of hand postures across different wrist positions. PMID:27807418

  2. Massively Multi-core Acceleration of a Document-Similarity Classifier to Detect Web Attacks

    SciTech Connect

    Ulmer, C; Gokhale, M; Top, P; Gallagher, B; Eliassi-Rad, T

    2010-01-14

    This paper describes our approach to adapting a text document similarity classifier based on the Term Frequency Inverse Document Frequency (TFIDF) metric to two massively multi-core hardware platforms. The TFIDF classifier is used to detect web attacks in HTTP data. In our parallel hardware approaches, we design streaming, real time classifiers by simplifying the sequential algorithm and manipulating the classifier's model to allow decision information to be represented compactly. Parallel implementations on the Tilera 64-core System on Chip and the Xilinx Virtex 5-LX FPGA are presented. For the Tilera, we employ a reduced state machine to recognize dictionary terms without requiring explicit tokenization, and achieve throughput of 37MB/s at slightly reduced accuracy. For the FPGA, we have developed a set of software tools to help automate the process of converting training data to synthesizable hardware and to provide a means of trading off between accuracy and resource utilization. The Xilinx Virtex 5-LX implementation requires 0.2% of the memory used by the original algorithm. At 166MB/s (80X the software) the hardware implementation is able to achieve Gigabit network throughput at the same accuracy as the original algorithm.
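
    The underlying classifier can be pictured as below: requests are mapped to TFIDF vectors and scored by cosine similarity against per-class centroids. The sketch uses scikit-learn on placeholder strings and does not reproduce the streaming, hardware-oriented simplifications described in the paper; the example requests, labels and n-gram settings are assumptions.

      import numpy as np
      from sklearn.feature_extraction.text import TfidfVectorizer

      # hypothetical HTTP request strings labeled benign (0) or attack (1)
      docs = ["GET /index.html", "GET /images/logo.png",
              "GET /login.php?user=admin' OR '1'='1", "GET /cgi-bin/..%2f..%2fetc/passwd"]
      labels = np.array([0, 0, 1, 1])

      vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 3))
      X = vectorizer.fit_transform(docs).toarray()

      # one TFIDF centroid per class; classify by cosine similarity to each centroid
      centroids = np.vstack([X[labels == c].mean(axis=0) for c in (0, 1)])

      def classify(request):
          v = vectorizer.transform([request]).toarray()[0]
          sims = centroids @ v / (np.linalg.norm(centroids, axis=1) * np.linalg.norm(v) + 1e-12)
          return int(np.argmax(sims))

      print(classify("GET /search.php?q=' OR 1=1 --"))   # expected to score closer to the attack class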

  3. Classifier-guided sampling for discrete variable, discontinuous design space exploration: Convergence and computational performance

    SciTech Connect

    Backlund, Peter B.; Shahan, David W.; Seepersad, Carolyn Conner

    2014-04-22

    A classifier-guided sampling (CGS) method is introduced for solving engineering design optimization problems with discrete and/or continuous variables and continuous and/or discontinuous responses. The method merges concepts from metamodel-guided sampling and population-based optimization algorithms. The CGS method uses a Bayesian network classifier for predicting the performance of new designs based on a set of known observations or training points. Unlike most metamodeling techniques, however, the classifier assigns a categorical class label to a new design, rather than predicting the resulting response in continuous space, and thereby accommodates nondifferentiable and discontinuous functions of discrete or categorical variables. The CGS method uses these classifiers to guide a population-based sampling process towards combinations of discrete and/or continuous variable values with a high probability of yielding preferred performance. Accordingly, the CGS method is appropriate for discrete/discontinuous design problems that are ill-suited for conventional metamodeling techniques and too computationally expensive to be solved by population-based algorithms alone. In addition, the rates of convergence and computational properties of the CGS method are investigated when applied to a set of discrete variable optimization problems. Results show that the CGS method significantly improves the rate of convergence towards known global optima, on average, when compared to genetic algorithms.
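
    The core CGS loop can be sketched as follows: label evaluated designs as preferred or not, train a classifier on those labels, and concentrate new samples where the classifier predicts preferred performance. The Gaussian naive Bayes stand-in, the quantile threshold and the toy objective below are assumptions for illustration; the paper uses a Bayesian network classifier over discrete and/or continuous variables.

      import numpy as np
      from sklearn.naive_bayes import GaussianNB

      rng = np.random.default_rng(0)

      def objective(x):                                     # toy stand-in for an expensive objective (minimize)
          return np.sum((x - 0.3) ** 2, axis=1)

      X = rng.uniform(0, 1, size=(40, 3))                   # initial random designs
      f = objective(X)

      for it in range(10):
          labels = (f <= np.quantile(f, 0.3)).astype(int)   # best 30% of designs labeled "preferred"
          clf = GaussianNB().fit(X, labels)
          candidates = rng.uniform(0, 1, size=(500, 3))
          p_good = clf.predict_proba(candidates)[:, 1]
          new_X = candidates[np.argsort(p_good)[-10:]]      # sample where preferred performance is likely
          X = np.vstack([X, new_X])
          f = np.concatenate([f, objective(new_X)])

      print("best design:", X[np.argmin(f)], "objective:", f.min())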

  4. Identifying organic-rich Marcellus Shale lithofacies by support vector machine classifier in the Appalachian basin

    NASA Astrophysics Data System (ADS)

    Wang, Guochang; Carr, Timothy R.; Ju, Yiwen; Li, Chaofeng

    2014-03-01

    In unconventional shale reservoirs, as a result of extremely low matrix permeability, higher potential gas productivity requires not only sufficient gas-in-place, but also a high concentration of brittle minerals (silica and/or carbonate) that is amenable to hydraulic fracturing. Shale lithofacies is primarily defined by mineral composition and organic matter richness, and its representation as a 3-D model has advantages in recognizing productive zones of shale-gas reservoirs, designing horizontal wells and stimulation strategy, and aiding in understanding the depositional process of organic-rich shale. A challenging and key step is to effectively recognize shale lithofacies from conventional well logs, where the relationship is very complex and nonlinear. In the recognition of shale lithofacies, the application of the support vector machine (SVM), which is grounded in statistical learning theory and the structural risk minimization principle, is superior to the traditional empirical risk minimization principle employed by artificial neural networks (ANN). We propose an approach that combines an SVM classifier with learning algorithms, such as grid search, genetic algorithm and particle swarm optimization, and various kernel functions to identify Marcellus Shale lithofacies. Compared with ANN classifiers, the experimental results of the SVM classifiers showed higher cross-validation accuracy, better stability and lower computational time cost. The SVM classifier with a radial basis function kernel worked best when trained by particle swarm optimization. The lithofacies predicted using the SVM classifier are used to build a 3-D Marcellus Shale lithofacies model, which assists in identifying higher productive zones, especially with thermal maturity and natural fractures.

  5. A new algorithm for attitude-independent magnetometer calibration

    NASA Technical Reports Server (NTRS)

    Alonso, Roberto; Shuster, Malcolm D.

    1994-01-01

    A new algorithm is developed for inflight magnetometer bias determination without knowledge of the attitude. This algorithm combines the fast convergence of a heuristic algorithm currently in use with the correct treatment of the statistics and without discarding data. The algorithm performance is examined using simulated data and compared with previous algorithms.

  6. Harmony Search Algorithm for Word Sense Disambiguation

    PubMed Central

    Abed, Saad Adnan; Tiun, Sabrina; Omar, Nazlia

    2015-01-01

    Word Sense Disambiguation (WSD) is the task of determining which sense of an ambiguous word (a word with multiple meanings) is intended in a particular use of that word, by considering its context. A sentence is considered ambiguous if it contains ambiguous word(s). Practically, any sentence that has been classified as ambiguous usually has multiple interpretations, but just one of them presents the correct interpretation. We propose an unsupervised method that exploits knowledge-based approaches for word sense disambiguation using the Harmony Search Algorithm (HSA) based on a Stanford dependencies generator (HSDG). The role of the dependency generator is to parse sentences to obtain their dependency relations. The goal of using the HSA is to maximize the overall semantic similarity of the set of parsed words. HSA invokes a combination of semantic similarity and relatedness measurements, i.e., Jiang and Conrath (jcn) and an adapted Lesk algorithm, to compute the HSA fitness function. Our proposed method was evaluated on benchmark datasets and yielded results comparable to state-of-the-art WSD methods. In order to evaluate the effectiveness of the dependency generator, we applied the same methodology without the parser, using a window of words instead. The empirical results demonstrate that the proposed method is able to produce effective solutions for most instances of the datasets used. PMID:26422368

  7. Harmony Search Algorithm for Word Sense Disambiguation.

    PubMed

    Abed, Saad Adnan; Tiun, Sabrina; Omar, Nazlia

    2015-01-01

    Word Sense Disambiguation (WSD) is the task of determining which sense of an ambiguous word (a word with multiple meanings) is intended in a particular use of that word, by considering its context. A sentence is considered ambiguous if it contains ambiguous word(s). Practically, any sentence that has been classified as ambiguous usually has multiple interpretations, but just one of them presents the correct interpretation. We propose an unsupervised method that exploits knowledge-based approaches for word sense disambiguation using the Harmony Search Algorithm (HSA) based on a Stanford dependencies generator (HSDG). The role of the dependency generator is to parse sentences to obtain their dependency relations. The goal of using the HSA is to maximize the overall semantic similarity of the set of parsed words. HSA invokes a combination of semantic similarity and relatedness measurements, i.e., Jiang and Conrath (jcn) and an adapted Lesk algorithm, to compute the HSA fitness function. Our proposed method was evaluated on benchmark datasets and yielded results comparable to state-of-the-art WSD methods. In order to evaluate the effectiveness of the dependency generator, we applied the same methodology without the parser, using a window of words instead. The empirical results demonstrate that the proposed method is able to produce effective solutions for most instances of the datasets used.
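
    A generic harmony-search skeleton for a maximization problem of this kind is sketched below; the toy fitness function is a stand-in (the paper maximizes an aggregate of jcn similarity and adapted-Lesk relatedness over candidate senses), and the parameter values are illustrative.

      import random

      def harmony_search(fitness, n_vars, n_values, hms=20, hmcr=0.9, par=0.3, iters=2000):
          """Maximize fitness over vectors of discrete choices (e.g. one sense index per word).
          hms: harmony memory size, hmcr: memory considering rate, par: pitch adjusting rate."""
          memory = [[random.randrange(n_values) for _ in range(n_vars)] for _ in range(hms)]
          scores = [fitness(h) for h in memory]
          for _ in range(iters):
              new = []
              for j in range(n_vars):
                  if random.random() < hmcr:                # draw the value from harmony memory
                      value = random.choice(memory)[j]
                      if random.random() < par:             # pitch adjustment: small perturbation
                          value = (value + random.choice([-1, 1])) % n_values
                  else:                                     # random consideration
                      value = random.randrange(n_values)
                  new.append(value)
              s = fitness(new)
              worst = min(range(hms), key=lambda i: scores[i])
              if s > scores[worst]:                         # replace the worst stored harmony
                  memory[worst], scores[worst] = new, s
          best = max(range(hms), key=lambda i: scores[i])
          return memory[best], scores[best]

      # toy stand-in fitness: prefer every word to take sense index 2
      best, score = harmony_search(lambda h: sum(1 for v in h if v == 2), n_vars=5, n_values=4)
      print(best, score)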

  8. Intelligent neural network classifier for automatic testing

    NASA Astrophysics Data System (ADS)

    Bai, Baoxing; Yu, Heping

    1996-10-01

    This paper is concerned with the application of a multilayer feedforward neural network to the visual inspection of industrial images, and introduces a high-performance image processing and recognition system that can be used for real-time detection of blemishes, streaks, cracks, etc. on the inner walls of high-accuracy pipes. To take full advantage of the capabilities of artificial neural networks, such as distributed information memory, large-scale self-adapting parallel processing, and high fault tolerance, the system uses a multilayer perceptron as the detector to extract features from the images to be inspected and classify them.

  9. Classifying Bugs is a Tricky Business.

    DTIC Science & Technology

    1983-08-01

    WRITELN(' BAD INPUT. TRY AGAIN'); READ(RAINFALL) END; IF RAINFALL <> 99999 THEN BEGIN TOTAL := TOTAL + RAINFALL; DAYS := DAYS + 1; READ(RAINFALL) END; END ... this last question. READ(RAINFALL); WHILE RAINFALL <> 99999 DO BEGIN WHILE RAINFALL < 0 DO BEGIN WRITELN(' BAD INPUT. TRY AGAIN'); READ(RAINFALL) END

  10. Decision fusion and non-parametric classifiers for land use mapping using multi-temporal RapidEye data

    NASA Astrophysics Data System (ADS)

    Löw, Fabian; Conrad, Christopher; Michel, Ulrich

    2015-10-01

    This study addressed the classification of multi-temporal satellite data from RapidEye by considering different classifier algorithms and decision fusion. Four non-parametric classifier algorithms, decision tree (DT), random forest (RF), support vector machine (SVM), and multilayer perceptron (MLP), were applied to map crop types in various irrigated landscapes in Central Asia. A novel decision fusion strategy to combine the outputs of the classifiers was proposed. This approach is based on randomly selecting subsets of the input dataset and aggregating the probabilistic outputs of the base classifiers with another meta-classifier. During the decision fusion, the reliability of each base classifier algorithm was considered to exclude less reliable inputs on a class basis. The spatial and temporal transferability of the classifiers was evaluated using data sets from four different agricultural landscapes with different spatial extents and from different years. A detailed accuracy assessment showed that none of the stand-alone classifiers was the single best performer. Despite the very good performance of the base classifiers, there was still up to 50% disagreement in the maps produced by the two single best classifiers, RF and SVM. The proposed fusion strategy, however, increased overall accuracies by up to 6%. In addition, it was less sensitive to reduced training set sizes and produced more realistic land use maps with less speckle. The proposed fusion approach was more transferable to data sets from other years, i.e. it resulted in higher accuracies for the investigated classes. The fusion approach is computationally efficient and appears well suited for mapping diverse crop categories based on sensors with a similar high repetition rate and spatial resolution to RapidEye, for instance the upcoming Sentinel-2 mission.
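
    The fusion idea, stacking the probabilistic outputs of several base classifiers and letting a meta-classifier combine them, can be sketched as below; the base learners, toy data and single train/test split are placeholders, and the random-subset resampling and class-wise reliability screening used in the paper are not reproduced.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=600, n_features=10, n_classes=3,
                                 n_informative=6, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

      # base classifiers produce class-probability outputs
      bases = [RandomForestClassifier(random_state=0),
               SVC(probability=True, random_state=0),
               MLPClassifier(max_iter=1000, random_state=0)]
      for b in bases:
          b.fit(X_tr, y_tr)

      def stacked_features(X_in):
          return np.hstack([b.predict_proba(X_in) for b in bases])

      # meta-classifier combines the probabilistic outputs (decision fusion)
      meta = LogisticRegression(max_iter=1000).fit(stacked_features(X_tr), y_tr)
      print("fused accuracy:", meta.score(stacked_features(X_te), y_te))

    In practice the meta-classifier would be trained on held-out or out-of-fold base-classifier outputs, in the spirit of the random-subset scheme described above, to avoid an optimistic fusion model.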

  11. Genetic algorithms

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.

  12. 46 CFR 503.59 - Safeguarding classified information.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Information Security Program § 503.59 Safeguarding classified information. (a) All classified information... security; (2) Takes appropriate steps to protect classified information from unauthorized disclosure or... security check; (2) To protect the classified information in accordance with the provisions of...

  13. 70. PRIMARY MILL AND CLASSIFIER No. 2 FROM NORTHWEST. MILL ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    70. PRIMARY MILL AND CLASSIFIER No. 2 FROM NORTHWEST. MILL DISCHARGED INTO LAUNDER WHICH PIERCED THE SIDE OF THE CLASSIFIER PAN. WOOD LAUNDER WITHIN CLASSIFIER VISIBLE (FILLED WITH DEBRIS). HORIZONTAL WOOD PLANKING BEHIND MILL IS FEED BOX. MILL SOLUTION PIPING RUNS ALONG BASE OF WEST SIDE OF CLASSIFIER. - Bald Mountain Gold Mill, Nevada Gulch at head of False Bottom Creek, Lead, Lawrence County, SD

  14. 49 CFR 1280.6 - Storage of classified documents.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... SECURITY INFORMATION AND CLASSIFIED MATERIAL § 1280.6 Storage of classified documents. All classified... 49 Transportation 9 2010-10-01 2010-10-01 false Storage of classified documents. 1280.6 Section 1280.6 Transportation Other Regulations Relating to Transportation (Continued) SURFACE...

  15. 46 CFR 503.59 - Safeguarding classified information.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Information Security Program § 503.59 Safeguarding classified information. (a) All classified information... security; (2) Takes appropriate steps to protect classified information from unauthorized disclosure or... security check; (2) To protect the classified information in accordance with the provisions of...

  16. 46 CFR 503.59 - Safeguarding classified information.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Information Security Program § 503.59 Safeguarding classified information. (a) All classified information... security; (2) Takes appropriate steps to protect classified information from unauthorized disclosure or... security check; (2) To protect the classified information in accordance with the provisions of...

  17. 36 CFR 1256.46 - National security-classified information.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 36 Parks, Forests, and Public Property 3 2011-07-01 2011-07-01 false National security-classified... Restrictions § 1256.46 National security-classified information. In accordance with 5 U.S.C. 552(b)(1), NARA... properly classified under the provisions of the pertinent Executive Order on Classified National...

  18. 36 CFR 1256.46 - National security-classified information.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 36 Parks, Forests, and Public Property 3 2013-07-01 2012-07-01 true National security-classified... Restrictions § 1256.46 National security-classified information. In accordance with 5 U.S.C. 552(b)(1), NARA... properly classified under the provisions of the pertinent Executive Order on Classified National...

  19. 36 CFR 1256.46 - National security-classified information.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 36 Parks, Forests, and Public Property 3 2012-07-01 2012-07-01 false National security-classified... Restrictions § 1256.46 National security-classified information. In accordance with 5 U.S.C. 552(b)(1), NARA... properly classified under the provisions of the pertinent Executive Order on Classified National...

  20. 5 CFR 1312.23 - Access to classified information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Access to classified information. 1312.23... Classified Information § 1312.23 Access to classified information. Classified information may be made... “need to know” and the access is essential to the accomplishment of official government duties....

  1. Machine Learning Methods for Classifying Human Physical Activity from On-Body Accelerometers

    PubMed Central

    Mannini, Andrea; Sabatini, Angelo Maria

    2010-01-01

    The use of on-body wearable sensors is widespread in several academic and industrial domains. Of great interest are their applications in ambulatory monitoring and pervasive computing systems; here, some quantitative analysis of human motion and its automatic classification are the main computational tasks to be pursued. In this paper, we discuss how human physical activity can be classified using on-body accelerometers, with a major emphasis devoted to the computational algorithms employed for this purpose. In particular, we motivate our current interest in classifiers based on Hidden Markov Models (HMMs). An example is illustrated and discussed by analysing a dataset of accelerometer time series. PMID:22205862

  2. Bi-convex Optimization to Learn Classifiers from Multiple Biomedical Annotations.

    PubMed

    Wang, Xin; Bi, Jinbo

    2016-06-07

    The problem of constructing classifiers from multiple annotators who provide inconsistent training labels is important and occurs in many application domains. Many existing methods focus on the understanding and learning of the crowd behaviors. Several probabilistic algorithms consider the construction of classifiers for specific tasks using consensus of multiple labelers annotations. These methods impose a prior on the consensus and develop an expectation-maximization algorithm based on logistic regression loss. We extend the discussion to the hinge loss commonly used by support vector machines. Our formulations form bi-convex programs that construct classifiers and estimate the reliability of each labeler simultaneously. Each labeler is associated with a reliability parameter, which can be a constant, or class-dependent, or varies for different examples. The hinge loss is modified by replacing the true labels by the weighted combination of labelers' labels with reliabilities as weights. Statistical justification is discussed to motivate the use of linear combination of labels. In parallel to the expectation-maximization algorithm for logistic based methods, efficient alternating algorithms are developed to solve the proposed bi-convex programs. Experimental results on benchmark datasets and three real-world biomedical problems demonstrate that the proposed methods either outperform or are competitive to the state of the art.

  3. Classifying multispectral data by neural networks

    NASA Technical Reports Server (NTRS)

    Telfer, Brian A.; Szu, Harold H.; Kiang, Richard K.

    1993-01-01

    Several energy functions for synthesizing neural networks are tested on 2-D synthetic data and on Landsat-4 Thematic Mapper data. These new energy functions, designed specifically for minimizing misclassification error, in some cases yield significant improvements in classification accuracy over the standard least mean squares energy function. In addition to operating on networks with one output unit per class, a new energy function is tested for binary encoded outputs, which result in smaller network sizes. The Thematic Mapper data (four bands were used) is classified on a single pixel basis, to provide a starting benchmark against which further improvements will be measured. Improvements are underway to make use of both subpixel and superpixel (i.e. contextual or neighborhood) information in the processing. For single pixel classification, the best neural network result is 78.7 percent, compared with 71.7 percent for a classical nearest neighbor classifier. The 78.7 percent result also improves on several earlier neural network results on this data.

  4. Mercury⊕: An evidential reasoning image classifier

    NASA Astrophysics Data System (ADS)

    Peddle, Derek R.

    1995-12-01

    MERCURY⊕ is a multisource evidential reasoning classification software system based on the Dempster-Shafer theory of evidence. The design and implementation of this software package is described for improving the classification and analysis of multisource digital image data necessary for addressing advanced environmental and geoscience applications. In the remote-sensing context, the approach provides a more appropriate framework for classifying modern, multisource, and ancillary data sets which may contain a large number of disparate variables with different statistical properties, scales of measurement, and levels of error which cannot be handled using conventional Bayesian approaches. The software uses a nonparametric, supervised approach to classification, and provides a more objective and flexible interface to the evidential reasoning framework using a frequency-based method for computing support values from training data. The MERCURY⊕ software package has been implemented efficiently in the C programming language, with extensive use made of dynamic memory allocation procedures and compound linked list and hash-table data structures to optimize the storage and retrieval of evidence in a Knowledge Look-up Table. The software is complete with a full user interface and runs under the Unix, Ultrix, VAX/VMS, MS-DOS, and Apple Macintosh operating systems. An example of classifying alpine land cover and permafrost active layer depth in northern Canada is presented to illustrate the use and application of these ideas.
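
    To make the evidential-reasoning machinery concrete, a minimal version of Dempster's rule of combination, the operation at the heart of Dempster-Shafer systems such as this one, is sketched below; the frame of discernment and the mass values are invented for illustration and are not taken from the software.

      from itertools import product

      def dempster_combine(m1, m2):
          """Combine two mass functions given as {frozenset_of_hypotheses: mass}."""
          combined, conflict = {}, 0.0
          for (a, wa), (b, wb) in product(m1.items(), m2.items()):
              inter = a & b
              if inter:
                  combined[inter] = combined.get(inter, 0.0) + wa * wb
              else:
                  conflict += wa * wb                                    # mass falling on the empty set
          return {k: v / (1.0 - conflict) for k, v in combined.items()}  # renormalize

      # two hypothetical sources of evidence over land-cover classes {water, forest, tundra}
      m_spectral = {frozenset({"water"}): 0.6, frozenset({"water", "forest"}): 0.3,
                    frozenset({"water", "forest", "tundra"}): 0.1}
      m_ancillary = {frozenset({"forest"}): 0.5, frozenset({"water", "forest"}): 0.5}

      print(dempster_combine(m_spectral, m_ancillary))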

  5. Rotational Study of Ambiguous Taxonomic Classified Asteroids

    NASA Astrophysics Data System (ADS)

    Linder, Tyler R.; Sanchez, Rick; Wuerker, Wolfgang; Clayson, Timothy; Giles, Tucker

    2017-01-01

    The Sloan Digital Sky Survey (SDSS) moving object catalog (MOC4) provided the largest ever catalog of asteroid spectrophotometry observations. Carvano et al. (2010), while analyzing MOC4, discovered that individual observations of asteroids which were observed multiple times did not classify into the same photometric-based taxonomic class. A small subset of those asteroids was classified as having both the presence and absence of a 1um silicate absorption feature. If these variations are linked to differences in surface mineralogy, the prevailing assumption that an asteroid’s surface composition is predominantly homogeneous would need to be reexamined. Furthermore, our understanding of the evolution of the asteroid belt, as well as the linkage between certain asteroids and meteorite types, may need to be modified. This research is an investigation to determine the rotational rates of these taxonomically ambiguous asteroids. Initial questions to be answered: Do these asteroids have unique or nonstandard rotational rates? Is there any evidence in their light curves to suggest an abnormality? Observations were taken using PROMPT6, a 0.41-m telescope that is part of the SKYNET network at Cerro Tololo Inter-American Observatory (CTIO). Observations were calibrated and analyzed using Canopus software. Initial results will be presented at AAS.

  6. A Framework for Classifying Decision Support Systems

    PubMed Central

    Sim, Ida; Berlin, Amy

    2003-01-01

    Background Computer-based clinical decision support systems (CDSSs) vary greatly in design and function. A taxonomy for classifying CDSS structure and function would help efforts to describe and understand the variety of CDSSs in the literature, and to explore predictors of CDSS effectiveness and generalizability. Objective To define and test a taxonomy for characterizing the contextual, technical, and workflow features of CDSSs. Methods We retrieved and analyzed 150 English language articles published between 1975 and 2002 that described computer systems designed to assist physicians and/or patients with clinical decision making. We identified aspects of CDSS structure or function and iterated our taxonomy until additional article reviews did not result in any new descriptors or taxonomic modifications. Results Our taxonomy comprises 95 descriptors along 24 descriptive axes. These axes are in 5 categories: Context, Knowledge and Data Source, Decision Support, Information Delivery, and Workflow. The axes had an average of 3.96 coded choices each. 75% of the descriptors had an inter-rater agreement kappa of greater than 0.6. Conclusions We have defined and tested a comprehensive, multi-faceted taxonomy of CDSSs that shows promising reliability for classifying CDSSs reported in the literature. PMID:14728243

  7. Using Statistical Techniques and Web Search to Correct ESL Errors

    ERIC Educational Resources Information Center

    Gamon, Michael; Leacock, Claudia; Brockett, Chris; Dolan, William B.; Gao, Jianfeng; Belenko, Dmitriy; Klementiev, Alexandre

    2009-01-01

    In this paper we present a system for automatic correction of errors made by learners of English. The system has two novel aspects. First, machine-learned classifiers trained on large amounts of native data and a very large language model are combined to optimize the precision of suggested corrections. Second, the user can access real-life web…

  8. SU-E-T-304: Dosimetric Comparison of Cavernous Sinus Tumors: Heterogeneity Corrected Pencil Beam (PB-Hete) Vs. X-Ray Voxel Monte Carlo (XVMC) Algorithms for Stereotactic Radiotherapy (SRT)

    SciTech Connect

    Pokhrel, D; Sood, S; Badkul, R; Jiang, H; Saleh, H; Wang, F

    2015-06-15

    Purpose: To compare dose distributions calculated using PB-hete vs. XVMC algorithms for SRT treatments of cavernous sinus tumors. Methods: Using PB-hete SRT, five patients with cavernous sinus tumors received the prescription dose of 25 Gy in 5 fractions for planning target volume PTV(V100%)=95%. Gross tumor volume (GTV) and organs at risk (OARs) were delineated on T1/T2 MRI-CT-fused images. PTV (range 2.1–84.3cc, mean=21.7cc) was generated using a 5mm uniform-margin around GTV. PB-hete SRT plans included a combination of non-coplanar conformal arcs/static beams delivered by Novalis-TX consisting of HD-MLCs and a 6MV-SRS(1000 MU/min) beam. Plans were re-optimized using XVMC algorithm with identical beam geometry and MLC positions. Comparison of plan specific PTV(V99%), maximal, mean, isocenter doses, and total monitor units(MUs) were evaluated. Maximal dose to OARs such as brainstem, optic-pathway, spinal cord, and lenses as well as normal tissue volume receiving 12Gy(V12) were compared between two algorithms. All analysis was performed using two-tailed paired t-tests of an upper-bound p-value of <0.05. Results: Using either algorithm, no dosimetrically significant differences in PTV coverage (PTVV99%,maximal, mean, isocenter doses) and total number of MUs were observed (all p-values >0.05, mean ratios within 2%). However, maximal doses to optic-chiasm and nerves were significantly under-predicted using PB-hete (p=0.04). Maximal brainstem, spinal cord, lens dose and V12 were all comparable between two algorithms, with exception of one patient with the largest PTV who exhibited 11% higher V12 with XVMC. Conclusion: Unlike lung tumors, XVMC and PB-hete treatment plans provided similar PTV coverage for cavernous sinus tumors. Majority of OARs doses were comparable between two algorithms, except for small structures such as optic chiasm/nerves which could potentially receive higher doses when using XVMC algorithm. Special attention may need to be paid on a case

  9. A convolutional neural network neutrino event classifier

    DOE PAGES

    Aurisano, A.; Radovic, A.; Rocco, D.; ...

    2016-09-01

    Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  10. A convolutional neural network neutrino event classifier

    SciTech Connect

    Aurisano, A.; Radovic, A.; Rocco, D.; Himmel, A.; Messier, M. D.; Niner, E.; Pawloski, G.; Psihas, F.; Sousa, A.; Vahle, P.

    2016-09-01

    Here, convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  11. A convolutional neural network neutrino event classifier

    NASA Astrophysics Data System (ADS)

    Aurisano, A.; Radovic, A.; Rocco, D.; Himmel, A.; Messier, M. D.; Niner, E.; Pawloski, G.; Psihas, F.; Sousa, A.; Vahle, P.

    2016-09-01

    Convolutional neural networks (CNNs) have been widely applied in the computer vision community to solve complex problems in image recognition and analysis. We describe an application of the CNN technology to the problem of identifying particle interactions in sampling calorimeters used commonly in high energy physics and high energy neutrino physics in particular. Following a discussion of the core concepts of CNNs and recent innovations in CNN architectures related to the field of deep learning, we outline a specific application to the NOvA neutrino detector. This algorithm, CVN (Convolutional Visual Network) identifies neutrino interactions based on their topology without the need for detailed reconstruction and outperforms algorithms currently in use by the NOvA collaboration.

  12. Classifier-based latency estimation: a novel way to estimate and predict BCI accuracy

    NASA Astrophysics Data System (ADS)

    Thompson, David E.; Warschausky, Seth; Huggins, Jane E.

    2013-02-01

    Objective. Brain-computer interfaces (BCIs) that detect event-related potentials (ERPs) rely on classification schemes that are vulnerable to latency jitter, a phenomenon known to occur with ERPs such as the P300 response. The objective of this work was to investigate the role that latency jitter plays in BCI classification. Approach. We developed a novel method, classifier-based latency estimation (CBLE), based on a generalization of Woody filtering. The technique works by presenting the time-shifted data to the classifier, and using the time shift that corresponds to the maximal classifier score. Main results. The variance of CBLE estimates correlates significantly (p < 10^-42) with BCI accuracy in the Farwell-Donchin BCI paradigm. Additionally, CBLE predicts same-day accuracy, even from small datasets or datasets that have already been used for classifier training, better than the accuracy on the small dataset (p < 0.05). The technique should be relatively classifier-independent, and the results were confirmed on two linear classifiers. Significance. The results suggest that latency jitter may be an important cause of poor BCI performance, and methods that correct for latency jitter may improve that performance. CBLE can also be used to decrease the amount of data needed for accuracy estimation, allowing research on effects with shorter timescales.
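
    The core of CBLE, as described above, is to score several time-shifted copies of each epoch with the trained classifier and take the shift that maximizes the score; the variance of those per-trial shifts then serves as a jitter index. A minimal sketch of that loop follows; the array shapes, the circular shift, and the shift range are illustrative assumptions, not the authors' implementation.

        import numpy as np

        def cble_latency(epoch, classifier_score, max_shift=25):
            """Return the time shift (in samples) that maximizes the classifier score.

            epoch            : (n_samples, n_channels) single-trial ERP epoch
            classifier_score : callable mapping an epoch to a scalar classifier score
            """
            shifts = list(range(-max_shift, max_shift + 1))
            scores = [classifier_score(np.roll(epoch, s, axis=0)) for s in shifts]
            return shifts[int(np.argmax(scores))]

        # Per the abstract, a high variance of these per-trial estimates predicts poor BCI accuracy:
        # latencies = [cble_latency(e, score_fn) for e in epochs]
        # jitter_index = np.var(latencies)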

  13. Using naive Bayes classifier for classification of convective rainfall intensities based on spectral characteristics retrieved from SEVIRI

    NASA Astrophysics Data System (ADS)

    Hameg, Slimane; Lazri, Mourad; Ameur, Soltane

    2016-07-01

    This paper presents a new algorithm to classify convective clouds and determine their intensity, based on cloud physical properties retrieved from the Spinning Enhanced Visible and Infrared Imager (SEVIRI). Convective rainfall events at 15 min, 4 × 5 km spatial resolution from 2006 to 2012 are analysed over northern Algeria. The convective rain classification methodology makes use of the relationship between cloud spectral characteristics and cloud physical properties such as cloud water path (CWP), cloud phase (CP) and cloud top height (CTH). For this classification, a statistical method based on a naive Bayes classifier is applied. This is a simple probabilistic classifier based on applying Bayes' theorem with strong (naive) independence assumptions. For a 9-month period, the ability of SEVIRI to classify the rainfall intensity in convective clouds is evaluated using weather radar over northern Algeria. The results indicate an encouraging performance of the new algorithm for intensity differentiation of convective clouds using SEVIRI data.
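
    As a concrete illustration of the classification step, the sketch below trains a Gaussian naive Bayes model on the three cloud properties named above (CWP, CP, CTH). The toy feature values and intensity labels are invented for illustration and are not the study's data.

        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        # Toy features: [cloud water path, cloud phase (0 = water, 1 = ice), cloud top height (km)]
        X = np.array([[120.0, 0, 4.5],
                      [310.0, 1, 9.8],
                      [450.0, 1, 12.1],
                      [ 80.0, 0, 3.2]])
        y = np.array(["light", "moderate", "heavy", "light"])  # radar-derived intensity labels

        model = GaussianNB().fit(X, y)
        print(model.predict([[400.0, 1, 11.0]]))  # expected to fall in a heavier rain class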

  14. A machine learned classifier for RR Lyrae in the VVV survey

    NASA Astrophysics Data System (ADS)

    Elorrieta, Felipe; Eyheramendy, Susana; Jordán, Andrés; Dékány, István; Catelan, Márcio; Angeloni, Rodolfo; Alonso-García, Javier; Contreras-Ramos, Rodrigo; Gran, Felipe; Hajdu, Gergely; Espinoza, Néstor; Saito, Roberto K.; Minniti, Dante

    2016-11-01

    Variable stars of RR Lyrae type are a prime tool with which to obtain distances to old stellar populations in the Milky Way. One of the main aims of the Vista Variables in the Via Lactea (VVV) near-infrared survey is to use them to map the structure of the Galactic Bulge. Owing to the large number of expected sources, this requires an automated mechanism for selecting RR Lyrae, and particularly those of the more easily recognized type ab (i.e., fundamental-mode pulsators), from the 10^6-10^7 variables expected in the VVV survey area. In this work we describe a supervised machine-learned classifier constructed for assigning a score to a Ks-band VVV light curve that indicates its likelihood of being an ab-type RR Lyrae. We describe the key steps in the construction of the classifier, which were the choice of features, training set, selection of aperture, and family of classifiers. We find that the AdaBoost family of classifiers consistently gives the best performance for our problem, and we obtain a classifier based on the AdaBoost algorithm that achieves a harmonic mean between the false positive and false negative rates of ≈7% for typical VVV light-curve sets. This performance is estimated using cross-validation and through comparison to two independent datasets that were classified by human experts.
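
    A minimal sketch of the classifier family mentioned above, an AdaBoost model evaluated by cross-validation, is shown below. The placeholder light-curve features, labels, and scoring choice are assumptions made purely for illustration, not the VVV pipeline itself.

        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 12))    # placeholder light-curve features (period, amplitude, ...)
        y = rng.integers(0, 2, size=200)  # 1 = ab-type RR Lyrae, 0 = other variable

        clf = AdaBoostClassifier(n_estimators=200, random_state=0)
        scores = cross_val_score(clf, X, y, cv=5, scoring="f1")  # F1: harmonic mean of precision and recall
        print(scores.mean())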

  15. 77 FR 2435 - Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-18

    ...- Free Treatment Under the Generalized System of Preferences and for Other Purposes Correction In... following correction: On page 407, the date following the proclamation number should read ``December...

  16. 78 FR 2193 - Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-10

    ... United States-Panama Trade Promotion Agreement and for Other Purposes Correction In Presidential document... correction: On page 66507, the proclamation identification heading on line one should read...

  17. Classification accuracy of algorithms for blood chemistry data for three aquaculture-affected marine fish species.

    PubMed

    Coz-Rakovac, R; Topic Popovic, N; Smuc, T; Strunjak-Perovic, I; Jadan, M

    2009-11-01

    The objective of this study was determination and discrimination of biochemical data among three aquaculture-affected marine fish species (sea bass, Dicentrarchus labrax; sea bream, Sparus aurata L., and mullet, Mugil spp.) based on machine-learning methods. The approach relying on machine-learning methods gives more usable classification solutions and provides better insight into the collected data. So far, these new methods have been applied to the problem of discrimination of blood chemistry data with respect to season and feed of a single species. This is the first time these classification algorithms have been used as a framework for rapid differentiation among three fish species. Among the machine-learning methods used, decision trees provided the clearest model, which correctly classified 210 samples or 85.71%, and incorrectly classified 35 samples or 14.29% and clearly identified three investigated species from their biochemical traits.
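
    The decision-tree model credited above with correctly classifying 210 of 245 samples (85.71%) can be sketched as follows; the synthetic blood-chemistry features and species labels below stand in for the study's measurements and are not its data.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        X = rng.normal(size=(245, 10))  # 245 samples, 10 blood-chemistry variables (glucose, urea, ...)
        y = rng.choice(["sea bass", "sea bream", "mullet"], size=245)

        tree = DecisionTreeClassifier(max_depth=4, random_state=0)
        print(cross_val_score(tree, X, y, cv=5).mean())  # fraction of samples correctly classified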

  18. Just-in-time adaptive classifiers-part II: designing the classifier.

    PubMed

    Alippi, Cesare; Roveri, Manuel

    2008-12-01

    Aging effects, environmental changes, thermal drifts, and soft and hard faults affect physical systems by changing their nature and behavior over time. To cope with such process evolution, adaptive solutions must be envisaged to track its dynamics; in this direction, adaptive classifiers are generally designed by assuming the stationarity hypothesis for the process generating the data, with very few results addressing nonstationary environments. This paper proposes a methodology based on k-nearest neighbor (k-NN) classifiers for designing adaptive classification systems able to react to changing conditions just-in-time (JIT), i.e., exactly when it is needed. k-NN classifiers have been selected for their computation-free training phase, the possibility of easily estimating the model complexity k, and the ability to keep the computational complexity of the classifier under control through suitable data reduction mechanisms. A JIT classifier requires a temporal detection of a (possible) process deviation (an aspect tackled in a companion paper) followed by an adaptive management of the knowledge base (KB) of the classifier to cope with the process change. The novelty of the proposed approach resides in the general framework supporting the real-time update of the KB of the classification system in response to novel information coming from the process, both in stationary conditions (accuracy improvement) and in nonstationary ones (process tracking), and in providing a suitable estimate of k. It is shown that the classification system grants consistency once the change brings the process generating the data into a new stationary state, as is the case in many real applications.
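
    A minimal sketch of the JIT idea described above follows: a k-NN classifier whose knowledge base is either extended with new supervised samples (stationary case) or rebuilt when a process change is detected. The class layout, and the external change detector it presumes, are illustrative assumptions rather than the paper's design.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        class JITKNN:
            def __init__(self, k=5):
                self.k = k
                self.X = None
                self.y = None

            def update_kb(self, X_new, y_new, process_changed=False):
                """Extend the knowledge base, or rebuild it after a detected process change."""
                if process_changed or self.X is None:
                    self.X, self.y = X_new, y_new
                else:
                    self.X = np.vstack([self.X, X_new])
                    self.y = np.concatenate([self.y, y_new])
                self.model = KNeighborsClassifier(n_neighbors=self.k).fit(self.X, self.y)

            def predict(self, X):
                return self.model.predict(X)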

  19. Classifying prion and prion-like phenomena.

    PubMed

    Harbi, Djamel; Harrison, Paul M

    2014-01-01

    The universe of prion and prion-like phenomena has expanded significantly in the past several years. Here, we overview the challenges in classifying this data informatically, given that terms such as "prion-like", "prion-related" or "prion-forming" do not have a stable meaning in the scientific literature. We examine the spectrum of proteins that have been described in the literature as forming prions, and discuss how "prion" can have a range of meaning, with a strict definition being for demonstration of infection with in vitro-derived recombinant prions. We suggest that although prion/prion-like phenomena can largely be apportioned into a small number of broad groups dependent on the type of transmissibility evidence for them, as new phenomena are discovered in the coming years, a detailed ontological approach might be necessary that allows for subtle definition of different "flavors" of prion / prion-like phenomena.

  20. [Ne V] Emission in Optically Classified Starbursts

    NASA Astrophysics Data System (ADS)

    Abel, N. P.; Satyapal, S.

    2008-05-01

    Detecting active galactic nuclei (AGNs) in galaxies dominated by powerful nuclear star formation and extinction effects poses a unique challenge. Due to their longer wavelength emission and the ionization potential of Ne⁴⁺, infrared [Ne V] emission lines are thought to be excellent AGN diagnostics. However, stellar evolution models predict that Wolf-Rayet stars in young stellar clusters emit significant numbers of photons capable of creating Ne⁴⁺. Recent observations of [Ne V] emission in optically classified starburst galaxies require us to investigate whether [Ne V] can arise from star formation activity and not an AGN. In this work, we calculate the optical and IR spectrum of gas exposed to a young starburst and an AGN SED. We find: (1) a range of parameters where [Ne V] emission can be explained solely by star formation and (2) a range of relative AGN to starburst luminosities that reproduces the [Ne V] observations, yet leaves the optical spectrum looking like a starburst. We also find that infrared emission-line diagnostics are much more sensitive to AGNs than optical diagnostics, particularly for weak AGNs. We apply our model to the optically classified, yet [Ne V] emitting, starburst galaxy NGC 3621. We find that, when taking the infrared and optical spectrum into account, ~30%-50% of the galaxy's total luminosity is due to an AGN. Our calculations show that [Ne V] emission is almost always the result of AGN activity. The models presented in this work can be used to determine the AGN contribution to a galaxy's power output.

  1. NOSS Altimeter Detailed Algorithm specifications

    NASA Technical Reports Server (NTRS)

    Hancock, D. W.; Mcmillan, J. D.

    1982-01-01

    The details of the algorithms and data sets required for satellite radar altimeter data processing are documented in a form suitable for (1) development of the benchmark software and (2) coding of the operational software. The algorithms reported in detail are those established for altimeter processing. Algorithms that required additional development before they could be documented for production are only scoped. The algorithms are divided into two levels of processing. The first level converts the data to engineering units and applies corrections for instrument variations. The second level provides geophysical measurements derived from altimeter parameters for oceanographic users.

  2. Evaluation and analysis of Seasat-A Scanning multichannel Microwave radiometer (SMMR) Antenna Pattern Correction (APC) algorithm. Sub-task 2: T sub B measured vs. T sub B calculated comparison results

    NASA Technical Reports Server (NTRS)

    Kitzis, J. L.; Kitzis, S. N.

    1979-01-01

    Interim Antenna Pattern Correction (APC) brightness temperature measurements for all ten SMMR channels are compared with calculated values generated from surface truth data. Plots and associated statistics are presented for the available points of coincidence between SMMR and surface truth measurements acquired for the Gulf of Alaska SEASAT Experiment. The most important conclusions of the study deal with the apparent existence of different instrument biases for each SMMR channel, and their variation across the scan.

  3. Beam-hardening correction by a surface fitting and phase classification by a least square support vector machine approach for tomography images of geological samples

    NASA Astrophysics Data System (ADS)

    Khan, F.; Enzmann, F.; Kersten, M.

    2015-12-01

    In X-ray computed microtomography (μXCT), image processing is the most important operation prior to image analysis. Such processing mainly involves artefact reduction and image segmentation. We propose a new two-stage post-reconstruction procedure for an image of a geological rock core obtained by polychromatic cone-beam μXCT technology. In the first stage, the beam hardening (BH) is removed by applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. The final BH-corrected image is extracted from the residual data, i.e., the difference between the surface elevation values and the original grey-scale values. For the second stage, we propose using a least squares support vector machine (a non-linear classifier algorithm) to segment the BH-corrected data as a pixel-based multi-classification task. A combination of the two approaches was used to classify a complex multi-mineral rock sample. The Matlab code for this approach is provided in the Appendix. A minor drawback is that the proposed segmentation algorithm may become computationally demanding in the case of a high dimensional training data set.
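
    The first stage described above (fitting a quadratic surface to a reconstructed slice and keeping the residual) can be sketched in a few lines. The paper's own implementation is in Matlab, so the Python version below, with its assumed array shapes, only illustrates the idea; the second-stage segmentation is omitted.

        import numpy as np

        def remove_beam_hardening(slice_img):
            """Fit z = a + b*x + c*y + d*x^2 + e*y^2 + f*x*y by least squares and subtract it."""
            h, w = slice_img.shape
            yy, xx = np.mgrid[0:h, 0:w]
            x = xx.ravel().astype(float)
            y = yy.ravel().astype(float)
            z = slice_img.ravel().astype(float)
            A = np.column_stack([np.ones_like(x), x, y, x**2, y**2, x * y])
            coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
            surface = (A @ coeffs).reshape(h, w)
            return slice_img - surface  # residual slice with the BH trend removed

        corrected = remove_beam_hardening(np.random.rand(256, 256))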

  4. Classifiers as a model-free group comparison test.

    PubMed

    Kim, Bommae; Oertzen, Timo von

    2017-04-03

    The conventional statistical methods to detect group differences assume correct model specification, including the origin of the difference. Researchers should be able to identify a source of group differences and choose a corresponding method. In this paper, we propose a new approach to group comparison without model specification, using classification algorithms from machine learning. In this approach, the classification accuracy is evaluated against a binomial distribution using Independent Validation. As an application example, we examined false-positive errors and the statistical power of support vector machines (SVMs) to detect group differences in comparison to conventional statistical tests such as the t test, Levene's test, the K-S test, Fisher's z-transformation, and MANOVA. The SVMs detected group differences regardless of their origins (mean, variance, distribution shape, and covariance), and showed comparably consistent power across conditions. When a group difference originated from a single source, the statistical power of SVMs was lower than that of the most appropriate conventional test for the study condition; however, the power of SVMs increased when differences originated from multiple sources. Moreover, SVMs showed substantially improved performance with more variables than with fewer variables. Most importantly, SVMs were applicable to any type of data without sophisticated model specification. This study demonstrates a new application of classification algorithms as an alternative or complement to conventional group comparison tests. With the proposed approach, researchers can test two-sample data even when they are not certain which statistical test to use or when the data violate the statistical assumptions of conventional methods.
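
    The test logic described above (score a classifier on held-out data and ask whether its accuracy exceeds chance under a binomial null) can be sketched as below. The particular validation split, classifier settings, and synthetic data are assumptions for illustration and are not the paper's Independent Validation procedure.

        import numpy as np
        from scipy.stats import binom
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(2)
        X = np.vstack([rng.normal(0.0, 1.0, (100, 5)), rng.normal(0.5, 1.0, (100, 5))])
        y = np.array([0] * 100 + [1] * 100)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0, stratify=y)
        n_correct = int((SVC().fit(X_tr, y_tr).predict(X_te) == y_te).sum())

        # Probability of at least n_correct successes out of len(y_te) trials under chance (p = 0.5).
        p_value = binom.sf(n_correct - 1, len(y_te), 0.5)
        print(n_correct, len(y_te), p_value)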

  5. Design of a multi-signature ensemble classifier predicting neuroblastoma patients' outcome

    PubMed Central

    2012-01-01

    Background Neuroblastoma is the most common pediatric solid tumor of the sympathetic nervous system. Development of improved predictive tools for patient stratification is a crucial requirement for neuroblastoma therapy. Several studies utilized gene expression-based signatures to stratify neuroblastoma patients and demonstrated a clear advantage of adding genomic analysis to risk assessment. There is little overlap among signatures, and merging their prognostic potential would be advantageous. Here, we describe a new strategy to merge published neuroblastoma-related gene signatures into a single, highly accurate, Multi-Signature Ensemble (MuSE) classifier of neuroblastoma (NB) patient outcome. Methods Gene expression profiles of 182 neuroblastoma tumors, subdivided into three independent datasets, were used in the various phases of development and validation of the NB-MuSE-classifier. Thirty-three signatures were evaluated for patient outcome prediction using 22 classification algorithms each, generating 726 classifiers and prediction results. The best-performing algorithm for each signature was selected and validated on an independent dataset, and the 20 signatures performing with an accuracy >= 80% were retained. Results We combined the 20 predictions associated with the corresponding signatures, through the selection of the best-performing algorithm, into a single outcome predictor. The best performance was obtained by the Decision Table algorithm, which produced the NB-MuSE-classifier characterized by an external validation accuracy of 94%. Kaplan-Meier curves and the log-rank test demonstrated that patients with good and poor outcome predictions by the NB-MuSE-classifier have significantly different survival (p < 0.0001). Survival curves constructed on subgroups of patients divided on the basis of known prognostic markers suggested an excellent stratification of localized and stage 4s tumors, but more data are needed to prove this point. Conclusions The

  6. Design of radial basis function neural network classifier realized with the aid of data preprocessing techniques: design and analysis

    NASA Astrophysics Data System (ADS)

    Oh, Sung-Kwun; Kim, Wook-Dong; Pedrycz, Witold

    2016-05-01

    In this paper, we introduce a new architecture of optimized Radial Basis Function neural network classifier developed with the aid of fuzzy clustering and data preprocessing techniques and discuss its comprehensive design methodology. In the preprocessing part, the Linear Discriminant Analysis (LDA) or Principal Component Analysis (PCA) algorithm forms a front end of the network. The transformed data produced here are used as the inputs of the network. In the premise part, the Fuzzy C-Means (FCM) algorithm determines the receptive field associated with the condition part of the rules. The connection weights of the classifier are of functional nature and come as polynomial functions forming the consequent part. The Particle Swarm Optimization algorithm optimizes a number of essential parameters needed to improve the accuracy of the classifier. Those optimized parameters include the type of data preprocessing, the dimensionality of the feature vectors produced by the LDA (or PCA), the number of clusters (rules), the fuzzification coefficient used in the FCM algorithm and the orders of the polynomials of networks. The performance of the proposed classifier is reported for several benchmarking data-sets and is compared with the performance of other classifiers reported in the previous studies.

  7. 3D face recognition based on the hierarchical score-level fusion classifiers

    NASA Astrophysics Data System (ADS)

    Mráček, Štěpán.; Váša, Jan; Lankašová, Karolína; Drahanský, Martin; Doležel, Michal

    2014-05-01

    This paper describes a 3D face recognition algorithm that is based on hierarchical score-level fusion classifiers. In a simple (unimodal) biometric pipeline, the feature vector is extracted from the input data and subsequently compared with the template stored in the database. In our approach, we utilize several feature extraction algorithms. We use 6 different image representations of the input 3D face data. Moreover, we apply Gabor and Gauss-Laguerre filter banks to the input image data, which yields 12 resulting feature vectors. Each representation is compared with the corresponding counterpart from the biometric database. We also add recognition based on iso-geodesic curves. The final score-level fusion is performed on 13 comparison scores using a Support Vector Machine (SVM) classifier.

  8. Building Ultra-Low False Alarm Rate Support Vector Classifier Ensembles Using Random Subspaces

    SciTech Connect

    Chen, B Y; Lemmond, T D; Hanley, W G

    2008-10-06

    This paper presents the Cost-Sensitive Random Subspace Support Vector Classifier (CS-RS-SVC), a new learning algorithm that combines random subspace sampling and bagging with Cost-Sensitive Support Vector Classifiers to more effectively address detection applications burdened by unequal misclassification requirements. When compared to its conventional, non-cost-sensitive counterpart on a two-class signal detection application, random subspace sampling is shown to very effectively leverage the additional flexibility offered by the Cost-Sensitive Support Vector Classifier, yielding a more than four-fold increase in the detection rate at a false alarm rate (FAR) of zero. Moreover, the CS-RS-SVC is shown to be fairly robust to constraints on the feature subspace dimensionality, enabling reductions in computation time of up to 82% with minimal performance degradation.
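
    A minimal sketch of combining random feature subspaces with cost-sensitive SVMs, in the spirit of the CS-RS-SVC described above, is given below. The subspace size, the class-weight ratio used to penalize false alarms, and the simple vote-averaging rule are illustrative assumptions, not the paper's exact configuration (which also incorporates bagging).

        import numpy as np
        from sklearn.svm import SVC

        def train_rs_cs_svc(X, y, n_members=25, subspace_dim=8, fa_cost=10.0, seed=0):
            """Train an ensemble of cost-sensitive SVCs, each on a random feature subspace."""
            rng = np.random.default_rng(seed)
            ensemble = []
            for _ in range(n_members):
                feats = rng.choice(X.shape[1], size=subspace_dim, replace=False)
                # A heavier penalty on the background class (label 0) pushes false alarms down.
                clf = SVC(kernel="rbf", class_weight={0: fa_cost, 1: 1.0}).fit(X[:, feats], y)
                ensemble.append((feats, clf))
            return ensemble

        def predict_rs_cs_svc(ensemble, X):
            votes = np.mean([clf.predict(X[:, feats]) for feats, clf in ensemble], axis=0)
            return (votes > 0.5).astype(int)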

  9. A decision support system using combined-classifier for high-speed data stream in smart grid

    NASA Astrophysics Data System (ADS)

    Yang, Hang; Li, Peng; He, Zhian; Guo, Xiaobin; Fong, Simon; Chen, Huajun

    2016-11-01

    Large volumes of high-speed streaming data are generated continuously by large power grids. In order to detect and avoid power grid failures, decision support systems (DSSs) are commonly adopted in power grid enterprises. Among decision-making algorithms, the incremental decision tree is the most widely used one. In this paper, we propose a combined classifier that is a composite of a cache-based classifier (CBC) and a main tree classifier (MTC). We integrate this classifier into a stream processing engine on top of the DSS such that high-speed streaming data can be transformed into operational intelligence efficiently. Experimental results show that our proposed classifier returns more accurate answers than other existing ones.

  10. Research on classified real-time flood forecasting framework based on K-means cluster and rough set.

    PubMed

    Xu, Wei; Peng, Yong

    2015-01-01

    This research presents a new classified real-time flood forecasting framework. In this framework, historical floods are classified by a K-means cluster according to the spatial and temporal distribution of precipitation, the time variance of precipitation intensity and other hydrological factors. Based on the classified results, a rough set is used to extract the identification rules for real-time flood forecasting. Then, the parameters of different categories within the conceptual hydrological model are calibrated using a genetic algorithm. In real-time forecasting, the corresponding category of parameters is selected for flood forecasting according to the obtained flood information. This research tests the new classified framework on Guanyinge Reservoir and compares the framework with the traditional flood forecasting method. It finds that the performance of the new classified framework is significantly better in terms of accuracy. Furthermore, the framework can be considered in a catchment with fewer historical floods.
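
    The classification step described above (grouping historical floods and attaching one calibrated parameter set per group) can be sketched with a plain K-means step. The feature set, cluster count, and placeholder parameter sets below are assumptions for illustration; the rough-set rule extraction and genetic-algorithm calibration are omitted.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(3)
        # Placeholder descriptors per historical flood: rainfall distribution, intensity variance, ...
        flood_features = rng.normal(size=(60, 6))

        kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(flood_features)
        calibrated_params = {c: f"hydrological-model parameter set for category {c}" for c in range(3)}

        # At forecast time, route the incoming flood to its category and use that category's parameters.
        category = int(kmeans.predict(rng.normal(size=(1, 6)))[0])
        print(calibrated_params[category])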

  11. TPX correction coil studies

    SciTech Connect

    Hanson, J.D.

    1994-11-03

    Error correction coils are planned for the TPX (Tokamak Plasma Experiment) in order to avoid error field induced locked modes and disruption. The FT (Fix Tokamak) code is used to evaluate the ability of these correction coils to remove islands caused by symmetry breaking magnetic field errors. The proposed correction coils are capable of correcting a variety of error fields.

  12. Classifier models and architectures for EEG-based neonatal seizure detection.

    PubMed

    Greene, B R; Marnane, W P; Lightbody, G; Reilly, R B; Boylan, G B

    2008-10-01

    Neonatal seizures are the most common neurological emergency in the neonatal period and are associated with a poor long-term outcome. Early detection and treatment may improve prognosis. This paper aims to develop an optimal set of parameters and a comprehensive scheme for patient-independent multi-channel EEG-based neonatal seizure detection. We employed a dataset containing 411 neonatal seizures. The dataset consists of multi-channel EEG recordings with a mean duration of 14.8 h from 17 neonatal patients. Early-integration and late-integration classifier architectures were considered for the combination of information across EEG channels. Three classifier models based on linear discriminants, quadratic discriminants and regularized discriminants were employed. Furthermore, the effect of electrode montage was considered. The best performing seizure detection system was found to be an early-integration configuration employing a regularized discriminant classifier model. A referential EEG montage was found to outperform the more standard bipolar electrode montage for automated neonatal seizure detection. A cross-validation estimate of the classifier performance for the best performing system yielded 81.03% of seizures correctly detected with a false detection rate of 3.82%. With post-processing, the false detection rate was reduced to 1.30% with 59.49% of seizures correctly detected. These results represent a comprehensive illustration that robust, reliable, patient-independent neonatal seizure detection is possible using multi-channel EEG.
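
    The early-integration configuration singled out above concatenates per-channel features into a single vector before one discriminant classifier is applied. The sketch below illustrates that layout, with a shrinkage-regularized LDA standing in for the regularized discriminant model; the feature extraction, data, and dimensions are placeholders, not the study's.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(4)
        n_epochs, n_channels, n_features = 500, 8, 12
        features = rng.normal(size=(n_epochs, n_channels, n_features))
        labels = rng.integers(0, 2, size=n_epochs)  # 1 = seizure epoch, 0 = non-seizure

        # Early integration: concatenate channel features before training a single classifier.
        X = features.reshape(n_epochs, n_channels * n_features)
        clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto").fit(X, labels)
        print(clf.score(X, labels))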

  13. Comparison of two classifier training methodologies for underwater mine detection/classification

    NASA Astrophysics Data System (ADS)

    Bello, Martin G.

    2001-10-01

    We describe here the current form of Alphatech's image processing and neural network based algorithms for the detection and classification of mines in side-scan sonar imagery, and results obtained from their application to three distinct databases. In particular, we contrast results obtained from a currently employed 'baseline' multilayer perceptron classifier training approach with those obtained using a state-of-the-art commercial neural network package, NeuralSIM, developed by Neuralware, Inc.

  14. Evolutionary optimization of radial basis function classifiers for data mining applications.

    PubMed

    Buchtala, Oliver; Klimek, Manuel; Sick, Bernhard

    2005-10-01

    In many data mining applications that address classification problems, feature and model selection are considered as key tasks. That is, appropriate input features of the classifier must be selected from a given (and often large) set of possible features and structure parameters of the classifier must be adapted with respect to these features and a given data set. This paper describes an evolutionary algorithm (EA) that performs feature and model selection simultaneously for radial basis function (RBF) classifiers. In order to reduce the optimization effort, various techniques are integrated that accelerate and improve the EA significantly: hybrid training of RBF networks, lazy evaluation, consideration of soft constraints by means of penalty terms, and temperature-based adaptive control of the EA. The feasibility and the benefits of the approach are demonstrated by means of four data mining problems: intrusion detection in computer networks, biometric signature verification, customer acquisition with direct marketing methods, and optimization of chemical production processes. It is shown that, compared to earlier EA-based RBF optimization techniques, the runtime is reduced by up to 99% while error rates are lowered by up to 86%, depending on the application. The algorithm is independent of specific applications so that many ideas and solutions can be transferred to other classifier paradigms.

  15. Ensemble of One-Class Classifiers for Personal Risk Detection Based on Wearable Sensor Data

    PubMed Central

    Rodríguez, Jorge; Barrera-Animas, Ari Y.; Trejo, Luis A.; Medina-Pérez, Miguel Angel; Monroy, Raúl

    2016-01-01

    This study introduces the One-Class K-means with Randomly-projected features Algorithm (OCKRA). OCKRA is an ensemble of one-class classifiers built over multiple projections of a dataset according to random feature subsets. Algorithms found in the literature spread over a wide range of applications where ensembles of one-class classifiers have been satisfactorily applied; however, none is oriented to the area under our study: personal risk detection. OCKRA has been designed with the aim of improving the detection performance in the problem posed by the Personal RIsk DEtection (PRIDE) dataset. PRIDE was built based on 23 test subjects, where the data for each user were captured using a set of sensors embedded in a wearable band. The performance of OCKRA was compared against a support vector machine and three versions of the Parzen window classifier. On average, experimental results show that OCKRA outperformed the other classifiers by at least 0.53% in terms of the area under the curve (AUC). In addition, OCKRA achieved an AUC above 90% for more than 57% of the users. PMID:27690054

  16. Ensemble of One-Class Classifiers for Personal Risk Detection Based on Wearable Sensor Data.

    PubMed

    Rodríguez, Jorge; Barrera-Animas, Ari Y; Trejo, Luis A; Medina-Pérez, Miguel Angel; Monroy, Raúl

    2016-09-29

    This study introduces the One-Class K-means with Randomly-projected features Algorithm (OCKRA). OCKRA is an ensemble of one-class classifiers built over multiple projections of a dataset according to random feature subsets. Algorithms found in the literature spread over a wide range of applications where ensembles of one-class classifiers have been satisfactorily applied; however, none is oriented to the area under our study: personal risk detection. OCKRA has been designed with the aim of improving the detection performance in the problem posed by the Personal RIsk DEtection (PRIDE) dataset. PRIDE was built based on 23 test subjects, where the data for each user were captured using a set of sensors embedded in a wearable band. The performance of OCKRA was compared against a support vector machine and three versions of the Parzen window classifier. On average, experimental results show that OCKRA outperformed the other classifiers by at least 0.53% in terms of the area under the curve (AUC). In addition, OCKRA achieved an AUC above 90% for more than 57% of the users.
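
    A minimal sketch of the OCKRA idea summarized in the two records above follows: several one-class detectors, each built on a random feature subset of normal-behaviour data, whose anomaly scores are averaged. The use of K-means centroid distances as the per-member score and all parameter values are simplified assumptions rather than the published algorithm.

        import numpy as np
        from sklearn.cluster import KMeans

        def fit_ockra_like(X_normal, n_members=20, subset_frac=0.5, n_clusters=5, seed=0):
            """Build one-class members, each a K-means model on a random feature subset."""
            rng = np.random.default_rng(seed)
            members = []
            n_feats = max(1, int(subset_frac * X_normal.shape[1]))
            for _ in range(n_members):
                feats = rng.choice(X_normal.shape[1], size=n_feats, replace=False)
                km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X_normal[:, feats])
                members.append((feats, km))
            return members

        def anomaly_score(members, x):
            """Higher mean distance to the nearest centroid suggests a possible risk event."""
            dists = [km.transform(x[feats].reshape(1, -1)).min() for feats, km in members]
            return float(np.mean(dists))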

  17. An Optimal Class Association Rule Algorithm

    NASA Astrophysics Data System (ADS)

    Jean Claude, Turiho; Sheng, Yang; Chuang, Li; Kaia, Xie

    Classification and association rule mining are two important aspects of data mining. Class association rule mining is a promising approach because it uses association rule mining to discover classification rules. This paper introduces an optimal class association rule mining algorithm known as OCARA. It uses an optimal association rule mining algorithm, and the rule set is sorted by rule priority, resulting in a more accurate classifier. Experimental results on eight UCI data sets show that OCARA outperforms C4.5, CBA, and RMR.

  18. Quantum Algorithms

    NASA Technical Reports Server (NTRS)

    Abrams, D.; Williams, C.

    1999-01-01

    This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases for which all known classical algorithms require exponential time.

  19. Hierarchical Wireless Multimedia Sensor Networks for Collaborative Hybrid Semi-Supervised Classifier Learning

    PubMed Central

    Wang, Xue; Wang, Sheng; Bi, Daowei; Ding, Liang

    2007-01-01

    Wireless multimedia sensor networks (WMSN) have recently emerged as one of the most important technologies, driven by their powerful multimedia signal acquisition and processing abilities. Target classification is an important research issue addressed in WMSN, with strict requirements on robustness, speed and accuracy. This paper proposes a collaborative semi-supervised classifier learning algorithm to achieve continual online learning for support vector machine (SVM) based robust target classification. The proposed algorithm incrementally carries out the semi-supervised classifier learning process in a hierarchical WMSN, with the collaboration of multiple sensor nodes in a hybrid computing paradigm. To decrease energy consumption and improve performance, several metrics are introduced to evaluate the effectiveness of the samples in specific sensor nodes, and a sensor node selection strategy is also proposed to reduce the impact of inevitable missed detections and false detections. With ant optimization routing, the learning process is implemented with the selected sensor nodes, which decreases energy consumption. Experimental results demonstrate that the collaborative hybrid semi-supervised classifier learning algorithm can effectively implement target classification in hierarchical WMSN. It has outstanding performance in terms of energy efficiency and time cost, which verifies the effectiveness of the sensor node selection and ant optimization routing.

  20. Hologram production and representation for corrected image

    NASA Astrophysics Data System (ADS)

    Jiao, Gui Chao; Zhang, Rui; Su, Xue Mei

    2015-12-01

    In this paper, a CCD sensor is used to record distorted images of a homemade grid taken by a wide-angle camera. The distorted images are corrected using position calibration and grey-level correction methods implemented with VC++ 6.0 and OpenCV. Holograms of the corrected pictures are then produced. Clearly reproduced images are obtained by applying the Fresnel algorithm in the reconstruction and removing the object and reference light terms of the Fresnel diffraction, which deletes the zero-order part of the reproduced images. The investigation is useful in optical information processing and image encryption transmission.