Sample records for good classification performance

  1. Performance of Four Frailty Classifications in Older Patients With Cancer: Prospective Elderly Cancer Patients Cohort Study.

    PubMed

    Ferrat, Emilie; Paillaud, Elena; Caillet, Philippe; Laurent, Marie; Tournigand, Christophe; Lagrange, Jean-Léon; Droz, Jean-Pierre; Balducci, Lodovico; Audureau, Etienne; Canouï-Poitrine, Florence; Bastuji-Garin, Sylvie

    2017-03-01

    Purpose Frailty classifications of older patients with cancer have been developed to assist physicians in selecting cancer treatments and geriatric interventions. They have not been compared, and their performance in predicting outcomes has not been assessed. Our objectives were to assess agreement among four classifications and to compare their predictive performance in a large cohort of in- and outpatients with various cancers. Patients and Methods We prospectively included 1,021 patients age 70 years or older who had solid or hematologic malignancies and underwent a geriatric assessment in one of two French teaching hospitals between 2007 and 2012. Among them, 763 were assessed using four classifications: Balducci, International Society of Geriatric Oncology (SIOG) 1, SIOG2, and a latent class typology. Agreement was assessed using the κ statistic. Outcomes were 1-year mortality and 6-month unscheduled admissions. Results All four classifications had good discrimination for 1-year mortality (C-index ≥ 0.70); discrimination was best with SIOG1. For 6-month unscheduled admissions, discrimination was good with all four classifications (C-index ≥ 0.70). For classification into three (fit, vulnerable, or frail) or two categories (fit v vulnerable or frail and fit or vulnerable v frail), agreement among the four classifications ranged from very poor (κ ≤ 0.20) to good (0.60 < κ ≤ 0.80). Agreement was best between SIOG1 and the latent class typology and between SIOG1 and Balducci. Conclusion These four frailty classifications have good prognostic performance among older in- and outpatients with various cancers. They may prove useful in decision making about cancer treatments and geriatric interventions and/or in stratifying older patients with cancer in clinical trials.
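    As a side note, the C-index (discrimination) quoted in this abstract can be illustrated with a naive concordance computation. The risk scores and event times below are invented, and censoring is ignored; this is not the study's data or code.

```python
# Toy concordance (C-index) computation: fraction of comparable pairs in
# which the patient with the higher risk score has the earlier event.
def c_index(risk, event_time):
    concordant = comparable = 0
    n = len(risk)
    for i in range(n):
        for j in range(i + 1, n):
            if event_time[i] == event_time[j]:
                continue  # tied times are not comparable in this toy version
            comparable += 1
            hi = i if risk[i] > risk[j] else j
            lo = j if hi == i else i
            if event_time[hi] < event_time[lo]:
                concordant += 1
    return concordant / comparable

# invented scores: risk decreases while time-to-event increases
c = c_index([0.9, 0.7, 0.4, 0.2], [2, 5, 9, 12])
```

A C-index of 0.5 corresponds to chance-level discrimination; the abstract's threshold of 0.70 is a common bar for "good" discrimination.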

  2. Research on Classification of Chinese Text Data Based on SVM

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Yu, Hongzhi; Wan, Fucheng; Xu, Tao

    2017-09-01

    Data mining has important application value in today's industry and academia, and text classification is a core technology within it. At present, many mature algorithms exist for text classification: KNN, NB, AB, SVM, decision trees, and other methods all show good classification performance. The Support Vector Machine (SVM) is a well-regarded classifier in machine learning research. This paper studies the classification performance of the SVM method on Chinese text data, applies support vector machines to classify Chinese text, and aims to combine academic research with practical application.
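    The SVM text-classification setup this abstract describes can be sketched with scikit-learn. This is an illustrative stand-in, not the paper's code: the corpus and labels are invented, and English tokens stand in for segmented Chinese text.

```python
# Sketch: TF-IDF features + linear SVM for text classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy corpus standing in for pre-segmented Chinese documents (labels invented).
docs = ["stock market rises", "team wins the match",
        "shares fall sharply", "coach praises players"]
labels = ["finance", "sports", "finance", "sports"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(docs, labels)
pred = clf.predict(["market shares rise"])[0]
```

For real Chinese text, a word segmenter (or character n-grams via the vectorizer's `analyzer` option) would be needed before TF-IDF, since Chinese has no whitespace token boundaries.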

  3. Describing Peripancreatic Collections According to the Revised Atlanta Classification of Acute Pancreatitis: An International Interobserver Agreement Study.

    PubMed

    Bouwense, Stefan A; van Brunschot, Sandra; van Santvoort, Hjalmar C; Besselink, Marc G; Bollen, Thomas L; Bakker, Olaf J; Banks, Peter A; Boermeester, Marja A; Cappendijk, Vincent C; Carter, Ross; Charnley, Richard; van Eijck, Casper H; Freeny, Patrick C; Hermans, John J; Hough, David M; Johnson, Colin D; Laméris, Johan S; Lerch, Markus M; Mayerle, Julia; Mortele, Koenraad J; Sarr, Michael G; Stedman, Brian; Vege, Santhi Swaroop; Werner, Jens; Dijkgraaf, Marcel G; Gooszen, Hein G; Horvath, Karen D

    2017-08-01

    Severe acute pancreatitis is associated with peripancreatic morphologic changes as seen on imaging. Uniform communication regarding these morphologic findings is crucial for accurate diagnosis and treatment. For the original 1992 Atlanta classification, interobserver agreement is poor. We hypothesized that for the revised Atlanta classification, interobserver agreement will be better. An international, interobserver agreement study was performed among expert and nonexpert radiologists (n = 14), surgeons (n = 15), and gastroenterologists (n = 8). Representative computed tomographies of all stages of acute pancreatitis were selected from 55 patients and were assessed according to the revised Atlanta classification. The interobserver agreement was calculated among all reviewers and subgroups, that is, expert and nonexpert reviewers; interobserver agreement was defined as poor (≤0.20), fair (0.21-0.40), moderate (0.41-0.60), good (0.61-0.80), or very good (0.81-1.00). Interobserver agreement among all reviewers was good (0.75 [standard deviation, 0.21]) for describing the type of acute pancreatitis and good (0.62 [standard deviation, 0.19]) for the type of peripancreatic collection. Expert radiologists showed the best and nonexpert clinicians the lowest interobserver agreement. Interobserver agreement was good for the revised Atlanta classification, supporting the importance for widespread adaption of this revised classification for clinical and research communications.
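    The agreement bands quoted here can be illustrated with Cohen's kappa for two raters. The labels below are toy data, not the study's; only the band cutoffs mirror the abstract.

```python
# Cohen's kappa for two raters on invented categorical readings.
from sklearn.metrics import cohen_kappa_score

rater_a = ["necrosis", "edema", "necrosis", "edema", "necrosis", "edema"]
rater_b = ["necrosis", "edema", "necrosis", "necrosis", "necrosis", "edema"]
kappa = cohen_kappa_score(rater_a, rater_b)

def band(k):
    # Band cutoffs as quoted in the abstract.
    if k <= 0.20:
        return "poor"
    if k <= 0.40:
        return "fair"
    if k <= 0.60:
        return "moderate"
    if k <= 0.80:
        return "good"
    return "very good"
```

Here the raters agree on 5 of 6 cases, which after correcting for chance agreement gives kappa = 2/3, i.e. "good" on the abstract's scale.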

  4. Impact of the revised International Prognostic Scoring System, cytogenetics and monosomal karyotype on outcome after allogeneic stem cell transplantation for myelodysplastic syndromes and secondary acute myeloid leukemia evolving from myelodysplastic syndromes: a retrospective multicenter study of the European Society of Blood and Marrow Transplantation

    PubMed Central

    Koenecke, Christian; Göhring, Gudrun; de Wreede, Liesbeth C.; van Biezen, Anja; Scheid, Christof; Volin, Liisa; Maertens, Johan; Finke, Jürgen; Schaap, Nicolaas; Robin, Marie; Passweg, Jakob; Cornelissen, Jan; Beelen, Dietrich; Heuser, Michael; de Witte, Theo; Kröger, Nicolaus

    2015-01-01

    The aim of this study was to determine the impact of the revised 5-group International Prognostic Scoring System cytogenetic classification on outcome after allogeneic stem cell transplantation in patients with myelodysplastic syndromes or secondary acute myeloid leukemia who were reported to the European Society for Blood and Marrow Transplantation database. A total of 903 patients had sufficient cytogenetic information available at stem cell transplantation to be classified according to the 5-group classification. Poor and very poor risk according to this classification was an independent predictor of shorter relapse-free survival (hazard ratio 1.40 and 2.14), overall survival (hazard ratio 1.38 and 2.14), and significantly higher cumulative incidence of relapse (hazard ratio 1.64 and 2.76), compared to patients with very good, good or intermediate risk. When comparing the predictive performance of a series of Cox models both for relapse-free survival and for overall survival, a model with simplified 5-group cytogenetics (merging very good, good and intermediate cytogenetics) performed best. Furthermore, monosomal karyotype is an additional negative predictor for outcome within patients of the poor, but not the very poor risk group of the 5-group classification. The revised International Prognostic Scoring System cytogenetic classification allows patients with myelodysplastic syndromes to be separated into three groups with clearly different outcomes after stem cell transplantation. Poor and very poor risk cytogenetics were strong predictors of poor patient outcome. The new cytogenetic classification added value to prediction of patient outcome compared to prediction models using only traditional risk factors or the 3-group International Prognostic Scoring System cytogenetic classification. PMID:25552702

  5. Conjugate-Gradient Neural Networks in Classification of Multisource and Very-High-Dimensional Remote Sensing Data

    NASA Technical Reports Server (NTRS)

    Benediktsson, J. A.; Swain, P. H.; Ersoy, O. K.

    1993-01-01

    Application of neural networks to classification of remote sensing data is discussed. Conventional two-layer backpropagation is found to give good results in classification of remote sensing data but is not efficient in training. A more efficient variant, based on conjugate-gradient optimization, is used for classification of multisource remote sensing and geographic data and very-high-dimensional data. The conjugate-gradient neural networks give excellent performance in classification of multisource data, but do not compare as well with statistical methods in classification of very-high-dimensional data.

  6. Semi-supervised vibration-based classification and condition monitoring of compressors

    NASA Astrophysics Data System (ADS)

    Potočnik, Primož; Govekar, Edvard

    2017-09-01

    Semi-supervised vibration-based classification and condition monitoring of the reciprocating compressors installed in refrigeration appliances is proposed in this paper. The method addresses the problem of industrial condition monitoring where prior class definitions are often not available or difficult to obtain from local experts. The proposed method combines feature extraction, principal component analysis, and statistical analysis for the extraction of initial class representatives, and compares the capability of various classification methods, including discriminant analysis (DA), neural networks (NN), support vector machines (SVM), and extreme learning machines (ELM). The use of the method is demonstrated on a case study which was based on industrially acquired vibration measurements of reciprocating compressors during the production of refrigeration appliances. The paper presents a comparative qualitative analysis of the applied classifiers, confirming the good performance of several nonlinear classifiers. If the model parameters are properly selected, then very good classification performance can be obtained from NN trained by Bayesian regularization, SVM and ELM classifiers. The method can be effectively applied for the industrial condition monitoring of compressors.
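    The feature-extraction front end described above (statistical features followed by principal component analysis) can be sketched as below. The signals and feature choices are invented for illustration, not taken from the paper.

```python
# Sketch: per-record statistical features from raw vibration signals,
# followed by PCA dimensionality reduction.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
signals = rng.normal(size=(30, 1024))  # 30 toy vibration records

# Simple per-record statistical features: RMS, peak, fourth central moment.
rms = np.sqrt((signals ** 2).mean(axis=1))
peak = np.abs(signals).max(axis=1)
m4 = ((signals - signals.mean(axis=1, keepdims=True)) ** 4).mean(axis=1)
features = np.column_stack([rms, peak, m4])

reduced = PCA(n_components=2).fit_transform(features)
```

In the semi-supervised setting the abstract describes, the PCA scores would then feed the statistical analysis that extracts initial class representatives before any classifier is trained.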

  7. Fast Gaussian kernel learning for classification tasks based on specially structured global optimization.

    PubMed

    Zhong, Shangping; Chen, Tianshun; He, Fengying; Niu, Yuzhen

    2014-09-01

    For a practical pattern classification task solved by kernel methods, the computing time is mainly spent on kernel learning (or training). However, the current kernel learning approaches are based on local optimization techniques, and hard to have good time performances, especially for large datasets. Thus the existing algorithms cannot be easily extended to large-scale tasks. In this paper, we present a fast Gaussian kernel learning method by solving a specially structured global optimization (SSGO) problem. We optimize the Gaussian kernel function by using the formulated kernel target alignment criterion, which is a difference of increasing (d.i.) functions. Through using a power-transformation based convexification method, the objective criterion can be represented as a difference of convex (d.c.) functions with a fixed power-transformation parameter. And the objective programming problem can then be converted to a SSGO problem: globally minimizing a concave function over a convex set. The SSGO problem is classical and has good solvability. Thus, to find the global optimal solution efficiently, we can adopt the improved Hoffman's outer approximation method, which need not repeat the searching procedure with different starting points to locate the best local minimum. Also, the proposed method can be proven to converge to the global solution for any classification task. We evaluate the proposed method on twenty benchmark datasets, and compare it with four other Gaussian kernel learning methods. Experimental results show that the proposed method stably achieves both good time-efficiency performance and good classification performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
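    The kernel target alignment criterion the paper optimizes can be illustrated as the cosine similarity between a Gaussian kernel matrix and the ideal label kernel yyᵀ. The data, function names, and bandwidths below are illustrative assumptions, not the paper's formulation.

```python
# Sketch: Gaussian kernel matrix and its alignment with the label kernel.
import numpy as np

def gaussian_kernel(X, sigma):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def alignment(K, y):
    Y = np.outer(y, y)  # ideal kernel from labels in {-1, +1}
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

X = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 3.0], [3.1, 3.0]])
y = np.array([-1, -1, 1, 1])
a_narrow = alignment(gaussian_kernel(X, 0.5), y)   # bandwidth resolves classes
a_wide = alignment(gaussian_kernel(X, 50.0), y)    # bandwidth blurs classes
```

A well-chosen bandwidth makes within-class similarities high and between-class similarities low, so the narrow-bandwidth kernel aligns much better with the labels; the paper's contribution is globally optimizing this criterion rather than relying on local search.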

  8. Learning to Classify with Possible Sensor Failures

    DTIC Science & Technology

    2014-05-04

    SVMs), have demonstrated good classification performance when the training data is representative of the test data [1, 2, 3]. However, in many real..."Detection of people and animals using non-imaging sensors," Information Fusion (FUSION), 2011 Proceedings of the 14th International Conference on, pp...classification methods in terms of both classification accuracy and anomaly detection rate

  9. A decision support model for investment on P2P lending platform.

    PubMed

    Zeng, Xiangxiang; Liu, Li; Leung, Stephen; Du, Jiangze; Wang, Xun; Li, Tao

    2017-01-01

    Peer-to-peer (P2P) lending, as a novel economic lending model, has triggered new challenges on making effective investment decisions. In a P2P lending platform, one lender can invest N loans and a loan may be accepted by M investors, thus forming a bipartite graph. Basing on the bipartite graph model, we built an iteration computation model to evaluate the unknown loans. To validate the proposed model, we perform extensive experiments on real-world data from the largest American P2P lending marketplace-Prosper. By comparing our experimental results with those obtained by Bayes and Logistic Regression, we show that our computation model can help borrowers select good loans and help lenders make good investment decisions. Experimental results also show that the Logistic classification model is a good complement to our iterative computation model, which motivates us to integrate the two classification models. The experimental results of the hybrid classification model demonstrate that the logistic classification model and our iteration computation model are complementary to each other. We conclude that the hybrid model (i.e., the integration of iterative computation model and Logistic classification model) is more efficient and stable than the individual model alone.
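    The lender-loan bipartite structure lends itself to a mutually reinforcing score iteration. The HITS-style update below is a hedged sketch of that idea; the paper's actual iteration rule is not reproduced here, and the adjacency data is invented.

```python
# Sketch: alternating score propagation on a lender-loan bipartite graph.
import numpy as np

# A[i, j] = 1 if lender i invested in loan j (toy data).
A = np.array([[1, 1, 0],
              [1, 0, 0],
              [0, 1, 1]], dtype=float)

lender = np.ones(A.shape[0])
for _ in range(50):
    loan = A.T @ lender              # loans backed by good lenders score higher
    loan /= np.linalg.norm(loan)
    lender = A @ loan                # lenders holding good loans score higher
    lender /= np.linalg.norm(lender)
```

The iteration converges to the dominant singular directions of A; here loan 1, backed by two lenders who also hold other well-backed loans, ends up ranked highest.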

  10. A decision support model for investment on P2P lending platform

    PubMed Central

    Liu, Li; Leung, Stephen; Du, Jiangze; Wang, Xun; Li, Tao

    2017-01-01

    Peer-to-peer (P2P) lending, as a novel economic lending model, has triggered new challenges on making effective investment decisions. In a P2P lending platform, one lender can invest N loans and a loan may be accepted by M investors, thus forming a bipartite graph. Basing on the bipartite graph model, we built an iteration computation model to evaluate the unknown loans. To validate the proposed model, we perform extensive experiments on real-world data from the largest American P2P lending marketplace—Prosper. By comparing our experimental results with those obtained by Bayes and Logistic Regression, we show that our computation model can help borrowers select good loans and help lenders make good investment decisions. Experimental results also show that the Logistic classification model is a good complement to our iterative computation model, which motivates us to integrate the two classification models. The experimental results of the hybrid classification model demonstrate that the logistic classification model and our iteration computation model are complementary to each other. We conclude that the hybrid model (i.e., the integration of iterative computation model and Logistic classification model) is more efficient and stable than the individual model alone. PMID:28877234

  11. Machine Learning Algorithms for Automatic Classification of Marmoset Vocalizations

    PubMed Central

    Ribeiro, Sidarta; Pereira, Danillo R.; Papa, João P.; de Albuquerque, Victor Hugo C.

    2016-01-01

    Automatic classification of vocalization type could potentially become a useful tool for the acoustic monitoring of captive colonies of highly vocal primates. However, for classification to be useful in practice, a reliable algorithm that can be successfully trained on small datasets is necessary. In this work, we consider seven different classification algorithms with the goal of finding a robust classifier that can be successfully trained on small datasets. We found good classification performance (accuracy > 0.83 and F1-score > 0.84) using the Optimum Path Forest classifier. Dataset and algorithms are made publicly available. PMID:27654941
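    The two metrics reported here (accuracy and F1-score) can be computed as below. The call labels are made up for illustration and are not the paper's data.

```python
# Sketch: accuracy and macro-averaged F1 on invented marmoset call labels.
from sklearn.metrics import accuracy_score, f1_score

true = ["phee", "trill", "phee", "twitter", "phee", "trill"]
pred = ["phee", "trill", "phee", "phee", "phee", "trill"]
acc = accuracy_score(true, pred)
f1 = f1_score(true, pred, average="macro")
```

Macro averaging weighs each call type equally, so a class that is never predicted (here "twitter") pulls the F1-score well below the raw accuracy.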

  12. A classification system for characterization of physical and non-physical work factors.

    PubMed

    Genaidy, A; Karwowski, W; Succop, P; Kwon, Y G; Alhemoud, A; Goyal, D

    2000-01-01

    A comprehensive evaluation of work-related performance factors is a prerequisite to developing integrated and long-term solutions to workplace performance improvement. This paper describes a work-factor classification system that categorizes the entire domain of workplace factors impacting performance. A questionnaire-based instrument was developed to implement this classification system in industry. Fifty jobs were evaluated in 4 different service and manufacturing companies using the proposed questionnaire-based instrument. The reliability coefficients obtained from the analyzed jobs were considered good (0.589 to 0.862). In general, the physical work factors resulted in higher reliability coefficients (0.847 to 0.862) than non-physical work factors (0.589 to 0.768).

  13. Integration of heterogeneous features for remote sensing scene classification

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Xiong, Xingnan; Ning, Chen; Shi, Aiye; Lv, Guofang

    2018-01-01

    Scene classification is one of the most important issues in remote sensing (RS) image processing. We find that features from different channels (shape, spectral, texture, etc.), levels (low-level and middle-level), or perspectives (local and global) could provide various properties for RS images, and then propose a heterogeneous feature framework to extract and integrate heterogeneous features with different types for RS scene classification. The proposed method is composed of three modules (1) heterogeneous features extraction, where three heterogeneous feature types, called DS-SURF-LLC, mean-Std-LLC, and MS-CLBP, are calculated, (2) heterogeneous features fusion, where the multiple kernel learning (MKL) is utilized to integrate the heterogeneous features, and (3) an MKL support vector machine classifier for RS scene classification. The proposed method is extensively evaluated on three challenging benchmark datasets (a 6-class dataset, a 12-class dataset, and a 21-class dataset), and the experimental results show that the proposed method leads to good classification performance. It produces good informative features to describe the RS image scenes. Moreover, the integration of heterogeneous features outperforms some state-of-the-art features on RS scene classification tasks.

  14. Marker-Based Hierarchical Segmentation and Classification Approach for Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.; Benediktsson, Jon Atli; Chanussot, Jocelyn

    2011-01-01

    The Hierarchical SEGmentation (HSEG) algorithm, which is a combination of hierarchical step-wise optimization and spectral clustering, has given good performances for hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. First, pixelwise classification is performed and the most reliably classified pixels are selected as markers, with the corresponding class labels. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. The experimental results show that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for hyperspectral image analysis.

  15. Classification capacity of a modular neural network implementing neurally inspired architecture and training rules.

    PubMed

    Poirazi, Panayiota; Neocleous, Costas; Pattichis, Costantinos S; Schizas, Christos N

    2004-05-01

    A three-layer neural network (NN) with novel adaptive architecture has been developed. The hidden layer of the network consists of slabs of single neuron models, where neurons within a slab--but not between slabs--have the same type of activation function. The network activation functions in all three layers have adaptable parameters. The network was trained using a biologically inspired, guided-annealing learning rule on a variety of medical data. Good training/testing classification performance was obtained on all data sets tested. The performance achieved was comparable to that of SVM classifiers. It was shown that the adaptive network architecture, inspired from the modular organization often encountered in the mammalian cerebral cortex, can benefit classification performance.

  16. Use of the color trails test as an embedded measure of performance validity.

    PubMed

    Henry, George K; Algina, James

    2013-01-01

    One hundred personal injury litigants and disability claimants referred for a forensic neuropsychological evaluation were administered both portions of the Color Trails Test (CTT) as part of a more comprehensive battery of standardized tests. Subjects who failed two or more free-standing tests of cognitive performance validity formed the Failed Performance Validity (FPV) group, while subjects who passed all free-standing performance validity measures were assigned to the Passed Performance Validity (PPV) group. A cutscore of ≥45 seconds to complete Color Trails 1 (CT1) was associated with a classification accuracy of 78%, good sensitivity (66%) and high specificity (90%), while a cutscore of ≥84 seconds to complete Color Trails 2 (CT2) was associated with a classification accuracy of 82%, good sensitivity (74%) and high specificity (90%). A CT1 cutscore of ≥58 seconds, and a CT2 cutscore ≥100 seconds was associated with 100% positive predictive power at base rates from 20 to 50%.
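    The way a completion-time cutscore yields the reported sensitivity, specificity, and classification accuracy can be sketched as follows. The times and group labels below are invented; only the CT1 cutscore of ≥45 seconds comes from the abstract.

```python
# Sketch: applying a completion-time cutscore and scoring its accuracy.
import numpy as np

# 1 = failed performance validity (FPV), 0 = passed (PPV); hypothetical data.
ct1_seconds = np.array([50, 62, 40, 39, 30, 33, 28, 48, 90, 35])
group = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 0])

flagged = ct1_seconds >= 45  # cutscore from the abstract
tp = np.sum(flagged & (group == 1))
tn = np.sum(~flagged & (group == 0))
sensitivity = tp / np.sum(group == 1)
specificity = tn / np.sum(group == 0)
accuracy = (tp + tn) / len(group)
```

Raising the cutscore (e.g. to the ≥58-second value quoted for 100% positive predictive power) trades sensitivity for fewer false positives among valid performers.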

  17. Unresolved Galaxy Classifier for ESA/Gaia mission: Support Vector Machines approach

    NASA Astrophysics Data System (ADS)

    Bellas-Velidis, Ioannis; Kontizas, Mary; Dapergolas, Anastasios; Livanou, Evdokia; Kontizas, Evangelos; Karampelas, Antonios

    A software package Unresolved Galaxy Classifier (UGC) is being developed for the ground-based pipeline of ESA's Gaia mission. It aims to provide an automated taxonomic classification and specific parameters estimation analyzing Gaia BP/RP instrument low-dispersion spectra of unresolved galaxies. The UGC algorithm is based on a supervised learning technique, the Support Vector Machines (SVM). The software is implemented in Java as two separate modules. An offline learning module provides functions for SVM-models training. Once trained, the set of models can be repeatedly applied to unknown galaxy spectra by the pipeline's application module. A library of galaxy models synthetic spectra, simulated for the BP/RP instrument, is used to train and test the modules. Science tests show a very good classification performance of UGC and relatively good regression performance, except for some of the parameters. Possible approaches to improve the performance are discussed.

  18. Comprehensive decision tree models in bioinformatics.

    PubMed

    Stiglic, Gregor; Kocbek, Simon; Pernek, Igor; Kokol, Peter

    2012-01-01

    Classification is an important and widely used machine learning technique in bioinformatics. Researchers and other end-users of machine learning software often prefer to work with comprehensible models where knowledge extraction and explanation of reasoning behind the classification model are possible. This paper presents an extension to an existing machine learning environment and a study on visual tuning of decision tree classifiers. The motivation for this research comes from the need to build effective and easily interpretable decision tree models by a so-called one-button data mining approach where no parameter tuning is needed. To avoid bias in classification, no classification performance measure is used during the tuning of the model, which is constrained exclusively by the dimensions of the produced decision tree. The proposed visual tuning of decision trees was evaluated on 40 datasets containing classical machine learning problems and 31 datasets from the field of bioinformatics. Although we did not expect significant differences in classification performance, the results demonstrate a significant increase of accuracy in less complex visually tuned decision trees. In contrast to classical machine learning benchmarking datasets, we observe higher accuracy gains in bioinformatics datasets. Additionally, a user study was carried out to confirm the assumption that the tree tuning times are significantly lower for the proposed method in comparison to manual tuning of the decision tree. The empirical results demonstrate that by building simple models constrained by predefined visual boundaries, one not only achieves good comprehensibility, but also very good classification performance that does not differ from usually more complex models built using default settings of the classical decision tree algorithm.
    In addition, our study demonstrates the suitability of visually tuned decision trees for datasets with binary class attributes and a high number of possibly redundant attributes that are very common in bioinformatics.
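    The core idea of constraining a tree by its dimensions rather than by a performance measure can be sketched with scikit-learn (a stand-in for the authors' environment; the dataset and size limits below are illustrative).

```python
# Sketch: cap a decision tree's depth and leaf count ("visual" constraints)
# instead of tuning on a performance measure, then compare to defaults.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

small_tree = DecisionTreeClassifier(max_depth=3, max_leaf_nodes=8,
                                    random_state=0)
default_tree = DecisionTreeClassifier(random_state=0)

acc_small = cross_val_score(small_tree, X, y, cv=5).mean()
acc_default = cross_val_score(default_tree, X, y, cv=5).mean()
```

The constrained tree stays small enough to read as a diagram while often matching the fully grown tree's cross-validated accuracy, which mirrors the paper's finding.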

  19. Comprehensive Decision Tree Models in Bioinformatics

    PubMed Central

    Stiglic, Gregor; Kocbek, Simon; Pernek, Igor; Kokol, Peter

    2012-01-01

    Purpose Classification is an important and widely used machine learning technique in bioinformatics. Researchers and other end-users of machine learning software often prefer to work with comprehensible models where knowledge extraction and explanation of reasoning behind the classification model are possible. Methods This paper presents an extension to an existing machine learning environment and a study on visual tuning of decision tree classifiers. The motivation for this research comes from the need to build effective and easily interpretable decision tree models by a so-called one-button data mining approach where no parameter tuning is needed. To avoid bias in classification, no classification performance measure is used during the tuning of the model, which is constrained exclusively by the dimensions of the produced decision tree. Results The proposed visual tuning of decision trees was evaluated on 40 datasets containing classical machine learning problems and 31 datasets from the field of bioinformatics. Although we did not expect significant differences in classification performance, the results demonstrate a significant increase of accuracy in less complex visually tuned decision trees. In contrast to classical machine learning benchmarking datasets, we observe higher accuracy gains in bioinformatics datasets. Additionally, a user study was carried out to confirm the assumption that the tree tuning times are significantly lower for the proposed method in comparison to manual tuning of the decision tree. Conclusions The empirical results demonstrate that by building simple models constrained by predefined visual boundaries, one not only achieves good comprehensibility, but also very good classification performance that does not differ from usually more complex models built using default settings of the classical decision tree algorithm.
    In addition, our study demonstrates the suitability of visually tuned decision trees for datasets with binary class attributes and a high number of possibly redundant attributes that are very common in bioinformatics. PMID:22479449

  20. Recursive feature selection with significant variables of support vectors.

    PubMed

    Tsai, Chen-An; Huang, Chien-Hsun; Chang, Ching-Wei; Chen, Chun-Houh

    2012-01-01

    The development of DNA microarrays allows researchers to screen thousands of genes simultaneously and helps determine high- and low-expression-level genes in normal and diseased tissues. Selecting relevant genes for cancer classification is an important issue. Most gene selection methods use univariate ranking criteria and arbitrarily choose a threshold to select genes. However, the parameter setting may not be compatible with the selected classification algorithms. In this paper, we propose a new gene selection method (SVM-t) based on the use of t-statistics embedded in support vector machines. We compared its performance to two similar SVM-based methods: SVM recursive feature elimination (SVMRFE) and recursive support vector machine (RSVM). The three methods were compared based on extensive simulation experiments and analyses of two published microarray datasets. In the simulation experiments, we found that the proposed method is more robust in selecting informative genes than SVMRFE and RSVM and capable of attaining good classification performance when the variations of informative and noninformative genes are different. In the analysis of two microarray datasets, the proposed method yields better performance in identifying fewer genes with good prediction accuracy, compared to SVMRFE and RSVM.
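    The SVM-RFE baseline mentioned above can be sketched with scikit-learn's RFE wrapper around a linear SVM, on synthetic data with a few informative "genes" (this illustrates the baseline technique, not the paper's SVM-t method).

```python
# Sketch: recursive feature elimination driven by linear-SVM weights.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

# 50 features, of which only the first 5 are informative (shuffle=False).
X, y = make_classification(n_samples=100, n_features=50, n_informative=5,
                           n_redundant=0, shuffle=False, random_state=0)

selector = RFE(LinearSVC(dual=False, max_iter=5000), n_features_to_select=5)
selector.fit(X, y)
picked = [i for i, kept in enumerate(selector.support_) if kept]
```

At each elimination round, the features with the smallest absolute SVM weights are dropped and the model is refit, which is the recursive scheme the compared methods share.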

  1. Reduction from cost-sensitive ordinal ranking to weighted binary classification.

    PubMed

    Lin, Hsuan-Tien; Li, Ling

    2012-05-01

    We present a reduction framework from ordinal ranking to binary classification. The framework consists of three steps: extracting extended examples from the original examples, learning a binary classifier on the extended examples with any binary classification algorithm, and constructing a ranker from the binary classifier. Based on the framework, we show that a weighted 0/1 loss of the binary classifier upper-bounds the mislabeling cost of the ranker, both error-wise and regret-wise. Our framework allows not only the design of good ordinal ranking algorithms based on well-tuned binary classification approaches, but also the derivation of new generalization bounds for ordinal ranking from known bounds for binary classification. In addition, our framework unifies many existing ordinal ranking algorithms, such as perceptron ranking and support vector ordinal regression. When compared empirically on benchmark data sets, some of our newly designed algorithms enjoy advantages in terms of both training speed and generalization performance over existing algorithms. In addition, the newly designed algorithms lead to better cost-sensitive ordinal ranking performance, as well as improved listwise ranking performance.
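    The first step of the reduction (extracting extended examples) can be sketched as below. The encoding details here, appending the threshold index as an extra feature and weighting by cost differences, are assumptions for illustration, not necessarily the paper's exact construction.

```python
# Sketch: each example with rank y in {1..K} becomes K-1 weighted binary
# examples asking "is the rank greater than k?".
def extend(x, y, K, costs=None):
    extended = []
    for k in range(1, K):
        label = 1 if y > k else -1
        # Weight each threshold question by the cost gap it decides.
        weight = 1.0 if costs is None else abs(costs[k] - costs[k - 1])
        extended.append((x + [k], label, weight))  # threshold index as feature
    return extended

ext = extend([0.3, 0.7], y=2, K=4)
```

A single binary classifier trained on all extended examples then induces a ranker: an example's predicted rank is one plus the number of thresholds for which the classifier answers "greater".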

  2. Classification of Company Performance using Weighted Probabilistic Neural Network

    NASA Astrophysics Data System (ADS)

    Yasin, Hasbi; Waridi Basyiruddin Arifin, Adi; Warsito, Budi

    2018-05-01

    The performance of a company can be judged by looking at its financial status, whether it is in a good or bad state, and classifying that performance can be achieved by either parametric or non-parametric approaches. The neural network is one of the non-parametric methods, and one Artificial Neural Network (ANN) model is the Probabilistic Neural Network (PNN). A PNN consists of four layers: an input layer, a pattern layer, an addition layer, and an output layer. The distance function used is the Euclidean distance, and each class shares the same values as its weights. In this study, we used a PNN modified in the weighting process between the pattern layer and the addition layer by involving the calculation of the Mahalanobis distance. This model is called the Weighted Probabilistic Neural Network (WPNN). The results show that modeling company performance with the WPNN model achieves a very high accuracy, reaching 100%.
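    The key ingredient of the weighting step described above, replacing Euclidean with Mahalanobis distance, can be sketched as below. The data and the way the distance is used are illustrative assumptions, not the paper's implementation.

```python
# Sketch: squared Mahalanobis distance to a class's training cloud, which
# accounts for feature scales and correlations unlike Euclidean distance.
import numpy as np

def mahalanobis2(x, mu, cov_inv):
    d = x - mu
    return float(d @ cov_inv @ d)

# Toy financial-ratio vectors for companies of one ("good") class.
X_good = np.array([[1.0, 2.0], [1.2, 1.8], [0.9, 2.2], [1.1, 2.1]])
mu = X_good.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X_good.T))

d_near = mahalanobis2(np.array([1.05, 2.0]), mu, cov_inv)
d_far = mahalanobis2(np.array([3.0, 0.5]), mu, cov_inv)
```

In a PNN-style classifier, such distances would feed Gaussian kernels in the pattern layer, so samples far from a class cloud (in the Mahalanobis sense) contribute little to that class's summed activation.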

  3. CLARIPED: a new tool for risk classification in pediatric emergencies.

    PubMed

    Magalhães-Barbosa, Maria Clara de; Prata-Barbosa, Arnaldo; Alves da Cunha, Antonio José Ledo; Lopes, Cláudia de Souza

    2016-09-01

    To present a new pediatric risk classification tool, CLARIPED, and describe its development steps. Development steps: (i) first round of discussion among experts, first prototype; (ii) pre-test of reliability with 36 hypothetical cases; (iii) second round of discussion to perform adjustments; (iv) team training; (v) pre-test with patients in real time; (vi) third round of discussion to perform new adjustments; (vii) final pre-test of validity (20% of medical treatments over five days). CLARIPED features five urgency categories: Red (emergency), Orange (very urgent), Yellow (urgent), Green (less urgent), and Blue (not urgent). The first classification step includes the measurement of four vital signs (Vipe score); the second step consists of an urgency discrimination assessment. Each step results in the assignment of a color, and the more urgent of the two is selected as the final classification. Each color corresponds to a maximum waiting time for medical care and to referral to the physical area most appropriate for the patient's clinical condition. The interobserver agreement was substantial (kappa=0.79), and the final pre-test, with 82 medical treatments, showed good correlation between the proportion of patients in each urgency category and the number of resources used (p<0.001). CLARIPED is an objective and easy-to-use tool for simple risk classification, whose pre-tests suggest good reliability and validity. Larger-scale studies of its validity and reliability in different health contexts are ongoing and can contribute to the implementation of a nationwide pediatric risk classification system. Copyright © 2016 Sociedade de Pediatria de São Paulo. Publicado por Elsevier Editora Ltda. All rights reserved.
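    The two-step color logic can be sketched as a "take the more urgent color" rule. The category order comes from the abstract; the function name and example colors are illustrative.

```python
# Urgency scale from least to most urgent, per the abstract's categories.
URGENCY = ["blue", "green", "yellow", "orange", "red"]

def final_color(vital_signs_color, discriminator_color):
    # Each assessment step assigns a color; the final classification is
    # the more urgent of the two.
    return max(vital_signs_color, discriminator_color, key=URGENCY.index)

final = final_color("green", "orange")
```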

  4. Modeling ready biodegradability of fragrance materials.

    PubMed

    Ceriani, Lidia; Papa, Ester; Kovarich, Simona; Boethling, Robert; Gramatica, Paola

    2015-06-01

    In the present study, quantitative structure-activity relationships were developed for predicting the ready biodegradability of approximately 200 heterogeneous fragrance materials. Two classification methods, classification and regression tree (CART) and k-nearest neighbors (kNN), were applied to perform the modeling. The models were validated with multiple external prediction sets, and the structural applicability domain was verified by the leverage approach. The best models had good sensitivity (internal ≥80%; external ≥68%), specificity (internal ≥80%; external 73%), and overall accuracy (≥75%). Results from the comparison with BIOWIN global models, based on the group contribution method, show that the specific models developed in the present study perform better in prediction than BIOWIN6, in particular for the correct classification of not readily biodegradable fragrance materials. © 2015 SETAC.

  5. A Discriminant Distance Based Composite Vector Selection Method for Odor Classification

    PubMed Central

    Choi, Sang-Il; Jeong, Gu-Min

    2014-01-01

    We present a composite vector selection method for an effective electronic nose system that performs well even in noisy environments. Each composite vector generated from an electronic nose data sample is evaluated by computing its discriminant distance. By quantitatively measuring the amount of discriminative information in each composite vector, composite vectors containing informative variables can be identified, and the final composite features for odor classification are extracted from the selected composite vectors. Using only the informative composite vectors, rather than all generated composite vectors, also helps extract better composite features. Experimental results with different volatile organic compound data show that the proposed system achieves good classification performance compared with other methods, even in noisy environments. PMID:24747735

  6. Using Web-Based Key Character and Classification Instruction for Teaching Undergraduate Students Insect Identification

    NASA Astrophysics Data System (ADS)

    Golick, Douglas A.; Heng-Moss, Tiffany M.; Steckelberg, Allen L.; Brooks, David. W.; Higley, Leon G.; Fowler, David

    2013-08-01

    The purpose of the study was to determine whether undergraduate students receiving web-based instruction based on traditional, key-character, or classification instruction differed in their performance of insect identification tasks. All groups showed significant improvement in insect identification between pre- and post-tests on two-dimensional picture specimen quizzes. The study also found that students performed worse on family-level identification tasks than on broader insect order and arthropod classification tasks. Finally, students erred significantly more often by misidentifying specimens than by misspelling specimen names on prepared specimen quizzes. These results support the conclusion that short web-based insect identification exercises can improve insect identification performance. Also included is a discussion of how these results can be used in teaching and in future research on biological identification.

  7. Comparing Features for Classification of MEG Responses to Motor Imagery.

    PubMed

    Halme, Hanna-Leena; Parkkonen, Lauri

    2016-01-01

    Motor imagery (MI) with real-time neurofeedback could be a viable approach, e.g., in rehabilitation of cerebral stroke. Magnetoencephalography (MEG) noninvasively measures electric brain activity at high temporal resolution and is well-suited for recording oscillatory brain signals. MI is known to modulate 10- and 20-Hz oscillations in the somatomotor system. In order to provide accurate feedback to the subject, the most relevant MI-related features should be extracted from MEG data. In this study, we evaluated several MEG signal features for discriminating between left- and right-hand MI and between MI and rest. MEG was measured from nine healthy participants imagining either left- or right-hand finger tapping according to visual cues. Data preprocessing, feature extraction and classification were performed offline. The evaluated MI-related features were power spectral density (PSD), Morlet wavelets, short-time Fourier transform (STFT), common spatial patterns (CSP), filter-bank common spatial patterns (FBCSP), spatio-spectral decomposition (SSD), and combined SSD+CSP, CSP+PSD, CSP+Morlet, and CSP+STFT. We also compared four classifiers applied to single trials using 5-fold cross-validation for evaluating the classification accuracy and its possible dependence on the classification algorithm. In addition, we estimated the inter-session left-vs-right accuracy for each subject. The SSD+CSP combination yielded the best accuracy in both left-vs-right (mean 73.7%) and MI-vs-rest (mean 81.3%) classification. CSP+Morlet yielded the best mean accuracy in inter-session left-vs-right classification (mean 69.1%). There were large inter-subject differences in classification accuracy, and the level of the 20-Hz suppression correlated significantly with the subjective MI-vs-rest accuracy. Selection of the classification algorithm had only a minor effect on the results. We obtained good accuracy in sensor-level decoding of MI from single-trial MEG data. 
Feature extraction methods utilizing both the spatial and spectral profile of MI-related signals provided the best classification results, suggesting good performance of these methods in an online MEG neurofeedback system.

  8. A Marker-Based Approach for the Automated Selection of a Single Segmentation from a Hierarchical Set of Image Segmentations

    NASA Technical Reports Server (NTRS)

    Tarabalka, Y.; Tilton, J. C.; Benediktsson, J. A.; Chanussot, J.

    2012-01-01

    The Hierarchical SEGmentation (HSEG) algorithm, which combines region object finding with region object clustering, has shown good performance for multi- and hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations, from which a single segmentation level must often be selected automatically. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. Two classification-based approaches for automatic marker selection are adapted and compared for this purpose. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. Three different implementations of the M-HSEG method are proposed and their classification accuracies are compared. The experimental results, presented for three hyperspectral airborne images, demonstrate that the proposed approach yields accurate segmentation and classification maps, and is thus attractive for remote sensing image analysis.

  9. Statistical analysis on the concordance of the radiological evaluation of fractures of the distal radius subjected to traction

    PubMed Central

    Machado, Daniel Gonçalves; da Cruz Cerqueira, Sergio Auto; de Lima, Alexandre Fernandes; de Mathias, Marcelo Bezerra; Aramburu, José Paulo Gabbi; Rodarte, Rodrigo Ribeiro Pinho

    2016-01-01

    Objective The objective of this study was to evaluate the current classifications for fractures of the distal extremity of the radius, since classifications made using traditional radiographs in anteroposterior and lateral views have been questioned regarding their reproducibility. The literature has suggested that other options are needed, such as preoperative radiographs of fractures of the distal radius subjected to traction, with stratification by evaluators. The aim was to demonstrate which classification systems present better statistical reliability. Results In the Universal classification, the results from the third-year resident group (R3) and from the group of more experienced evaluators (Staff) presented excellent correlation, with a statistically significant p-value (p < 0.05). Neither group presented a statistically significant result with the Frykman classification. In the AO classification, there were high correlations in the R3 and Staff groups (0.950 and 0.800, respectively), with p-values lower than 0.05 (<0.001 and 0.003, respectively). Conclusion It can be concluded that radiographs performed under traction showed good concordance in the Staff and R3 groups, and that this is a good tactic for radiographic evaluation of fractures of the distal extremity of the radius. PMID:26962498

  10. MRI classification system (MRICS) for children with cerebral palsy: development, reliability, and recommendations.

    PubMed

    Himmelmann, Kate; Horber, Veronka; De La Cruz, Javier; Horridge, Karen; Mejaski-Bosnjak, Vlatka; Hollody, Katalin; Krägeloh-Mann, Ingeborg

    2017-01-01

    To develop and evaluate a classification system for magnetic resonance imaging (MRI) findings of children with cerebral palsy (CP) that can be used in CP registers. The classification system was based on pathogenic patterns occurring in different periods of brain development. The MRI classification system (MRICS) consists of five main groups: maldevelopments, predominant white matter injury, predominant grey matter injury, miscellaneous, and normal findings. A detailed manual for the descriptions of these patterns was developed, including test cases (www.scpenetwork.eu/en/my-scpe/rtm/neuroimaging/cp-neuroimaging/). A literature review was performed and MRICS was compared with other classification systems. An exercise was carried out to check applicability and interrater reliability. Professionals working with children with CP or in CP registers were invited to participate in the exercise and chose to classify either 18 MRIs or MRI reports of children with CP. Classification systems in the literature were compatible with MRICS, and harmonization was possible. Interrater reliability was good overall (κ=0.69; 0.54-0.82) among the 41 participants and very good (κ=0.81; 0.74-0.92) when classification was based on imaging reports. Surveillance of Cerebral Palsy in Europe (SCPE) proposes the MRICS as a reliable tool. Together with its manual, it is simple to apply in CP registers. © 2016 Mac Keith Press.
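    Interrater agreement figures like those above can be reproduced with the standard Cohen's kappa formula; the two toy rater label lists below are invented for illustration and do not come from the study.

```python
def cohens_kappa(r1, r2):
    # Cohen's kappa: observed agreement corrected for the agreement
    # expected by chance from each rater's marginal label frequencies.
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    cats = set(r1) | set(r2)
    p_exp = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical ratings of six scans by two raters (WM = white matter
# injury, GM = grey matter injury).
r1 = ["WM", "WM", "GM", "normal", "WM", "GM"]
r2 = ["WM", "GM", "GM", "normal", "WM", "GM"]
k = cohens_kappa(r1, r2)
```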

  11. Land use/cover classification in the Brazilian Amazon using satellite images.

    PubMed

    Lu, Dengsheng; Batistella, Mateus; Li, Guiying; Moran, Emilio; Hetrick, Scott; Freitas, Corina da Costa; Dutra, Luciano Vieira; Sant'anna, Sidnei João Siqueira

    2012-09-01

    Land use/cover classification is one of the most important applications in remote sensing. However, mapping accurate land use/cover spatial distribution is a challenge, particularly in moist tropical regions, due to the complex biophysical environment and the limitations of remote sensing data per se. This paper reviews a decade of experiments related to land use/cover classification in the Brazilian Amazon. Through comprehensive analysis of the classification results, it is concluded that spatial information inherent in remote sensing data plays an essential role in improving land use/cover classification. Incorporating suitable textural images into multispectral bands and using segmentation-based methods are valuable ways to improve land use/cover classification, especially for high spatial resolution images. Data fusion of multi-resolution images within optical sensor data is vital for visual interpretation, but may not improve classification performance. In contrast, integration of optical and radar data did improve classification performance when the proper data fusion method was used. Of the classification algorithms available, the maximum likelihood classifier is still an important method for providing reasonably good accuracy, but nonparametric algorithms, such as classification tree analysis, have the potential to provide better results, although they often require more time for parameter optimization. Proper use of hierarchical methods is fundamental for developing accurate land use/cover classification, mainly from historical remotely sensed data.
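    The maximum likelihood classifier mentioned above assigns each pixel to the class with the highest Gaussian likelihood. A single-band sketch follows; the class means and variances are invented for illustration.

```python
import math

def ml_classify(x, classes):
    # Per-class Gaussian log-likelihood on a single band (a diagonal-
    # covariance sketch of the full multispectral case). classes maps a
    # class name to (mean, variance) estimated from training pixels.
    def loglik(mean, var):
        return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)
    return max(classes, key=lambda c: loglik(*classes[c]))

# Hypothetical spectral statistics for two land-cover classes.
classes = {"forest": (50.0, 16.0), "pasture": (120.0, 100.0)}
label = ml_classify(60.0, classes)
```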

  12. Land use/cover classification in the Brazilian Amazon using satellite images

    PubMed Central

    Lu, Dengsheng; Batistella, Mateus; Li, Guiying; Moran, Emilio; Hetrick, Scott; Freitas, Corina da Costa; Dutra, Luciano Vieira; Sant’Anna, Sidnei João Siqueira

    2013-01-01

    Land use/cover classification is one of the most important applications in remote sensing. However, mapping accurate land use/cover spatial distribution is a challenge, particularly in moist tropical regions, due to the complex biophysical environment and the limitations of remote sensing data per se. This paper reviews a decade of experiments related to land use/cover classification in the Brazilian Amazon. Through comprehensive analysis of the classification results, it is concluded that spatial information inherent in remote sensing data plays an essential role in improving land use/cover classification. Incorporating suitable textural images into multispectral bands and using segmentation-based methods are valuable ways to improve land use/cover classification, especially for high spatial resolution images. Data fusion of multi-resolution images within optical sensor data is vital for visual interpretation, but may not improve classification performance. In contrast, integration of optical and radar data did improve classification performance when the proper data fusion method was used. Of the classification algorithms available, the maximum likelihood classifier is still an important method for providing reasonably good accuracy, but nonparametric algorithms, such as classification tree analysis, have the potential to provide better results, although they often require more time for parameter optimization. Proper use of hierarchical methods is fundamental for developing accurate land use/cover classification, mainly from historical remotely sensed data. PMID:24353353

  13. Classification of remotely sensed data using OCR-inspired neural network techniques. [Optical Character Recognition

    NASA Technical Reports Server (NTRS)

    Kiang, Richard K.

    1992-01-01

    Neural networks have been applied to classifications of remotely sensed data with some success. To improve the performance of this approach, an examination was made of how neural networks are applied to the optical character recognition (OCR) of handwritten digits and letters. A three-layer, feedforward network, along with techniques adopted from OCR, was used to classify Landsat-4 Thematic Mapper data. Good results were obtained. To overcome the difficulties that are characteristic of remote sensing applications and to attain significant improvements in classification accuracy, a special network architecture may be required.

  14. Particle Swarm Optimization approach to defect detection in armour ceramics.

    PubMed

    Kesharaju, Manasa; Nagarajah, Romesh

    2017-03-01

    In this research, various extracted features were used in the development of an automated ultrasonic-sensor-based inspection system that enables defect classification in each ceramic component prior to dispatch to the field. Classification is an important task, and the large number of irrelevant or redundant features commonly introduced into a dataset reduces classifier performance. Feature selection aims to reduce the dimensionality of the dataset while improving the performance of a classification system. For a multi-criteria optimization problem such as the one discussed in this research (minimizing the classification error rate while reducing the number of features), the literature suggests that evolutionary algorithms offer good results. Moreover, Particle Swarm Optimization (PSO) has been little explored in the classification of high-frequency ultrasonic signals. Hence, a binary-coded Particle Swarm Optimization (BPSO) technique is investigated for feature subset selection and for optimizing the classification error rate. In the proposed method, the population data are used as input to an Artificial Neural Network (ANN)-based classification system to obtain the error rate, with the ANN serving as the evaluator of the PSO fitness function. Copyright © 2016. Published by Elsevier B.V.
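    The BPSO search can be sketched as follows. The sigmoid velocity-to-probability update is the standard binary PSO rule; the toy fitness function below merely stands in for the ANN-evaluated classification error rate plus a feature-count penalty, and all parameter values are illustrative.

```python
import math, random

def bpso(fitness, n_bits, n_particles=8, iters=30, seed=0):
    # Binary PSO: each particle is a bit mask over features; velocities
    # pass through a sigmoid to give per-bit set probabilities.
    rng = random.Random(seed)
    pos = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(n_particles)]
    vel = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pos, key=fitness)[:]
    for _ in range(iters):
        for i, p in enumerate(pos):
            for d in range(n_bits):
                vel[i][d] += (2 * rng.random() * (pbest[i][d] - p[d])
                              + 2 * rng.random() * (gbest[d] - p[d]))
                p[d] = 1 if rng.random() < 1 / (1 + math.exp(-vel[i][d])) else 0
            if fitness(p) < fitness(pbest[i]):
                pbest[i] = p[:]
            if fitness(p) < fitness(gbest):
                gbest = p[:]
    return gbest

def toy_fitness(mask):
    # Stand-in for the ANN error rate: features 0 and 2 are "informative";
    # every selected feature adds a small dimensionality penalty.
    return (1 - mask[0]) + (1 - mask[2]) + 0.1 * sum(mask)

best = bpso(toy_fitness, n_bits=5)
```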

  15. Inter- and intraobserver reliability of the Rockwood classification in acute acromioclavicular joint dislocations.

    PubMed

    Schneider, M M; Balke, M; Koenen, P; Fröhlich, M; Wafaisade, A; Bouillon, B; Banerjee, M

    2016-07-01

    The reliability of the Rockwood classification, the gold standard for acute acromioclavicular (AC) joint separations, has not yet been tested. The purpose of this study was to investigate the reliability of visual and measured AC joint lesion grades according to the Rockwood classification. Four investigators (two shoulder specialists and two second-year residents) examined radiographs (bilateral panoramic stress and axial views) in 58 patients and graded the injury according to the Rockwood classification using the following sequence: (1) visual classification of the AC joint lesion, (2) digital measurement of the coracoclavicular distance (CCD) and the horizontal dislocation (HD) with Osirix Dicom Viewer (Pixmeo, Switzerland), (3) classification of the AC joint lesion according to the measurements and (4) repetition of (1) and (2) after repeated anonymization by an independent physician. Visual and measured Rockwood grades as well as the CCD and HD of every patient were documented, and a CC index was calculated (CCD injured/CCD healthy). All records were then used to evaluate intra- and interobserver reliability. The disagreement between visual and measured diagnosis ranged from 6.9 to 27.6 %. Interobserver reliability for visual diagnosis was good (0.72-0.74) and excellent (0.85-0.93) for measured Rockwood grades. Intraobserver reliability was good to excellent (0.67-0.93) for visual diagnosis and excellent for measured diagnosis (0.90-0.97). The correlations between measurements of the axial view varied from 0.68 to 0.98 (good to excellent) for interobserver reliability and from 0.90 to 0.97 (excellent) for intraobserver reliability. Bilateral panoramic stress and axial radiographs are reliable examinations for grading AC joint injuries according to Rockwood's classification. Clinicians of all experience levels can precisely classify AC joint lesions according to the Rockwood classification. 
We recommend grading acute AC joint lesions by digital measurement rather than by visual diagnosis alone, given the higher intra- and interobserver reliability. Case series, Level IV.

  16. Sentiment topic mining based on comment tags

    NASA Astrophysics Data System (ADS)

    Zhang, Daohai; Liu, Xue; Li, Juan; Fan, Mingyue

    2018-03-01

    With the development of e-commerce, large numbers of tag-based comments are generated, and extracting valuable information from these comment tags has become an important input to business management decisions. This study takes HUAWEI mobile phone tags as an example, using sentiment analysis and LDA topic mining. The first step is data preprocessing and topic mining of the comment tags. Sentiment classification is then performed on the tags. Finally, the comments are mined again to analyze the topic distribution within each sentiment class. The results show that the HUAWEI mobile phone offers a good user experience in terms of fluency, cost performance, appearance, etc., while more attention should be paid to independent research and development and to product design; in addition, battery and speed performance should be enhanced.
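    The sentiment classification step can be illustrated with a simple keyword lexicon. The paper's pipeline uses LDA topic mining on real comment tags; the lexicon and example tags below are invented stand-ins.

```python
# Hypothetical sentiment lexicons for phone-review tags.
POS = {"smooth", "fast", "beautiful", "worth"}
NEG = {"slow", "drain", "laggy"}

def tag_sentiment(tag):
    # Score a tag by counting lexicon hits; the sign decides the class.
    words = set(tag.lower().split())
    score = len(words & POS) - len(words & NEG)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

tags = ["runs smooth and fast", "battery drain", "beautiful screen"]
labels = [tag_sentiment(t) for t in tags]
```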

  17. An enhanced data visualization method for diesel engine malfunction classification using multi-sensor signals.

    PubMed

    Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan

    2015-10-21

    The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method t-distributed stochastic neighbor embedding (t-SNE) provides an effective way to visualize complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization and should therefore be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method for high-dimensional data collected from multi-sensor signals. In this method, the optimal feature subset is constructed according to a feature subset score criterion, and the high-dimensional data are then visualized in two-dimensional space. Tests on UCI datasets show that FSS-t-SNE can effectively improve classification accuracy. An experiment was performed on a large marine diesel engine to validate the proposed method for malfunction classification, with multi-sensor signals collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine.
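    A feature subset score of the kind described can be illustrated with a Fisher-style separability ratio. This particular criterion and the toy data are our own; the paper's exact scoring criterion may differ.

```python
def fisher_score(values, labels):
    # Illustrative per-feature score: between-class variance divided by
    # within-class variance. Higher means the feature separates classes.
    groups = {}
    for v, l in zip(values, labels):
        groups.setdefault(l, []).append(v)
    overall = sum(values) / len(values)
    between = sum(len(g) * (sum(g) / len(g) - overall) ** 2
                  for g in groups.values())
    within = sum(sum((v - sum(g) / len(g)) ** 2 for v in g)
                 for g in groups.values())
    return between / within if within else float("inf")

# Feature A separates the two classes; feature B is noise.
labels = [0, 0, 0, 1, 1, 1]
feat_a = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]
feat_b = [0.5, 0.4, 0.6, 0.5, 0.6, 0.4]
scores = {"A": fisher_score(feat_a, labels),
          "B": fisher_score(feat_b, labels)}
```

Features scoring below a cutoff would be dropped before running t-SNE on the remaining columns.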

  18. An Enhanced Data Visualization Method for Diesel Engine Malfunction Classification Using Multi-Sensor Signals

    PubMed Central

    Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan

    2015-01-01

    The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method t-distributed stochastic neighbor embedding (t-SNE) provides an effective way to visualize complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization and should therefore be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method for high-dimensional data collected from multi-sensor signals. In this method, the optimal feature subset is constructed according to a feature subset score criterion, and the high-dimensional data are then visualized in two-dimensional space. Tests on UCI datasets show that FSS-t-SNE can effectively improve classification accuracy. An experiment was performed on a large marine diesel engine to validate the proposed method for malfunction classification, with multi-sensor signals collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine. PMID:26506347

  19. Reliability of Radiographic Assessments of Adolescent Midshaft Clavicle Fractures by the FACTS Multicenter Study Group.

    PubMed

    Li, Ying; Donohue, Kyna S; Robbins, Christopher B; Pennock, Andrew T; Ellis, Henry B; Nepple, Jeffrey J; Pandya, Nirav; Spence, David D; Willimon, Samuel Clifton; Heyworth, Benton E

    2017-09-01

    There is a recent trend toward increased surgical treatment of displaced midshaft clavicle fractures in adolescents. The primary purpose of this study was to evaluate the intrarater and interrater reliability of clavicle fracture classification systems and of measurements of displacement, shortening, and angulation in adolescents. The secondary purpose was to compare 2 different measurement methods for fracture shortening. This study was performed by a multicenter study group conducting a prospective, comparative, observational cohort study of adolescent clavicle fractures. Eight raters evaluated 24 deidentified anteroposterior clavicle radiographs selected from patients 10-18 years of age with midshaft clavicle fractures. Two clavicle fracture classification systems were used, and 2 measurements of shortening, 1 measurement of superior-inferior displacement, and 2 measurements of fracture angulation were performed. After a minimum of 2 weeks, the process was repeated. Intraclass correlation coefficients were calculated. Good to excellent intrarater and interrater agreement was achieved for the descriptive classification system of fracture displacement, direction of angulation, and presence of comminution, and for all continuous variables, including both measurements of shortening, superior-inferior displacement, and degrees of angulation. Moderate agreement was achieved for the Arbeitsgemeinschaft für Osteosynthesefragen classification system overall. Mean shortening values from the 2 measurement methods differed significantly (P < 0.0001). Most radiographic measurements performed by investigators in a multicenter, prospective cohort study of adolescent clavicle fractures demonstrated good-to-excellent intrarater and interrater reliability. Future consensus on the most accurate and clinically appropriate measurement method for fracture shortening is critical.

  20. Neural net classification of x-ray pistachio nut data

    NASA Astrophysics Data System (ADS)

    Casasent, David P.; Sipe, Michael A.; Schatzki, Thomas F.; Keagy, Pamela M.; Le, Lan Chau

    1996-12-01

    Classification results for agricultural products are presented using a new neural network. This neural network inherently produces higher-order decision surfaces and achieves this with fewer hidden layer neurons than other classifiers require, which gives better generalization. It uses new techniques to select the number of hidden layer neurons, along with adaptive algorithms that avoid other ad hoc parameter selection problems; it allows selection of the best classifier parameters without the need to analyze the test set results. The agricultural case study considered is the inspection and classification of pistachio nuts using x-ray imagery. Present inspection techniques cannot provide good rejection of worm-damaged nuts without rejecting too many good nuts. X-ray imagery has the potential to provide 100% inspection of such agricultural products in real time. Only preliminary results are presented, but these indicate the potential to reduce major defects to 2% of the crop with 1% of good nuts rejected. Future image processing techniques that should provide better features to improve performance and allow inspection of a larger variety of nuts are noted. These techniques and variations of them have uses in a number of other agricultural product inspection problems.

  1. Branch classification: A new mechanism for improving branch predictor performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, P.Y.; Hao, E.; Patt, Y.

    There is wide agreement that one of the most significant impediments to the performance of current and future pipelined superscalar processors is the presence of conditional branches in the instruction stream. Speculative execution is one solution to the branch problem, but speculative work is discarded if a branch is mispredicted. To be effective, speculative execution therefore requires a very accurate branch predictor; 95% accuracy is not good enough. This paper proposes branch classification, a methodology for building more accurate branch predictors. Branch classification allows an individual branch instruction to be associated with the branch predictor best suited to predict its direction. Using this approach, a hybrid branch predictor can be constructed such that each component branch predictor predicts those branches for which it is best suited. To demonstrate the usefulness of branch classification, an example classification scheme is given and a new hybrid predictor is built based on this scheme, which achieves a higher prediction accuracy than any branch predictor previously reported in the literature.
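    The idea can be sketched with a profile-based classifier that assigns strongly biased branches to static predictors and the rest to a 2-bit saturating counter. The thresholds and branch streams below are illustrative; the paper's classification scheme and hybrid components differ in detail.

```python
class TwoBit:
    # 2-bit saturating counter: predict taken when the counter is in the
    # upper half of its range; nudge the counter toward each outcome.
    def __init__(self):
        self.state = 2
    def __call__(self, taken):
        predicted = self.state >= 2
        self.state = min(3, self.state + 1) if taken else max(0, self.state - 1)
        return predicted

def choose_predictor(profile, hi=0.95, lo=0.05):
    # Hypothetical classification rule: branches whose profiled taken-rate
    # is extreme get a static predictor; mixed branches get the counter.
    rate = sum(profile) / len(profile)
    if rate >= hi:
        return lambda taken: True
    if rate <= lo:
        return lambda taken: False
    return TwoBit()

def accuracy(stream):
    # Toy setup: the stream serves as its own profile.
    pred = choose_predictor(stream)
    return sum(pred(t) == t for t in stream) / len(stream)

loop_edge = [True] * 19 + [False]        # heavily biased loop back-edge
bursty = ([True] * 5 + [False] * 5) * 2  # mixed-direction branch
```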

  2. Deployment and Performance of the NASA D3R During the GPM OLYMPEx Field Campaign

    NASA Technical Reports Server (NTRS)

    Chandrasekar, V.; Beauchamp, Robert M.; Chen, Haonan; Vega, Manuel; Schwaller, Mathew; Willie, Delbert; Dabrowski, Aaron; Kumar, Mohit; Petersen, Walter; Wolff, David

    2016-01-01

    The NASA D3R was successfully deployed and operated throughout the NASA OLYMPEx field campaign. A differential phase based attenuation correction technique has been implemented for D3R observations. Hydrometeor classification has been demonstrated for five distinct classes using Ku-band observations of both convection and stratiform rain. The stratiform rain hydrometeor classification is compared against LDR observations and shows good agreement in identification of mixed-phase hydrometeors in the melting layer.

  3. Comparing Features for Classification of MEG Responses to Motor Imagery

    PubMed Central

    Halme, Hanna-Leena; Parkkonen, Lauri

    2016-01-01

    Background Motor imagery (MI) with real-time neurofeedback could be a viable approach, e.g., in rehabilitation of cerebral stroke. Magnetoencephalography (MEG) noninvasively measures electric brain activity at high temporal resolution and is well-suited for recording oscillatory brain signals. MI is known to modulate 10- and 20-Hz oscillations in the somatomotor system. In order to provide accurate feedback to the subject, the most relevant MI-related features should be extracted from MEG data. In this study, we evaluated several MEG signal features for discriminating between left- and right-hand MI and between MI and rest. Methods MEG was measured from nine healthy participants imagining either left- or right-hand finger tapping according to visual cues. Data preprocessing, feature extraction and classification were performed offline. The evaluated MI-related features were power spectral density (PSD), Morlet wavelets, short-time Fourier transform (STFT), common spatial patterns (CSP), filter-bank common spatial patterns (FBCSP), spatio-spectral decomposition (SSD), and combined SSD+CSP, CSP+PSD, CSP+Morlet, and CSP+STFT. We also compared four classifiers applied to single trials using 5-fold cross-validation for evaluating the classification accuracy and its possible dependence on the classification algorithm. In addition, we estimated the inter-session left-vs-right accuracy for each subject. Results The SSD+CSP combination yielded the best accuracy in both left-vs-right (mean 73.7%) and MI-vs-rest (mean 81.3%) classification. CSP+Morlet yielded the best mean accuracy in inter-session left-vs-right classification (mean 69.1%). There were large inter-subject differences in classification accuracy, and the level of the 20-Hz suppression correlated significantly with the subjective MI-vs-rest accuracy. Selection of the classification algorithm had only a minor effect on the results. 
Conclusions We obtained good accuracy in sensor-level decoding of MI from single-trial MEG data. Feature extraction methods utilizing both the spatial and spectral profile of MI-related signals provided the best classification results, suggesting good performance of these methods in an online MEG neurofeedback system. PMID:27992574
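The spatial-filtering step behind CSP-type features can be sketched as whitening the composite covariance and then diagonalizing one class's covariance in the whitened space. The following is a minimal NumPy illustration on synthetic data, not the authors' pipeline; trial counts, channel layout, and the log-variance feature are standard textbook choices assumed here:

```python
import numpy as np

def csp_filters(X_a, X_b, n_filters=2):
    """Common spatial patterns via whitening + eigendecomposition.
    X_a, X_b: arrays of shape (trials, channels, samples) for the two classes."""
    def mean_cov(X):
        covs = [x @ x.T / np.trace(x @ x.T) for x in X]
        return np.mean(covs, axis=0)
    Ca, Cb = mean_cov(X_a), mean_cov(X_b)
    # Whiten the composite covariance, then diagonalize class A in that space
    d, U = np.linalg.eigh(Ca + Cb)
    P = np.diag(d ** -0.5) @ U.T              # whitening transform
    vals, V = np.linalg.eigh(P @ Ca @ P.T)    # eigenvalues in ascending order
    W = V.T @ P                               # rows are spatial filters
    # Keep filters from both ends of the eigenvalue spectrum
    picks = np.r_[np.arange(n_filters // 2),
                  np.arange(len(vals) - (n_filters - n_filters // 2), len(vals))]
    return W[picks]

def csp_features(W, X):
    """Log-variance of spatially filtered trials: the standard CSP feature."""
    Z = np.einsum('fc,tcs->tfs', W, X)        # filter each trial
    return np.log(Z.var(axis=2))
```

Low-eigenvalue filters emphasize variance of one class and high-eigenvalue filters the other, which is why both ends of the spectrum are kept.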

  4. Detection of inter-patient left and right bundle branch block heartbeats in ECG using ensemble classifiers

    PubMed Central

    2014-01-01

    Background Left bundle branch block (LBBB) and right bundle branch block (RBBB) not only mask electrocardiogram (ECG) changes that reflect diseases but also indicate important underlying pathology. The timely detection of LBBB and RBBB is critical in the treatment of cardiac diseases. Inter-patient heartbeat classification is based on independent training and testing sets to construct and evaluate a heartbeat classification system. Therefore, a heartbeat classification system with a high performance evaluation possesses a strong predictive capability for unknown data. The aim of this study was to propose a method for inter-patient classification of heartbeats to accurately detect LBBB and RBBB from the normal beat (NORM). Methods This study proposed a heartbeat classification method through a combination of three different types of classifiers: a minimum distance classifier constructed between NORM and LBBB; a weighted linear discriminant classifier between NORM and RBBB based on Bayesian decision making using posterior probabilities; and a linear support vector machine (SVM) between LBBB and RBBB. Each classifier was used with matching features to obtain better classification performance. The final types of the test heartbeats were determined using a majority voting strategy through the combination of class labels from the three classifiers. The optimal parameters for the classifiers were selected using cross-validation on the training set. The effects of different lead configurations on the classification results were assessed, and the performance of these three classifiers was compared for the detection of each pair of heartbeat types. Results The study results showed that a two-lead configuration exhibited better classification results compared with a single-lead configuration. The construction of a classifier with good performance between each pair of heartbeat types significantly improved the heartbeat classification performance. 
The results showed a sensitivity of 91.4% and a positive predictive value of 37.3% for LBBB and a sensitivity of 92.8% and a positive predictive value of 88.8% for RBBB. Conclusions A multi-classifier ensemble method was proposed based on inter-patient data and demonstrated a satisfactory classification performance. This approach has the potential for application in clinical practice to distinguish LBBB and RBBB from NORM of unknown patients. PMID:24903422

  5. Detection of inter-patient left and right bundle branch block heartbeats in ECG using ensemble classifiers.

    PubMed

    Huang, Huifang; Liu, Jie; Zhu, Qiang; Wang, Ruiping; Hu, Guangshu

    2014-06-05

    Left bundle branch block (LBBB) and right bundle branch block (RBBB) not only mask electrocardiogram (ECG) changes that reflect diseases but also indicate important underlying pathology. The timely detection of LBBB and RBBB is critical in the treatment of cardiac diseases. Inter-patient heartbeat classification is based on independent training and testing sets to construct and evaluate a heartbeat classification system. Therefore, a heartbeat classification system with a high performance evaluation possesses a strong predictive capability for unknown data. The aim of this study was to propose a method for inter-patient classification of heartbeats to accurately detect LBBB and RBBB from the normal beat (NORM). This study proposed a heartbeat classification method through a combination of three different types of classifiers: a minimum distance classifier constructed between NORM and LBBB; a weighted linear discriminant classifier between NORM and RBBB based on Bayesian decision making using posterior probabilities; and a linear support vector machine (SVM) between LBBB and RBBB. Each classifier was used with matching features to obtain better classification performance. The final types of the test heartbeats were determined using a majority voting strategy through the combination of class labels from the three classifiers. The optimal parameters for the classifiers were selected using cross-validation on the training set. The effects of different lead configurations on the classification results were assessed, and the performance of these three classifiers was compared for the detection of each pair of heartbeat types. The study results showed that a two-lead configuration exhibited better classification results compared with a single-lead configuration. The construction of a classifier with good performance between each pair of heartbeat types significantly improved the heartbeat classification performance. 
The results showed a sensitivity of 91.4% and a positive predictive value of 37.3% for LBBB and a sensitivity of 92.8% and a positive predictive value of 88.8% for RBBB. A multi-classifier ensemble method was proposed based on inter-patient data and demonstrated a satisfactory classification performance. This approach has the potential for application in clinical practice to distinguish LBBB and RBBB from NORM of unknown patients.
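The pairwise-classifier-plus-majority-vote scheme described above can be sketched generically. Here each pairwise model is a simple nearest-centroid classifier standing in for the paper's minimum-distance, weighted linear discriminant, and SVM trio; that substitution is an assumption purely for illustration:

```python
import numpy as np

class PairwiseCentroid:
    """Binary classifier for one pair of classes (a stand-in for the paper's
    minimum-distance, weighted-LD, and SVM pairwise classifiers)."""
    def __init__(self, a, b):
        self.pair = (a, b)

    def fit(self, X, y):
        a, b = self.pair
        self.ca = X[y == a].mean(axis=0)
        self.cb = X[y == b].mean(axis=0)
        return self

    def predict(self, X):
        a, b = self.pair
        da = np.linalg.norm(X - self.ca, axis=1)
        db = np.linalg.norm(X - self.cb, axis=1)
        return np.where(da <= db, a, b)

def majority_vote(classifiers, X):
    """Each pairwise classifier casts one label per sample; the most-voted
    label wins (ties resolve to the lowest label)."""
    votes = np.stack([clf.predict(X) for clf in classifiers])  # (n_clf, n)
    out = []
    for col in votes.T:
        labels, counts = np.unique(col, return_counts=True)
        out.append(labels[np.argmax(counts)])
    return np.array(out)
```

For three classes (e.g., NORM, LBBB, RBBB) a sample's true class wins two of the three pairwise votes, so the majority rule recovers it even when the irrelevant pair votes arbitrarily.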

  6. 19 CFR 10.303 - Originating goods.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... in General Note 3(c), HTSUS; (2) Transformed with a change in classification. The goods have been transformed by a processing which results in a change in classification and, if required, a sufficient value-content, as set forth in General Note 3(c), HTSUS; or (3) Transformed without a change in classification...

  7. 19 CFR 10.303 - Originating goods.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... in General Note 3(c), HTSUS; (2) Transformed with a change in classification. The goods have been transformed by a processing which results in a change in classification and, if required, a sufficient value-content, as set forth in General Note 3(c), HTSUS; or (3) Transformed without a change in classification...

  8. Performance of the ASAS classification criteria for axial and peripheral spondyloarthritis: a systematic literature review and meta-analysis.

    PubMed

    Sepriano, Alexandre; Rubio, Roxana; Ramiro, Sofia; Landewé, Robert; van der Heijde, Désirée

    2017-05-01

    To summarise the evidence on the performance of the Assessment of SpondyloArthritis international Society (ASAS) classification criteria for axial spondyloarthritis (axSpA) (also imaging and clinical arm separately), peripheral (p)SpA and the entire set, when tested against the rheumatologist's diagnosis ('reference standard'). A systematic literature review was performed to identify eligible studies. Raw data on SpA diagnosis and classification were extracted or, if necessary, obtained from the authors of the selected publications. A meta-analysis was performed to obtain pooled estimates for sensitivity, specificity, positive and negative likelihood ratios, by fitting random effects models. Nine papers fulfilled the inclusion criteria (N=5739 patients). The entire set of the ASAS SpA criteria yielded a high pooled sensitivity (73%) and specificity (88%). Similarly, good results were found for the axSpA criteria (sensitivity: 82%; specificity: 88%). Splitting the axSpA criteria into 'imaging arm only' and 'clinical arm only' resulted in much lower sensitivity (30% and 23% respectively), but very high specificity was retained (97% and 94% respectively). The pSpA criteria were less often tested than the axSpA criteria and showed a similarly high pooled specificity (87%) but lower sensitivity (63%). Accumulated evidence from studies with more than 5500 patients confirms the good performance of the various ASAS SpA criteria as tested against the rheumatologist's diagnosis. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
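The performance measures pooled above follow from ordinary 2x2 contingency-table arithmetic (the meta-analytic pooling itself uses random-effects models, which this sketch does not reproduce):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and likelihood ratios from a 2x2 table of
    classification result vs. reference-standard diagnosis."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)   # how much a positive result raises the odds
    lr_neg = (1 - sens) / spec   # how much a negative result lowers the odds
    return sens, spec, lr_pos, lr_neg
```

With the pooled axSpA figures above (sensitivity 82%, specificity 88%), this yields a positive likelihood ratio of about 6.8.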

  9. A Bio-Inspired Herbal Tea Flavour Assessment Technique

    PubMed Central

    Zakaria, Nur Zawatil Isqi; Masnan, Maz Jamilah; Zakaria, Ammar; Shakaff, Ali Yeon Md

    2014-01-01

    Herbal-based products are becoming a widespread production trend among manufacturers for the domestic and international markets. As the production increases to meet the market demand, it is very crucial for the manufacturer to ensure that their products have met specific criteria and fulfil the intended quality determined by the quality controller. One famous herbal-based product is herbal tea. This paper investigates bio-inspired flavour assessments in a data fusion framework involving an e-nose and e-tongue. The objectives are to attain good classification of different types and brands of herbal tea, classification of different flavour masking effects and finally classification of different concentrations of herbal tea. Two data fusion levels were employed in this research, low level data fusion and intermediate level data fusion. Four classification approaches; LDA, SVM, KNN and PNN were examined in search of the best classifier to achieve the research objectives. In order to evaluate the classifiers' performance, an error estimator based on k-fold cross validation and leave-one-out were applied. Classification based on GC-MS TIC data was also included as a comparison to the classification performance using fusion approaches. Generally, KNN outperformed the other classification techniques for the three flavour assessments in the low level data fusion and intermediate level data fusion. However, the classification results based on GC-MS TIC data are varied. PMID:25010697
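Low-level data fusion as used above simply concatenates the e-nose and e-tongue feature vectors before classification. A minimal sketch with a plain NumPy k-NN and k-fold error estimation follows; the synthetic features are invented for illustration and are not the study's sensor data:

```python
import numpy as np

def fuse(e_nose, e_tongue):
    """Low-level fusion: concatenate the two sensor arrays' feature vectors."""
    return np.hstack([e_nose, e_tongue])

def knn_predict(X_train, y_train, X_test, k=3):
    """Plain k-nearest-neighbour vote with Euclidean distance."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(row).argmax() for row in y_train[idx]])

def kfold_accuracy(X, y, k_folds=5, k=3, seed=0):
    """k-fold cross-validated accuracy, the error estimator used above."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), k_folds)
    accs = []
    for i in range(k_folds):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k_folds) if j != i])
        pred = knn_predict(X[train], y[train], X[test], k)
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs))
```

Intermediate-level fusion would instead reduce each modality to a compact representation first (e.g., a few principal components) and concatenate those.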

  10. Radar modulation classification using time-frequency representation and nonlinear regression

    NASA Astrophysics Data System (ADS)

    De Luigi, Christophe; Arques, Pierre-Yves; Lopez, Jean-Marc; Moreau, Eric

    1999-09-01

    In a naval electronic environment, pulses emitted by radars are collected by ESM receivers. For most of them, the intrapulse signal is modulated by a particular law. To support the classical identification process, classification and estimation of this modulation law are applied to the intrapulse signal measurements. To estimate the time-varying frequency of a signal corrupted by additive noise with good accuracy, the Wigner distribution is computed and the instantaneous frequency is then estimated from the peak location of the distribution. Bias and variance of the estimator are assessed by computer simulations. An estimated frequency sequence is assumed to contain both good and false estimates, with Gaussian-distributed errors; a robust nonlinear regression method based on the Levenberg-Marquardt algorithm and a maximum-likelihood estimator is therefore applied to the estimated frequencies. The performance of the method is tested using various modulation laws and different signal-to-noise ratios.
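The peak-location frequency estimator can be sketched with a discrete Wigner-Ville distribution. This minimal version assumes an analytic (complex) input signal and applies no smoothing window; both are simplifying assumptions, not the paper's exact implementation:

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of an analytic signal.
    Returns a (time, frequency-bin) array."""
    n = len(x)
    W = np.zeros((n, n))
    for t in range(n):
        taumax = min(t, n - 1 - t, n // 2 - 1)
        tau = np.arange(-taumax, taumax + 1)
        kernel = np.zeros(n, dtype=complex)
        # Instantaneous autocorrelation x(t + tau) x*(t - tau)
        kernel[tau % n] = x[t + tau] * np.conj(x[t - tau])
        W[t] = np.real(np.fft.fft(kernel))
    return W

def instantaneous_frequency(x, fs=1.0):
    """IF estimate as the per-time peak of the distribution. Because of the
    WVD's frequency doubling, bin k maps to frequency k * fs / (2 n)."""
    W = wigner_ville(x)
    return W.argmax(axis=1) * fs / (2 * len(x))
```

Fitting a parametric modulation law to the estimated frequency sequence, as the paper does with Levenberg-Marquardt, would then be a separate robust-regression step.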

  11. Support-vector-machine tree-based domain knowledge learning toward automated sports video classification

    NASA Astrophysics Data System (ADS)

    Xiao, Guoqiang; Jiang, Yang; Song, Gang; Jiang, Jianmin

    2010-12-01

    We propose a support-vector-machine (SVM) tree to hierarchically learn from domain knowledge represented by low-level features toward automatic classification of sports videos. The proposed SVM tree adopts a binary tree structure to exploit the nature of SVM's binary classification, where each internal node is a single SVM learning unit, and each external node represents the classified output type. Such an SVM tree presents a number of advantages, which include: 1. low computing cost; 2. integrated learning and classification while preserving individual SVM's learning strength; and 3. flexibility in both structure and learning modules, where different numbers of nodes and features can be added to address specific learning requirements, and various learning models can be added as individual nodes, such as neural networks, AdaBoost, hidden Markov models, dynamic Bayesian networks, etc. Experiments show that the proposed SVM tree achieves good performance in sports video classification.
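The binary-tree arrangement of binary classifiers can be sketched as below. A least-squares linear discriminant stands in for the SVM learning unit at each internal node (an assumption made to keep the sketch dependency-free); the class set is split in half at each level, exactly as a hierarchical binary classifier requires:

```python
import numpy as np

class LinearNode:
    """One internal node: a binary linear classifier (least-squares fit,
    standing in for the paper's SVM learning unit)."""
    def fit(self, X, y_bin):                      # y_bin in {-1, +1}
        A = np.hstack([X, np.ones((len(X), 1))])  # affine bias term
        self.w, *_ = np.linalg.lstsq(A, y_bin, rcond=None)
        return self

    def decide(self, x):
        return np.dot(np.append(x, 1.0), self.w) >= 0

def build_tree(X, y, classes):
    """Recursively split the class set in half; leaves are single classes."""
    if len(classes) == 1:
        return classes[0]
    left, right = classes[:len(classes) // 2], classes[len(classes) // 2:]
    mask = np.isin(y, left + right)
    y_bin = np.where(np.isin(y[mask], left), 1.0, -1.0)
    node = LinearNode().fit(X[mask], y_bin)
    return (node, build_tree(X, y, left), build_tree(X, y, right))

def predict_tree(tree, x):
    while isinstance(tree, tuple):
        node, left, right = tree
        tree = left if node.decide(x) else right
    return tree
```

Replacing `LinearNode` with any other binary learner (AdaBoost, a neural network, an actual SVM) is exactly the modularity the abstract highlights.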

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mackenzie, Cristóbal; Pichara, Karim; Protopapas, Pavlos

    The success of automatic classification of variable stars depends strongly on the lightcurve representation. Usually, lightcurves are represented as a vector of many descriptors designed by astronomers called features. These descriptors are expensive in terms of computing, require substantial research effort to develop, and do not guarantee a good classification. Today, lightcurve representation is not entirely automatic; algorithms must be designed and manually tuned up for every survey. The amounts of data that will be generated in the future mean astronomers must develop scalable and automated analysis pipelines. In this work we present a feature learning algorithm designed for variable objects. Our method works by extracting a large number of lightcurve subsequences from a given set, which are then clustered to find common local patterns in the time series. Representatives of these common patterns are then used to transform lightcurves of a labeled set into a new representation that can be used to train a classifier. The proposed algorithm learns the features from both labeled and unlabeled lightcurves, overcoming the bias using only labeled data. We test our method on data sets from the Massive Compact Halo Object survey and the Optical Gravitational Lensing Experiment; the results show that our classification performance is as good as and in some cases better than the performance achieved using traditional statistical features, while the computational cost is significantly lower. With these promising results, we believe that our method constitutes a significant step toward the automation of the lightcurve classification pipeline.

  13. Clustering-based Feature Learning on Variable Stars

    NASA Astrophysics Data System (ADS)

    Mackenzie, Cristóbal; Pichara, Karim; Protopapas, Pavlos

    2016-04-01

    The success of automatic classification of variable stars depends strongly on the lightcurve representation. Usually, lightcurves are represented as a vector of many descriptors designed by astronomers called features. These descriptors are expensive in terms of computing, require substantial research effort to develop, and do not guarantee a good classification. Today, lightcurve representation is not entirely automatic; algorithms must be designed and manually tuned up for every survey. The amounts of data that will be generated in the future mean astronomers must develop scalable and automated analysis pipelines. In this work we present a feature learning algorithm designed for variable objects. Our method works by extracting a large number of lightcurve subsequences from a given set, which are then clustered to find common local patterns in the time series. Representatives of these common patterns are then used to transform lightcurves of a labeled set into a new representation that can be used to train a classifier. The proposed algorithm learns the features from both labeled and unlabeled lightcurves, overcoming the bias using only labeled data. We test our method on data sets from the Massive Compact Halo Object survey and the Optical Gravitational Lensing Experiment; the results show that our classification performance is as good as and in some cases better than the performance achieved using traditional statistical features, while the computational cost is significantly lower. With these promising results, we believe that our method constitutes a significant step toward the automation of the lightcurve classification pipeline.
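The subsequence-clustering pipeline described above can be sketched in three steps: slide a window over each lightcurve, cluster the pooled subsequences (a tiny k-means here), and re-represent each lightcurve as a histogram of nearest-codeword assignments. Window width, step, and the number of clusters are illustrative choices, not the paper's settings:

```python
import numpy as np

def subsequences(series, width, step):
    """All sliding-window subsequences of a 1-D series."""
    return np.array([series[i:i + width]
                     for i in range(0, len(series) - width + 1, step)])

def kmeans(X, k, iters=50, seed=0):
    """Minimal Lloyd's k-means; returns the cluster centres (the codebook)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def encode(series, centers, width, step):
    """Learned representation: histogram of nearest-codeword assignments."""
    subs = subsequences(series, width, step)
    labels = np.argmin(((subs[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    return np.bincount(labels, minlength=len(centers)) / len(labels)
```

The codebook can be built from unlabeled lightcurves, which is how the method sidesteps the bias of using only labeled data; the resulting histograms then feed any standard classifier.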

  14. Employing wavelet-based texture features in ammunition classification

    NASA Astrophysics Data System (ADS)

    Borzino, Ángelo M. C. R.; Maher, Robert C.; Apolinário, José A.; de Campos, Marcello L. R.

    2017-05-01

    Pattern recognition, a branch of machine learning, involves classification of information in images, sounds, and other digital representations. This paper uses pattern recognition to identify which kind of ammunition was used when a bullet was fired, based on a carefully constructed set of gunshot sound recordings. To this end, we show that texture features obtained from the wavelet transform of a component of the gunshot signal, treated as an image and quantized in gray levels, are good ammunition discriminators. We test the technique with eight different calibers and achieve a classification rate better than 95%. We also compare the performance of the proposed method with results obtained by standard temporal and spectrographic techniques.
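Wavelet-based texture features of this general kind can be sketched with a single-level 2D Haar transform followed by per-subband energies; this generic version does not reproduce the authors' exact feature set:

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform: approximation (LL) and the three
    detail subbands (LH, HL, HH). Image sides must be even."""
    a = (img[0::2] + img[1::2]) / 2.0         # row-pair average
    d = (img[0::2] - img[1::2]) / 2.0         # row-pair detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def texture_features(img):
    """Mean energy of each wavelet subband: a classic texture descriptor."""
    return np.array([float((b ** 2).mean()) for b in haar2d(img)])
```

A smooth "image" concentrates its energy in the LL band, while noisy or finely textured content pushes energy into the detail bands, which is what makes these features discriminative.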

  15. Optimal Non-Invasive Fault Classification Model for Packaged Ceramic Tile Quality Monitoring Using MMW Imaging

    NASA Astrophysics Data System (ADS)

    Agarwal, Smriti; Singh, Dharmendra

    2016-04-01

    Millimeter wave (MMW) frequency has emerged as an efficient tool for different stand-off imaging applications. In this paper, we have dealt with a novel MMW imaging application, i.e., non-invasive packaged goods quality estimation for industrial quality monitoring applications. An active MMW imaging radar operating at 60 GHz has been ingeniously designed for concealed fault estimation. Ceramic tiles covered with commonly used packaging cardboard were used as concealed targets for undercover fault classification. A comparison of computer vision-based state-of-the-art feature extraction techniques, viz., discrete Fourier transform (DFT), wavelet transform (WT), principal component analysis (PCA), gray level co-occurrence texture (GLCM), and histogram of oriented gradient (HOG), has been done with respect to their efficient and differentiable feature vector generation capability for undercover target fault classification. An extensive number of experiments were performed with different ceramic tile fault configurations, viz., vertical crack, horizontal crack, random crack, and diagonal crack, along with the non-faulty tiles. Further, an independent algorithm validation was done, demonstrating classification accuracies of 80%, 86.67%, 73.33%, and 93.33% for the DFT, WT, PCA, GLCM, and HOG feature-based artificial neural network (ANN) classifier models, respectively. Classification results show good capability for the HOG feature extraction technique towards non-destructive quality inspection with appreciably low false alarm as compared to other techniques. Thereby, a robust and optimal image feature-based neural network classification model has been proposed for non-invasive, automatic fault monitoring, supporting financially and commercially competitive industrial growth.

  16. Dissimilarity representations in lung parenchyma classification

    NASA Astrophysics Data System (ADS)

    Sørensen, Lauge; de Bruijne, Marleen

    2009-02-01

    A good problem representation is important for a pattern recognition system to be successful. The traditional approach to statistical pattern recognition is feature representation. More specifically, objects are represented by a number of features in a feature vector space, and classifiers are built in this representation. This is also the general trend in lung parenchyma classification in computed tomography (CT) images, where the features often are measures on feature histograms. Instead, we propose to build normal-density-based classifiers in dissimilarity representations for lung parenchyma classification. This allows the classifiers to work on dissimilarities between objects, which might be a more natural way of representing lung parenchyma. In this context, dissimilarity is defined between CT regions of interest (ROIs). ROIs are represented by their CT attenuation histogram, and ROI dissimilarity is defined as a histogram dissimilarity measure between the attenuation histograms. In this setting, the full histograms are utilized according to the chosen histogram dissimilarity measure. We apply this idea to classification of different emphysema patterns as well as normal, healthy tissue. Two dissimilarity representation approaches as well as different histogram dissimilarity measures are considered. The approaches are evaluated on a set of 168 CT ROIs using normal-density-based classifiers, all showing good performance. Compared to using histogram dissimilarity directly as the distance in a k-nearest-neighbor classifier, which achieves a classification accuracy of 92.9%, the best dissimilarity-representation-based classifier is significantly better, with a classification accuracy of 97.0% (p = 0.046).
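A dissimilarity representation maps each object to its vector of dissimilarities to a prototype set, after which any vector-space classifier applies. The sketch below uses an L1 histogram dissimilarity and a nearest-mean classifier as a simple stand-in for the normal-density-based classifiers of the paper:

```python
import numpy as np

def hist_dissimilarity(h1, h2):
    """L1 distance between two normalized histograms."""
    return float(np.abs(h1 - h2).sum())

def dissimilarity_representation(hists, prototypes):
    """Each object becomes a vector of dissimilarities to the prototypes."""
    return np.array([[hist_dissimilarity(h, p) for p in prototypes]
                     for h in hists])

def nearest_mean_fit(D, y):
    """Per-class mean in the dissimilarity space."""
    return {c: D[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_mean_predict(means, D):
    classes = sorted(means)
    dists = np.stack([np.linalg.norm(D - means[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]
```

The point of the construction is that the classifier operates on distances to prototypes rather than on raw histogram bins, so any histogram dissimilarity measure can be swapped in without changing the classifier.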

  17. Multi-label classification of chronically ill patients with bag of words and supervised dimensionality reduction algorithms.

    PubMed

    Bromuri, Stefano; Zufferey, Damien; Hennebert, Jean; Schumacher, Michael

    2014-10-01

    This research is motivated by the issue of classifying illnesses of chronically ill patients for decision support in clinical settings. Our main objective is to propose multi-label classification of multivariate time series contained in medical records of chronically ill patients, by means of quantization methods, such as bag of words (BoW), and multi-label classification algorithms. Our second objective is to compare supervised dimensionality reduction techniques to state-of-the-art multi-label classification algorithms. The hypothesis is that kernel methods and locality preserving projections make such algorithms good candidates to study multi-label medical time series. We combine BoW and supervised dimensionality reduction algorithms to perform multi-label classification on health records of chronically ill patients. The considered algorithms are compared with state-of-the-art multi-label classifiers in two real world datasets. Portavita dataset contains 525 diabetes type 2 (DT2) patients, with co-morbidities of DT2 such as hypertension, dyslipidemia, and microvascular or macrovascular issues. MIMIC II dataset contains 2635 patients affected by thyroid disease, diabetes mellitus, lipoid metabolism disease, fluid electrolyte disease, hypertensive disease, thrombosis, hypotension, chronic obstructive pulmonary disease (COPD), liver disease and kidney disease. The algorithms are evaluated using multi-label evaluation metrics such as hamming loss, one error, coverage, ranking loss, and average precision. Non-linear dimensionality reduction approaches behave well on medical time series quantized using the BoW algorithm, with results comparable to state-of-the-art multi-label classification algorithms. Chaining the projected features has a positive impact on the performance of the algorithm with respect to pure binary relevance approaches. 
The evaluation highlights the feasibility of representing medical health records using the BoW for multi-label classification tasks. The study also highlights that dimensionality reduction algorithms based on kernel methods, locality preserving projections or both are good candidates to deal with multi-label classification tasks in medical time series with many missing values and high label density. Copyright © 2014 Elsevier Inc. All rights reserved.
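Two of the multi-label metrics named above, hamming loss and one-error, are simple to state from a binary label matrix and a score matrix; these are the generic definitions, for illustration only:

```python
import numpy as np

def hamming_loss(Y_true, Y_pred):
    """Fraction of label slots predicted incorrectly (lower is better)."""
    return float((Y_true != Y_pred).mean())

def one_error(Y_true, scores):
    """Fraction of samples whose top-ranked label is not a true label."""
    top = scores.argmax(axis=1)
    return float(1.0 - Y_true[np.arange(len(Y_true)), top].mean())
```

Coverage, ranking loss, and average precision are likewise functions of the score-induced label ranking per sample.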

  18. Stokes space modulation format classification based on non-iterative clustering algorithm for coherent optical receivers.

    PubMed

    Mai, Xiaofeng; Liu, Jie; Wu, Xiong; Zhang, Qun; Guo, Changjian; Yang, Yanfu; Li, Zhaohui

    2017-02-06

    A Stokes-space modulation format classification (MFC) technique is proposed for coherent optical receivers by using a non-iterative clustering algorithm. In the clustering algorithm, two simple parameters are calculated to help find the density peaks of the data points in Stokes space and no iteration is required. Correct MFC can be realized in numerical simulations among PM-QPSK, PM-8QAM, PM-16QAM, PM-32QAM and PM-64QAM signals within practical optical signal-to-noise ratio (OSNR) ranges. The performance of the proposed MFC algorithm is also compared with those of other schemes based on clustering algorithms. The simulation results show that good classification performance can be achieved using the proposed MFC scheme with moderate time complexity. Proof-of-concept experiments are finally implemented to demonstrate MFC among PM-QPSK/16QAM/64QAM signals, which confirm the feasibility of our proposed MFC scheme.
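A non-iterative density-peak search of the kind described can be sketched as: compute each point's local density rho and its distance delta to the nearest denser point, then take the points with the largest rho * delta as cluster centres. The 2D synthetic data below stands in for Stokes-space samples, and the cutoff `dc` is an illustrative choice, not a value from the paper:

```python
import numpy as np

def density_peaks(points, dc, n_clusters):
    """Pick cluster centres as the points maximizing rho * delta, where rho
    is the local density (neighbours within dc) and delta the distance to
    the nearest denser point. Computed without any iteration."""
    d = np.linalg.norm(points[:, None] - points[None], axis=2)
    rho = (d < dc).sum(axis=1).astype(float)
    rho += np.arange(len(points)) * 1e-9   # deterministic tie-break
    delta = np.empty(len(points))
    for i in range(len(points)):
        denser = np.flatnonzero(rho > rho[i])
        delta[i] = d[i, denser].min() if denser.size else d[i].max()
    return np.argsort(rho * delta)[-n_clusters:]
```

Counting how many points stand out with large rho * delta reveals the number of constellation clusters, which is what maps a Stokes-space point cloud to a modulation format.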

  19. 19 CFR 10.918 - De minimis.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... section, a good that does not undergo a change in tariff classification pursuant to General Note 32(n), HTSUS, is an originating good if: (1) The value of all non-originating materials used in the production of the good that do not undergo the applicable change in tariff classification does not exceed 10...

  20. 19 CFR 10.3018 - De minimis.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... this section, a good that does not undergo a change in tariff classification pursuant to General Note 34, HTSUS, is an originating good if: (1) The value of all non-originating materials used in the production of the good that do not undergo the applicable change in tariff classification does not exceed 10...

  1. 19 CFR 10.2018 - De minimis.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... section, a good that does not undergo a change in tariff classification pursuant to General Note 35, HTSUS, is an originating good if: (1) The value of all non-originating materials used in the production of the good that do not undergo the applicable change in tariff classification does not exceed 10 percent...

  2. 19 CFR 10.1018 - De minimis.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... section, a good that does not undergo a change in tariff classification pursuant to General Note 33, HTSUS, is an originating good if: (1) The value of all non-originating materials used in the production of the good that do not undergo the applicable change in tariff classification does not exceed 10 percent...

  3. 19 CFR 10.1018 - De minimis.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... section, a good that does not undergo a change in tariff classification pursuant to General Note 33, HTSUS, is an originating good if: (1) The value of all non-originating materials used in the production of the good that do not undergo the applicable change in tariff classification does not exceed 10 percent...

  4. 19 CFR 10.918 - De minimis.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... section, a good that does not undergo a change in tariff classification pursuant to General Note 32(n), HTSUS, is an originating good if: (1) The value of all non-originating materials used in the production of the good that do not undergo the applicable change in tariff classification does not exceed 10...

  5. 19 CFR 10.918 - De minimis.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... section, a good that does not undergo a change in tariff classification pursuant to General Note 32(n), HTSUS, is an originating good if: (1) The value of all non-originating materials used in the production of the good that do not undergo the applicable change in tariff classification does not exceed 10...

  6. 19 CFR 10.1018 - De minimis.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... section, a good that does not undergo a change in tariff classification pursuant to General Note 33, HTSUS, is an originating good if: (1) The value of all non-originating materials used in the production of the good that do not undergo the applicable change in tariff classification does not exceed 10 percent...

  7. 19 CFR 10.3018 - De minimis.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... this section, a good that does not undergo a change in tariff classification pursuant to General Note 34, HTSUS, is an originating good if: (1) The value of all non-originating materials used in the production of the good that do not undergo the applicable change in tariff classification does not exceed 10...

  8. Multiclass Classification for the Differential Diagnosis on the ADHD Subtypes Using Recursive Feature Elimination and Hierarchical Extreme Learning Machine: Structural MRI Study

    PubMed Central

    Qureshi, Muhammad Naveed Iqbal; Min, Beomjun; Jo, Hang Joon; Lee, Boreom

    2016-01-01

    The classification of neuroimaging data for the diagnosis of certain brain diseases is one of the main research goals of the neuroscience and clinical communities. In this study, we performed multiclass classification using a hierarchical extreme learning machine (H-ELM) classifier. We compared the performance of this classifier with that of a support vector machine (SVM) and basic extreme learning machine (ELM) for cortical MRI data from attention deficit/hyperactivity disorder (ADHD) patients. We used 159 structural MRI images of children from the publicly available ADHD-200 MRI dataset. The data consisted of three types, namely, typically developing (TDC), ADHD-inattentive (ADHD-I), and ADHD-combined (ADHD-C). We carried out feature selection by using standard SVM-based recursive feature elimination (RFE-SVM) that enabled us to achieve good classification accuracy (60.78%). In this study, we found the RFE-SVM feature selection approach in combination with H-ELM to effectively enable the acquisition of high multiclass classification accuracy rates for structural neuroimaging data. In addition, we found that the most important features for classification were the surface area of the superior frontal lobe, and the cortical thickness, volume, and mean surface area of the whole cortex. PMID:27500640

  9. Multiclass Classification for the Differential Diagnosis on the ADHD Subtypes Using Recursive Feature Elimination and Hierarchical Extreme Learning Machine: Structural MRI Study.

    PubMed

    Qureshi, Muhammad Naveed Iqbal; Min, Beomjun; Jo, Hang Joon; Lee, Boreom

    2016-01-01

    The classification of neuroimaging data for the diagnosis of certain brain diseases is one of the main research goals of the neuroscience and clinical communities. In this study, we performed multiclass classification using a hierarchical extreme learning machine (H-ELM) classifier. We compared the performance of this classifier with that of a support vector machine (SVM) and basic extreme learning machine (ELM) for cortical MRI data from attention deficit/hyperactivity disorder (ADHD) patients. We used 159 structural MRI images of children from the publicly available ADHD-200 MRI dataset. The data consisted of three types, namely, typically developing (TDC), ADHD-inattentive (ADHD-I), and ADHD-combined (ADHD-C). We carried out feature selection by using standard SVM-based recursive feature elimination (RFE-SVM) that enabled us to achieve good classification accuracy (60.78%). In this study, we found the RFE-SVM feature selection approach in combination with H-ELM to effectively enable the acquisition of high multiclass classification accuracy rates for structural neuroimaging data. In addition, we found that the most important features for classification were the surface area of the superior frontal lobe, and the cortical thickness, volume, and mean surface area of the whole cortex.
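RFE-SVM repeatedly fits a linear classifier, ranks features by the magnitude of their weights, and drops the weakest before refitting. The sketch below substitutes a least-squares linear model for the SVM (an assumption made to stay dependency-free); the recursion itself is the same:

```python
import numpy as np

def rfe(X, y, n_keep, drop_per_round=1):
    """Recursive feature elimination: repeatedly fit a linear model and
    remove the feature(s) with the smallest absolute weight."""
    remaining = list(range(X.shape[1]))
    y_signed = np.where(y == y.max(), 1.0, -1.0)
    while len(remaining) > n_keep:
        A = np.hstack([X[:, remaining], np.ones((len(X), 1))])
        w, *_ = np.linalg.lstsq(A, y_signed, rcond=None)
        ranks = np.argsort(np.abs(w[:-1]))           # ignore the bias weight
        for r in sorted(ranks[:drop_per_round], reverse=True):
            del remaining[r]
    return remaining
```

The surviving feature indices are then handed to the final classifier (H-ELM in the study above); refitting after each drop is what distinguishes RFE from a one-shot weight ranking.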

  10. Automatic parquet block sorting using real-time spectral classification

    NASA Astrophysics Data System (ADS)

    Astrom, Anders; Astrand, Erik; Johansson, Magnus

    1999-03-01

    This paper presents a real-time spectral classification system based on the PGP spectrograph and a smart image sensor. The PGP is a spectrograph that extracts the spectral information from a scene and projects it onto an image sensor, a method often referred to as imaging spectroscopy. The classification is based on linear models and categorizes a number of pixels along a line. Previous systems adopting this method used standard sensors, which often resulted in poor performance. The new system, however, is based on a patented near-sensor classification method that exploits analogue features of the smart image sensor. The method reduces the enormous amount of data to be processed at an early stage, thus making true real-time spectral classification possible. The system has been evaluated on hardwood parquet boards, showing very good results. The color defects considered in the experiments were blue stain, white sapwood, yellow decay, and red decay. In addition to these four defect classes, a reference class was used to indicate correct surface color. The system calculates a statistical measure for each parquet block, giving the pixel defect percentage. The patented method makes it possible to run at very high speeds with high spectral discrimination ability. Using a powerful illuminator, the system can run at a line frequency exceeding 2,000 lines/s. This makes it possible to maintain high production speed while still measuring at good resolution.

  11. Ensemble Sparse Classification of Alzheimer’s Disease

    PubMed Central

    Liu, Manhua; Zhang, Daoqiang; Shen, Dinggang

    2012-01-01

    The high-dimensional pattern classification methods, e.g., support vector machines (SVM), have been widely investigated for analysis of structural and functional brain images (such as magnetic resonance imaging (MRI)) to assist the diagnosis of Alzheimer’s disease (AD) including its prodromal stage, i.e., mild cognitive impairment (MCI). Most existing classification methods extract features from neuroimaging data and then construct a single classifier to perform classification. However, due to noise and small sample size of neuroimaging data, it is challenging to train only a global classifier that can be robust enough to achieve good classification performance. In this paper, instead of building a single global classifier, we propose a local patch-based subspace ensemble method which builds multiple individual classifiers based on different subsets of local patches and then combines them for more accurate and robust classification. Specifically, to capture the local spatial consistency, each brain image is partitioned into a number of local patches and a subset of patches is randomly selected from the patch pool to build a weak classifier. Here, the sparse representation-based classification (SRC) method, which has shown effective for classification of image data (e.g., face), is used to construct each weak classifier. Then, multiple weak classifiers are combined to make the final decision. We evaluate our method on 652 subjects (including 198 AD patients, 225 MCI and 229 normal controls) from Alzheimer’s Disease Neuroimaging Initiative (ADNI) database using MR images. The experimental results show that our method achieves an accuracy of 90.8% and an area under the ROC curve (AUC) of 94.86% for AD classification and an accuracy of 87.85% and an AUC of 92.90% for MCI classification, respectively, demonstrating a very promising performance of our method compared with the state-of-the-art methods for AD/MCI classification using MR images. PMID:22270352
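
A minimal sketch of the patch-based random-subspace ensemble idea, assuming synthetic data, feature columns as stand-ins for image patches, and a linear SVM as the weak learner in place of the sparse representation-based classifier (SRC) used in the paper:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=100, n_informative=15,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Each weak learner sees a random subset of "patches" (here: feature columns).
members = []
for _ in range(25):
    cols = rng.choice(X.shape[1], size=30, replace=False)
    clf = LinearSVC().fit(X_tr[:, cols], y_tr)
    members.append((cols, clf))

# Majority vote across the ensemble members gives the final decision.
votes = np.stack([clf.predict(X_te[:, cols]) for cols, clf in members])
ensemble_pred = (votes.mean(axis=0) > 0.5).astype(int)
accuracy = (ensemble_pred == y_te).mean()
```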

  12. On the Implementation of a Land Cover Classification System for SAR Images Using Khoros

    NASA Technical Reports Server (NTRS)

    Medina Revera, Edwin J.; Espinosa, Ramon Vasquez

    1997-01-01

    The Synthetic Aperture Radar (SAR) sensor is widely used to record data about the ground under all atmospheric conditions. SAR-acquired images have very good resolution, which motivates the development of a classification system that processes SAR images to extract useful information for different applications. In this work, a complete system for land cover classification was designed and programmed using Khoros, a data-flow visual language environment, taking full advantage of the polymorphic data services that it provides. Image analysis was applied to SAR images to improve and automate the recognition and classification of different regions, such as mountains and lakes. Both unsupervised and supervised classification utilities were used. The unsupervised classification routines included several classification/clustering algorithms, such as K-means, ISO2, Weighted Minimum Distance, and the Localized Receptive Field (LRF) training/classifier. Different texture analysis approaches, such as invariant moments, fractal dimension, and second-order statistics, were implemented for supervised classification of the images. The results and conclusions for SAR image classification using the various unsupervised and supervised procedures are presented based on their accuracy and performance.

  13. Post-operative rotator cuff integrity, based on Sugaya's classification, can reflect abduction muscle strength of the shoulder.

    PubMed

    Yoshida, Masahito; Collin, Phillipe; Josseaume, Thierry; Lädermann, Alexandre; Goto, Hideyuki; Sugimoto, Katumasa; Otsuka, Takanobu

    2018-01-01

    Magnetic resonance (MR) imaging is commonly used for structural and qualitative assessment of the rotator cuff post-operatively, and rotator cuff integrity is thought to be associated with clinical outcome. The purpose of this study was to evaluate the inter-observer reliability of cuff integrity assessment (Sugaya's classification) and to assess the correlation between Sugaya's classification and the clinical outcome. It was hypothesized that Sugaya's classification would show good reliability and good correlation with the clinical outcome. MR images were taken two years after arthroscopic rotator cuff repair. For assessment of inter-rater reliability, all radiographic evaluations of the supraspinatus muscle were performed by two orthopaedic surgeons and one radiologist. Rotator cuff integrity was classified into five categories according to Sugaya's classification, fatty infiltration was graded into four categories based on Fuchs' grading system, and muscle hypotrophy was graded into four grades according to the scale proposed by Warner. The clinical outcome was assessed with the Constant scoring system pre-operatively and two years post-operatively. Of sixty-two consecutive patients with full-thickness rotator cuff tears, fifty-two were reviewed in this study: twenty-three men and twenty-nine women, with an average age of fifty-seven years. In terms of inter-rater reliability between orthopaedic surgeons, Sugaya's classification showed the highest agreement [ICC (2.1) = 0.82] for rotator cuff integrity, and the grades of fatty infiltration and muscle atrophy demonstrated good agreement (0.722 and 0.758, respectively). With regard to inter-rater reliability between an orthopaedic surgeon and a radiologist, Sugaya's classification showed good reliability [ICC (2.1) = 0.70], whereas the fatty infiltration and muscle hypotrophy classifications demonstrated fair and moderate agreement [ICC (2.1) = 0.39 and 0.49]. Although no significant correlation was found between the overall post-operative Constant score and Sugaya's classification, Sugaya's classification correlated significantly with the muscle strength score. Sugaya's classification showed good repeatability and good agreement between the orthopaedists and the radiologist involved in the care of patients with rotator cuff tears. A common classification of rotator cuff integrity with good reliability gives clinicians appropriate information to improve patient care for rotator cuff tears, and may also help predict the strength of arm abduction in the scapular plane. Level of evidence: IV.

  14. Spatial-temporal discriminant analysis for ERP-based brain-computer interface.

    PubMed

    Zhang, Yu; Zhou, Guoxu; Zhao, Qibin; Jin, Jing; Wang, Xingyu; Cichocki, Andrzej

    2013-03-01

    Linear discriminant analysis (LDA) has been widely adopted to classify event-related potentials (ERPs) in brain-computer interfaces (BCIs). Good classification performance of an ERP-based BCI usually requires sufficient data recordings for effective training of the LDA classifier, and hence a long system calibration time, which may reduce the system's practicability and cause user resistance to the BCI system. In this study, we introduce spatial-temporal discriminant analysis (STDA) for ERP classification. As a multiway extension of LDA, the STDA method maximizes the discriminant information between target and nontarget classes by collaboratively finding two projection matrices, one spatial and one temporal, which effectively reduces the feature dimensionality in the discriminant analysis and hence significantly decreases the number of required training samples. The proposed STDA method was validated on dataset II of BCI Competition III and on a dataset recorded in our own experiments, and compared to state-of-the-art algorithms for ERP classification. Online experiments were additionally implemented for validation. The superior classification performance with few training samples shows that STDA effectively reduces system calibration time and improves classification accuracy, thereby enhancing the practicability of ERP-based BCIs.
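
The baseline this work improves on, ordinary LDA applied to flattened spatial-temporal ERP features, can be sketched as follows. The simulated epochs and the size of the "P300-like" deflection are assumptions, and STDA's joint spatial/temporal projections are not shown:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 200, 8, 50

# Simulated ERP epochs: target trials carry a small added deflection.
X = rng.normal(size=(n_trials, n_channels, n_samples))
y = rng.integers(0, 2, size=n_trials)
X[y == 1, :, 20:30] += 0.5  # "P300-like" component on target trials

# Ordinary LDA works on flattened channel-by-time features, which is why
# it needs many training trials relative to STDA's two small projections.
X_flat = X.reshape(n_trials, -1)
lda = LinearDiscriminantAnalysis().fit(X_flat[:150], y[:150])
accuracy = lda.score(X_flat[150:], y[150:])
```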

  15. Good Health: The Power of Power

    ERIC Educational Resources Information Center

    Corbin, Charles B.; Janz, Kathleen F.; Baptista, Fátima

    2017-01-01

    Power has long been considered to be a skill-related fitness component. However, based on recent evidence, a strong case can be made for the classification of power as a health-related fitness component. Additionally, the evidence indicates that performing physical activities that build power is associated with the healthy development of bones…

  16. Comparison of Hybrid Classifiers for Crop Classification Using Normalized Difference Vegetation Index Time Series: A Case Study for Major Crops in North Xinjiang, China

    PubMed Central

    Hao, Pengyu; Wang, Li; Niu, Zheng

    2015-01-01

    A range of single classifiers have been proposed to classify crop types using time-series vegetation indices, and hybrid classifiers are used to improve discriminatory power. Traditional fusion rules use the product of multiple single classifiers, but that strategy cannot integrate the classification output of machine learning classifiers. In this research, the performance of two hybrid strategies, multiple voting (M-voting) and probabilistic fusion (P-fusion), for crop classification using NDVI time series was tested with different training sample sizes at both pixel and object levels, with two representative counties in north Xinjiang selected as the study area. The single classifiers employed were Random Forest (RF), Support Vector Machine (SVM), and See5 (C5.0). The results indicated that classification performance improved substantially with the number of training samples (mean overall accuracy increased by 5%~10%, and the standard deviation of overall accuracy fell by around 1%), and when the training sample size was small (50 or 100 training samples), hybrid classifiers substantially outperformed single classifiers, with mean overall accuracy higher by 1%~2%. However, when abundant training samples (4,000) were employed, single classifiers could achieve good classification accuracy, and all classifiers performed similarly. Additionally, although object-based classification did not improve accuracy, it resulted in greater visual appeal, especially in study areas with a heterogeneous cropping pattern. PMID:26360597
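
The voting and probability-fusion strategies can be approximated with scikit-learn's VotingClassifier: hard voting corresponds to the M-voting idea, while soft voting averages class probabilities, a P-fusion analogue. The synthetic data are an assumption, and a CART decision tree stands in for See5/C5.0, which is proprietary:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for per-pixel NDVI time-series features, 4 crop classes.
X, y = make_classification(n_samples=400, n_features=23, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Soft voting averages predicted class probabilities of the three members.
vote = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0)),
                ("tree", DecisionTreeClassifier(random_state=0))],
    voting="soft")
accuracy = vote.fit(X_tr, y_tr).score(X_te, y_te)
```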

  17. Inferring Human Activity Recognition with Ambient Sound on Wireless Sensor Nodes.

    PubMed

    Salomons, Etto L; Havinga, Paul J M; van Leeuwen, Henk

    2016-09-27

    A wireless sensor network that consists of nodes with a sound sensor can be used to obtain context awareness in home environments. However, the limited processing power of wireless nodes offers a challenge when extracting features from the signal, and subsequently, classifying the source. Although multiple papers can be found on different methods of sound classification, none of these are aimed at limited hardware or take the efficiency of the algorithms into account. In this paper, we compare and evaluate several classification methods on a real sensor platform using different feature types and classifiers, in order to find an approach that results in a good classifier that can run on limited hardware. To be as realistic as possible, we trained our classifiers using sound waves from many different sources. We conclude that despite the fact that the classifiers are often of low quality due to the highly restricted hardware resources, sufficient performance can be achieved when (1) the window length for our classifiers is increased, and (2) if we apply a two-step approach that uses a refined classification after a global classification has been performed.

  18. National Program for Inspection of Non-Federal Dams. Stevens Paper Company (Lower) Dam (MA 00074), Connecticut River Basin, Westfield, Massachusetts. Phase I Inspection Report.

    DTIC Science & Technology

    1979-03-01

    showed the dam to be in good condition. The dam has a size classification of intermediate and a hazard classification of low. The test flood is the ti... good condition. However, water passing over the spillway limited the inspection of the spillway. The dam has a size classification of intermediate...hydrologic and hydraulic assumptions. The dam is generally in good condition. However, it is recommended that the owner repair the drawdown outlet, and

  19. StandFood: Standardization of Foods Using a Semi-Automatic System for Classifying and Describing Foods According to FoodEx2.

    PubMed

    Eftimov, Tome; Korošec, Peter; Koroušić Seljak, Barbara

    2017-05-26

    The European Food Safety Authority has developed a standardized food classification and description system called FoodEx2. It uses facets to describe food properties and aspects from various perspectives, making it easier to compare food consumption data from different sources and perform more detailed data analyses. However, both food composition data and food consumption data, which need to be linked, are lacking in FoodEx2 because the process of classification and description has to be performed manually, a process that is laborious and requires good knowledge of the system as well as of food (composition, processing, marketing, etc.). In this paper, we introduce a semi-automatic system for classifying and describing foods according to FoodEx2, which consists of three parts. The first uses a machine learning approach to classify foods into four FoodEx2 categories, two for single foods, raw (r) and derivatives (d), and two for composite foods, simple (s) and aggregated (c). The second uses a natural language processing approach and probability theory to describe foods. The third combines the results of the first two parts by defining post-processing rules that improve the classification result. We tested the system using a set of food items (from Slovenia) manually coded according to FoodEx2. The new semi-automatic system obtained an accuracy of 89% for the classification part and 79% for the description part, for an overall result of 79% for the whole system.
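
The machine-learning classification stage (the first part) can be sketched as a bag-of-words text classifier over the four categories named above. The food descriptions, labels, and naive Bayes model below are illustrative assumptions, not the system's actual implementation:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training snippets; r = raw, d = derivative,
# s = simple composite, c = aggregated composite.
foods = ["raw apple", "fresh carrot", "apple juice", "wheat flour",
         "carrot soup", "vegetable stew", "mixed salad with dressing",
         "lasagna with meat sauce"]
labels = ["r", "r", "d", "d", "s", "s", "c", "c"]

# Word counts feed a multinomial naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(foods, labels)
prediction = model.predict(["fresh apple"])[0]
```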

  20. Developing and validating the Communication Function Classification System for individuals with cerebral palsy

    PubMed Central

    HIDECKER, MARY JO COOLEY; PANETH, NIGEL; ROSENBAUM, PETER L; KENT, RAYMOND D; LILLIE, JANET; EULENBERG, JOHN B; CHESTER, KEN; JOHNSON, BRENDA; MICHALSEN, LAUREN; EVATT, MORGAN; TAYLOR, KARA

    2011-01-01

    Aim The purpose of this study was to create and validate a Communication Function Classification System (CFCS) for children with cerebral palsy (CP) that can be used by a wide variety of individuals who are interested in CP. This paper reports the content validity, interrater reliability, and test–retest reliability of the CFCS for children with CP. Method An 11-member development team created comprehensive descriptions of the CFCS levels, and four nominal groups comprising 27 participants critiqued these levels. Within a Delphi survey, 112 participants commented on the clarity and usefulness of the CFCS. Interrater reliability was completed by 61 professionals and 68 parents/relatives who classified 69 children with CP aged 2 to 18 years. Test–retest reliability was completed by 48 professionals who allowed at least 2 weeks between classifications. The participants who assessed the CFCS were all relevant stakeholders: adults with CP, parents of children with CP, educators, occupational therapists, physical therapists, physicians, and speech–language pathologists. Results The interrater reliability of the CFCS was 0.66 between two professionals and 0.49 between a parent and a professional. Professional interrater reliability improved to 0.77 for classification of children older than 4 years. The test–retest reliability was 0.82. Interpretation The CFCS demonstrates content validity and shows very good test–retest reliability, good professional interrater reliability, and moderate parent–professional interrater reliability. Combining the CFCS with the Gross Motor Function Classification System and the Manual Ability Classification System contributes to a functional performance view of daily life for individuals with CP, in accordance with the World Health Organization’s International Classification of Functioning, Disability and Health. PMID:21707596
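
Interrater reliability of ordinal classification levels such as the CFCS is commonly summarized with a weighted kappa. A minimal sketch with hypothetical ratings; the quadratic weighting is an assumption, not necessarily the scheme used in the study:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical CFCS levels (I-V coded 1-5) assigned to ten children by
# two raters; quadratic weights penalize larger disagreements more.
rater_a = [1, 2, 2, 3, 4, 5, 3, 2, 1, 4]
rater_b = [1, 2, 3, 3, 4, 5, 2, 2, 1, 5]
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
```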

  1. LBP and SIFT based facial expression recognition

    NASA Astrophysics Data System (ADS)

    Sumer, Omer; Gunes, Ece O.

    2015-02-01

    This study compares the performance of local binary patterns (LBP) and the scale-invariant feature transform (SIFT) with support vector machines (SVMs) in automatic classification of discrete facial expressions. Facial expression recognition is a multiclass classification problem, and seven classes (happiness, anger, sadness, disgust, surprise, fear, and contempt) are classified. Using SIFT feature vectors and a linear SVM, 93.1% mean accuracy is achieved on the CK+ database. The performance of the LBP-based classifier with a linear SVM is reported on SFEW using the strictly person-independent (SPI) protocol; the seven-class mean accuracy on SFEW is 59.76%. Experiments on both databases showed that LBP features can be fairly descriptive if good localization of facial points and a suitable partitioning strategy are followed.

  2. Classification of single-trial auditory events using dry-wireless EEG during real and motion simulated flight.

    PubMed

    Callan, Daniel E; Durantin, Gautier; Terzibas, Cengiz

    2015-01-01

    Application of neuro-augmentation technology based on dry-wireless EEG may be considerably beneficial for aviation and space operations because of the inherent dangers involved. In this study we evaluate classification performance of perceptual events using a dry-wireless EEG system during motion-platform-based flight simulation and actual flight in an open-cockpit biplane, to determine whether the system can be used in the presence of considerable environmental and physiological artifacts. A passive task involving 200 random auditory presentations of a chirp sound was used for evaluation. The advantage of this auditory task is that it does not interfere with the perceptual-motor processes involved in piloting the plane. Classification was based on identifying the presentation of a chirp sound vs. silent periods. Independent component analysis (ICA) and Kalman filtering were assessed for their ability to enhance classification performance by separating brain activity related to the auditory event from non-task-related brain activity and artifacts. The results of permutation testing revealed that single-trial classification of the presence or absence of an auditory event was significantly above chance for all conditions on a novel test set. The best performance was achieved with both ICA and Kalman filtering relative to no processing: Platform Off (83.4% vs. 78.3%), Platform On (73.1% vs. 71.6%), Biplane Engine Off (81.1% vs. 77.4%), and Biplane Engine On (79.2% vs. 66.1%). This experiment demonstrates that dry-wireless EEG can be used in environments with considerable vibration, wind, acoustic noise, and physiological artifacts, and can achieve the good single-trial classification performance necessary for future successful application of neuro-augmentation technology based on brain-machine interfaces.

  3. Classification Model for Damage Localization in a Plate Structure

    NASA Astrophysics Data System (ADS)

    Janeliukstis, R.; Ruchevskis, S.; Chate, A.

    2018-01-01

    The present study is devoted to the problem of damage localization by means of data classification. The commercial ANSYS finite-element program was used to model a cantilevered composite plate equipped with numerous strain sensors. The plate was divided into zones, and, for data classification purposes, each zone housed several points to which a point mass of 5% and 10% of the plate mass was applied. At each of these points, a numerical modal analysis was performed, from which the first few natural frequencies and strain readings were extracted. The strain data for every point were the input to a classification procedure involving k-nearest neighbors and decision trees. The classification model was trained and optimized by fine-tuning the key parameters of both algorithms. Finally, two new query points were simulated and classified by assigning a label to one of the zones of the plate, thus localizing these points. Damage localization results were compared for both algorithms and were found to be in good agreement with the actual application positions of the point load.
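
The zone-classification step can be sketched with a k-nearest-neighbors classifier; the simulated strain readings, zone count, and noise levels below are assumptions:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_zones, points_per_zone, n_sensors = 6, 20, 10

# Simulated strain-sensor readings: each zone shifts the sensor pattern.
centers = rng.normal(scale=3.0, size=(n_zones, n_sensors))
X = np.vstack([c + rng.normal(scale=0.5, size=(points_per_zone, n_sensors))
               for c in centers])
zones = np.repeat(np.arange(n_zones), points_per_zone)

knn = KNeighborsClassifier(n_neighbors=3).fit(X, zones)

# A new query point is localized by assigning it to the nearest zone.
query = centers[2] + rng.normal(scale=0.5, size=n_sensors)
predicted_zone = knn.predict([query])[0]
```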

  4. Automatic Classification of Medical Text: The Influence of Publication Form

    PubMed Central

    Cole, William G.; Michael, Patricia A.; Stewart, James G.; Blois, Marsden S.

    1988-01-01

    Previous research has shown that within the domain of medical journal abstracts the statistical distribution of words is neither random nor uniform, but is highly characteristic. Many words are used mainly or solely by one medical specialty or when writing about one particular level of description. Due to this regularity of usage, automatic classification within journal abstracts has proved quite successful. The present research asks two further questions. It investigates whether this statistical regularity and automatic classification success can also be achieved in medical textbook chapters. It then goes on to see whether the statistical distribution found in textbooks is sufficiently similar to that found in abstracts to permit accurate classification of abstracts based solely on previous knowledge of textbooks. 14 textbook chapters and 45 MEDLINE abstracts were submitted to an automatic classification program that had been trained only on chapters drawn from a standard textbook series. Statistical analysis of the properties of abstracts vs. chapters revealed important differences in word use. Automatic classification performance was good for chapters, but poor for abstracts.

  5. Exploration of Force Myography and surface Electromyography in hand gesture classification.

    PubMed

    Jiang, Xianta; Merhi, Lukas-Karim; Xiao, Zhen Gang; Menon, Carlo

    2017-03-01

    Whereas pressure sensors have increasingly received attention as a non-invasive interface for hand gesture recognition, their performance has not been comprehensively evaluated. This work examined the performance of hand gesture classification using Force Myography (FMG) and surface Electromyography (sEMG) technologies by performing 3 sets of 48 hand gestures using a prototyped FMG band and an array of commercial sEMG sensors worn on the wrist and forearm simultaneously. The results show that the FMG band achieved classification accuracies as good as the high-quality, commercially available sEMG system at both wrist and forearm positions; specifically, using only 8 Force Sensitive Resistors (FSRs), the FMG band achieved accuracies of 91.2% and 83.5% in classifying the 48 hand gestures in cross-validation and cross-trial evaluations, higher than those of sEMG (84.6% and 79.1%). Using all 16 FSRs on the band, the device achieved high accuracies of 96.7% and 89.4% in cross-validation and cross-trial evaluations.

  6. Coupled dimensionality reduction and classification for supervised and semi-supervised multilabel learning

    PubMed Central

    Gönen, Mehmet

    2014-01-01

    Coupled training of dimensionality reduction and classification was proposed previously to improve prediction performance for single-label problems. Following this line of research, in this paper, we first introduce a novel Bayesian method that combines linear dimensionality reduction with linear binary classification for supervised multilabel learning and present a deterministic variational approximation algorithm to learn the proposed probabilistic model. We then extend the proposed method to find the intrinsic dimensionality of the projected subspace using automatic relevance determination and to handle semi-supervised learning using a low-density assumption. We perform supervised learning experiments on four benchmark multilabel learning data sets by comparing our method with baseline linear dimensionality reduction algorithms. These experiments show that the proposed approach achieves good performance in terms of Hamming loss, average AUC, macro F1, and micro F1 on held-out test data. The low-dimensional embeddings obtained by our method are also very useful for exploratory data analysis. We also show the effectiveness of our approach in finding intrinsic subspace dimensionality and in semi-supervised learning tasks. PMID:24532862

  7. Coupled dimensionality reduction and classification for supervised and semi-supervised multilabel learning.

    PubMed

    Gönen, Mehmet

    2014-03-01

    Coupled training of dimensionality reduction and classification was proposed previously to improve prediction performance for single-label problems. Following this line of research, in this paper, we first introduce a novel Bayesian method that combines linear dimensionality reduction with linear binary classification for supervised multilabel learning and present a deterministic variational approximation algorithm to learn the proposed probabilistic model. We then extend the proposed method to find the intrinsic dimensionality of the projected subspace using automatic relevance determination and to handle semi-supervised learning using a low-density assumption. We perform supervised learning experiments on four benchmark multilabel learning data sets by comparing our method with baseline linear dimensionality reduction algorithms. These experiments show that the proposed approach achieves good performance in terms of Hamming loss, average AUC, macro F1, and micro F1 on held-out test data. The low-dimensional embeddings obtained by our method are also very useful for exploratory data analysis. We also show the effectiveness of our approach in finding intrinsic subspace dimensionality and in semi-supervised learning tasks.

  8. Implementation of several mathematical algorithms to breast tissue density classification

    NASA Astrophysics Data System (ADS)

    Quintana, C.; Redondo, M.; Tirao, G.

    2014-02-01

    The accuracy of mammographic abnormality detection methods is strongly dependent on breast tissue characteristics: dense breast tissue can hide lesions, causing cancer to be detected at later stages. In addition, breast tissue density is widely accepted to be an important risk indicator for the development of breast cancer. This paper presents the implementation and performance of different mathematical algorithms designed to standardize the categorization of mammographic images according to the American College of Radiology classifications. These mathematical techniques are based on intrinsic property calculations and on comparison with an ideal homogeneous image (joint entropy, mutual information, normalized cross-correlation, and index Q) as categorization parameters. The algorithms were evaluated on 100 cases from the mammographic data sets provided by the Ministerio de Salud de la Provincia de Córdoba, Argentina, Programa de Prevención del Cáncer de Mama (Department of Public Health, Córdoba, Argentina, Breast Cancer Prevention Program). The obtained breast classifications were compared with expert medical diagnoses, showing good performance. The implemented algorithms showed high potential for classifying breasts into tissue-density categories.
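
Mutual information against an ideal homogeneous reference image, one of the categorization parameters named above, can be sketched from joint histograms; the random test image and bin count are assumptions:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information (bits) between two equal-size images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal of image A
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal of image B
    nz = p_xy > 0                           # skip empty histogram cells
    return float((p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])).sum())

rng = np.random.default_rng(0)
mammogram = rng.random((64, 64))
ideal = np.full((64, 64), 0.5)              # ideal homogeneous reference
mi_self = mutual_information(mammogram, mammogram)  # high: image vs. itself
mi_ref = mutual_information(mammogram, ideal)       # zero: constant reference
```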

  9. Photometric classification of type Ia supernovae in the SuperNova Legacy Survey with supervised learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Möller, A.; Ruhlmann-Kleider, V.; Leloup, C.

    In the era of large astronomical surveys, photometric classification of supernovae (SNe) has become an important research field due to limited spectroscopic resources for candidate follow-up and classification. In this work, we present a method to photometrically classify type Ia supernovae based on machine learning, with redshifts derived from the SN light curves. This method is implemented on real data from the SNLS deferred pipeline, a purely photometric pipeline that identifies SNe Ia at high redshifts (0.2 < z < 1.1). Our method consists of two stages: feature extraction (obtaining the SN redshift from photometry and estimating light-curve shape parameters) and machine learning classification. We study the performance of different algorithms such as Random Forest and Boosted Decision Trees. We evaluate the performance using SN simulations and real data from the first 3 years of the Supernova Legacy Survey (SNLS), which contains large spectroscopically and photometrically classified type Ia samples. Using the Area Under the Curve (AUC) metric, where perfect classification is given by 1, we find that our best-performing classifier (Extreme Gradient Boosting Decision Tree) has an AUC of 0.98. We show that it is possible to obtain a large photometrically selected type Ia SN sample with an estimated contamination of less than 5%. When applied to data from the first three years of SNLS, we obtain 529 events. We investigate the differences between classifying simulated SNe and real SN survey data. In particular, we find that applying a thorough set of selection cuts to the SN sample is essential for good classification. This work demonstrates for the first time the feasibility of machine learning classification in a high-z SN survey with application to real SN data.
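
The two-stage approach (features, then boosted-tree classification evaluated by AUC and contamination) can be sketched with scikit-learn's GradientBoostingClassifier as a stand-in for the XGBoost model used in the paper; the synthetic features and the 0.8 selection threshold are assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for light-curve features (shape parameters, photo-z);
# class 1 plays the role of "type Ia".
X, y = make_classification(n_samples=1000, n_features=10, n_informative=6,
                           weights=[0.7, 0.3], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

gbdt = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
scores = gbdt.predict_proba(X_te)[:, 1]

# AUC of 1.0 would be perfect Ia vs. non-Ia separation.
auc = roc_auc_score(y_te, scores)

# A probability threshold trades sample size against contamination.
selected = scores > 0.8
contamination = ((y_te == 0) & selected).sum() / max(selected.sum(), 1)
```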

  10. Plus Disease in Retinopathy of Prematurity: Improving Diagnosis by Ranking Disease Severity and Using Quantitative Image Analysis.

    PubMed

    Kalpathy-Cramer, Jayashree; Campbell, J Peter; Erdogmus, Deniz; Tian, Peng; Kedarisetti, Dharanish; Moleta, Chace; Reynolds, James D; Hutcheson, Kelly; Shapiro, Michael J; Repka, Michael X; Ferrone, Philip; Drenser, Kimberly; Horowitz, Jason; Sonmez, Kemal; Swan, Ryan; Ostmo, Susan; Jonas, Karyn E; Chan, R V Paul; Chiang, Michael F

    2016-11-01

    To determine expert agreement on relative retinopathy of prematurity (ROP) disease severity, to determine whether computer-based image analysis can model relative disease severity, and to propose consideration of a more continuous severity score for ROP. We developed 2 databases of clinical images of varying disease severity (100 images and 34 images) as part of the Imaging and Informatics in ROP (i-ROP) cohort study and recruited expert physician, nonexpert physician, and nonphysician graders to classify and perform pairwise comparisons on both databases. Participants were 6 expert ROP clinician-scientists, each with a minimum of 10 years of clinical ROP experience and 5 ROP publications, and 5 image graders (3 physicians and 2 nonphysician graders), who analyzed images that were obtained during routine ROP screening in neonatal intensive care units. Images in both databases were ranked by average disease classification (classification ranking), by pairwise comparison using the Elo rating method (comparison ranking), and by correlation with the i-ROP computer-based image analysis system. The main outcome measures were interexpert agreement (weighted κ statistic), the correlation coefficient (CC) between experts on pairwise comparisons, and the correlation between expert rankings and computer-based image analysis modeling. There was variable interexpert agreement on diagnostic classification of disease (plus, preplus, or normal) among the 6 experts (mean weighted κ, 0.27; range, 0.06-0.63), but good correlation between experts on comparison ranking of disease severity (mean CC, 0.84; range, 0.74-0.93) on the set of 34 images. Comparison ranking provided a severity ranking that was in good agreement with the ranking obtained by classification ranking (CC, 0.92). Comparison ranking on the larger dataset by both expert and nonexpert graders demonstrated good correlation (mean CC, 0.97; range, 0.95-0.98). The i-ROP system was able to model this continuous severity with good correlation (CC, 0.86).
Experts diagnose plus disease on a continuum, with poor absolute agreement on classification but good relative agreement on disease severity. These results suggest that the use of pairwise rankings and a continuous severity score, such as that provided by the i-ROP system, may improve agreement on disease severity in the future. Copyright © 2016 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
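    The Elo rating method used above for comparison ranking can be illustrated with a minimal sketch; the image names, pairwise judgments, and K-factor below are hypothetical.

```python
def elo_update(r_winner, r_loser, k=32):
    """Standard Elo update: logistic expected score, then a K-factor step."""
    expected_win = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
    delta = k * (1.0 - expected_win)
    return r_winner + delta, r_loser - delta

# Hypothetical pairwise judgments: (image rated more severe, image rated less severe).
comparisons = [("img_a", "img_b"), ("img_a", "img_c"), ("img_b", "img_c"),
               ("img_a", "img_b"), ("img_b", "img_c")]

ratings = {img: 1500.0 for img in ("img_a", "img_b", "img_c")}
for winner, loser in comparisons:
    ratings[winner], ratings[loser] = elo_update(ratings[winner], ratings[loser])

# Higher rating = judged more severe more often; sorting gives the comparison ranking.
ranking = sorted(ratings, key=ratings.get, reverse=True)
print(ranking)
```

    The appeal of pairwise comparison, as the abstract notes, is that graders who disagree on absolute category labels can still agree on which of two images is worse, yielding a consistent continuous ordering.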

  11. Discrimination of crop types with TerraSAR-X-derived information

    NASA Astrophysics Data System (ADS)

    Sonobe, Rei; Tani, Hiroshi; Wang, Xiufeng; Kobayashi, Nobuyuki; Shimamura, Hideki

    Although classification maps are required for agricultural management and for the estimation of agricultural disaster compensation, techniques for producing them have yet to be established. This paper describes a comparison of three different classification algorithms for mapping crops in Hokkaido, Japan, using TerraSAR-X (including TanDEM-X) dual-polarimetric data. In the study area, beans, beets, grasslands, maize, potatoes and winter wheat were cultivated. In this study, classification using TerraSAR-X-derived information was performed. Coherence values, polarimetric parameters and gamma nought values were also obtained and evaluated regarding their usefulness in crop classification. Accurate classification may be possible with currently existing supervised learning models. A comparison between the classification and regression tree (CART), support vector machine (SVM) and random forests (RF) algorithms was performed. Even though J-M distances were lower than 1.0 on all TerraSAR-X acquisition days, good results were achieved (e.g., separability between winter wheat and grass) due to the characteristics of the machine learning algorithms. It was found that SVM performed best, achieving an overall accuracy of 95.0% based on the polarimetric parameters and gamma nought values for HH and VV polarizations. With the exception of grassland, the misclassified fields were less than 100 a (ares) in area, and 79.5-96.3% were less than 200 a. When a feature such as a road or windbreak forest is present in the TerraSAR-X data, the ratio of its extent to that of the field is relatively higher for smaller fields, which leads to misclassifications.

  12. A Two-Step Classification Approach to Distinguishing Similar Objects in Mobile LiDAR Point Clouds

    NASA Astrophysics Data System (ADS)

    He, H.; Khoshelham, K.; Fraser, C.

    2017-09-01

    Nowadays, lidar is widely used in cultural heritage documentation, urban modeling and driverless car technology for its fast and accurate 3D scanning ability. However, full exploitation of the potential of point cloud data for efficient and automatic object recognition remains elusive. Recently, feature-based methods have become very popular in object recognition on account of their good performance in capturing object details. Compared with global features describing the whole shape of an object, local features recording fractional details are more discriminative and are applicable to object classes with considerable similarity. In this paper, we propose a two-step classification approach based on point feature histograms and the bag-of-features method for automatic recognition of similar objects in mobile lidar point clouds. Lamp posts, street lights and traffic signs are grouped into one category in the first-step classification because of their mutual similarity compared with trees and vehicles. A finer classification of lamp posts, street lights and traffic signs, based on the result of the first-step classification, is implemented in the second step. The proposed two-step classification approach is shown to yield a considerable improvement over the conventional one-step classification approach.

  13. A Neural Network-Based Gait Phase Classification Method Using Sensors Equipped on Lower Limb Exoskeleton Robots

    PubMed Central

    Jung, Jun-Young; Heo, Wonho; Yang, Hyundae; Park, Hyunsub

    2015-01-01

    An exact classification of different gait phases is essential to enable the control of exoskeleton robots and detect the intentions of users. We propose a gait phase classification method based on neural networks using sensor signals from lower limb exoskeleton robots. In such robots, foot sensors with force sensing resistors are commonly used to classify gait phases. We describe classifiers that use the orientation of each lower limb segment and the angular velocities of the joints to output the current gait phase. Experiments to obtain the input signals and desired outputs for the learning and validation process are conducted, and two neural network methods (a multilayer perceptron and nonlinear autoregressive with external inputs (NARX)) are used to develop an optimal classifier. Offline and online evaluations using four criteria are used to compare the performance of the classifiers. The proposed NARX-based method exhibits sufficiently good performance to replace foot sensors as a means of classifying gait phases. PMID:26528986

  14. A Neural Network-Based Gait Phase Classification Method Using Sensors Equipped on Lower Limb Exoskeleton Robots.

    PubMed

    Jung, Jun-Young; Heo, Wonho; Yang, Hyundae; Park, Hyunsub

    2015-10-30

    An exact classification of different gait phases is essential to enable the control of exoskeleton robots and detect the intentions of users. We propose a gait phase classification method based on neural networks using sensor signals from lower limb exoskeleton robots. In such robots, foot sensors with force sensing resistors are commonly used to classify gait phases. We describe classifiers that use the orientation of each lower limb segment and the angular velocities of the joints to output the current gait phase. Experiments to obtain the input signals and desired outputs for the learning and validation process are conducted, and two neural network methods (a multilayer perceptron and nonlinear autoregressive with external inputs (NARX)) are used to develop an optimal classifier. Offline and online evaluations using four criteria are used to compare the performance of the classifiers. The proposed NARX-based method exhibits sufficiently good performance to replace foot sensors as a means of classifying gait phases.
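    A minimal sketch of the multilayer-perceptron variant of such a classifier, assuming synthetic stand-ins for the segment orientations and joint angular velocities (the NARX model and the four-criteria evaluation of the abstract are not reproduced here):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in for segment orientations and joint angular velocities,
# with one feature-space region per gait phase (4 phases, 8 channels).
n_per_phase, n_features = 200, 8
X = np.vstack([rng.normal(loc=phase, scale=0.5, size=(n_per_phase, n_features))
               for phase in range(4)])
y = np.repeat(np.arange(4), n_per_phase)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                    random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"gait phase accuracy = {acc:.2f}")
```

    In real use the classifier would be fed a sliding window of kinematic samples rather than independent draws; the point of the sketch is only the phase-label-from-kinematics mapping.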

  15. Audio Classification in Speech and Music: A Comparison between a Statistical and a Neural Approach

    NASA Astrophysics Data System (ADS)

    Bugatti, Alessandro; Flammini, Alessandra; Migliorati, Pierangelo

    2002-12-01

    We focus attention on the problem of audio classification into speech and music for multimedia applications. In particular, we present a comparison between two different techniques for speech/music discrimination. The first method is based on the zero crossing rate and Bayesian classification. It is very simple from a computational point of view and gives good results in the case of pure music or speech. The simulation results show that some performance degradation arises when the music segment also contains speech superimposed on the music, or strong rhythmic components. To overcome these problems, we propose a second method that uses more features and is based on neural networks (specifically a multilayer perceptron). In this case we obtain better performance, at the expense of a limited growth in computational complexity. In practice, the proposed neural network is simple to implement if a suitable polynomial is used as the activation function, and a real-time implementation is possible even on low-cost embedded systems.
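    The zero crossing rate feature at the core of the first method is straightforward to compute. A sketch on a synthetic tone and noise signal follows; the threshold rule at the end is a toy stand-in for the Bayesian classifier, not the authors' decision rule.

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    signs = np.sign(frame)
    return float(np.mean(signs[:-1] != signs[1:]))

sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)                    # steady pitch: low, regular ZCR
noise = np.random.default_rng(0).standard_normal(sr)  # noise-like (unvoiced): high ZCR

zcr_tone = zero_crossing_rate(tone)
zcr_noise = zero_crossing_rate(noise)
print(f"tone ZCR = {zcr_tone:.3f}, noise ZCR = {zcr_noise:.3f}")

# Toy threshold standing in for the Bayesian decision rule of the abstract.
label = "speech-like" if zcr_noise > 0.1 else "music-like"
```

    Speech alternates between voiced (low-ZCR) and unvoiced (high-ZCR) segments, so the variance of per-frame ZCR is typically what separates it from music.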

  16. Granular support vector machines with association rules mining for protein homology prediction.

    PubMed

    Tang, Yuchun; Jin, Bo; Zhang, Yan-Qing

    2005-01-01

    Protein homology prediction between protein sequences is one of the critical problems in computational biology. Such complex classification problems are common in medical and biological information processing applications. How to build a model with superior generalization capability from training samples is an essential issue for mining knowledge to accurately predict/classify unseen new samples and to effectively support human experts in making correct decisions. A new learning model called granular support vector machines (GSVM) is proposed based on our previous work. GSVM systematically and formally combines the principles of statistical learning theory and granular computing theory and thus provides an interesting new mechanism for addressing complex classification problems. It works by building a sequence of information granules and then building support vector machines (SVMs) in some of these information granules on demand. A good granulation method for finding suitable granules is crucial for modeling a GSVM with good performance. In this paper, we also propose an association rules-based granulation method. For the granules induced by association rules with high enough confidence and significant support, we leave them as they are because of their high "purity" and significant effect on simplifying the classification task. For every other granule, an SVM is modeled to discriminate the corresponding data. In this way, a complex classification problem is divided into multiple smaller problems so that the learning task is simplified. The proposed algorithm, here named GSVM-AR, is compared with SVM on the KDDCUP04 protein homology prediction data. The experimental results show that finding the splitting hyperplane is not a trivial task (we should be careful in selecting the association rules to avoid overfitting) and that GSVM-AR shows significant improvement over building a single SVM in the whole feature space. 
Another advantage of GSVM-AR is its practical utility: it is easy to implement. More importantly and more interestingly, GSVM provides a new mechanism for addressing complex classification problems.
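    The granulation-then-SVM idea can be sketched on synthetic data. This is an illustration only: the "high-confidence rule" below is hand-coded rather than mined by association rule learning, and the data are constructed so the rule's granule is pure.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 4))
# Synthetic labels: one region (the "granule") is pure class 1; the rest is harder.
y = np.where(X[:, 0] > 1.0, 1,
             (X[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(int))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Granule defined by a hand-coded high-confidence rule: "feature 0 > 1.0 -> class 1".
# Granule members are labeled directly; a single SVM handles everything outside it.
in_granule_tr = X_tr[:, 0] > 1.0
svm = SVC(kernel="rbf").fit(X_tr[~in_granule_tr], y_tr[~in_granule_tr])

in_granule_te = X_te[:, 0] > 1.0
pred = np.where(in_granule_te, 1, svm.predict(X_te))
acc = float(np.mean(pred == y_te))
print(f"rule-plus-SVM accuracy = {acc:.3f}")
```

    Carving off the pure granule shrinks and simplifies the problem the SVM must solve, which is the simplification effect the abstract describes.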

  17. Classification as clustering: a Pareto cooperative-competitive GP approach.

    PubMed

    McIntyre, Andrew R; Heywood, Malcolm I

    2011-01-01

    Intuitively, population-based algorithms such as genetic programming provide a natural environment for supporting solutions that learn to decompose the overall task between multiple individuals, or a team. This work presents a framework for evolving teams without recourse to prespecifying the number of cooperating individuals. To do so, each individual evolves a mapping to a distribution of outcomes that, following clustering, establishes the parameterization of a (Gaussian) local membership function. This gives individuals the opportunity to represent subsets of tasks, where the overall task is that of classification under the supervised learning domain. Thus, rather than each team member representing an entire class, individuals are free to identify unique subsets of the overall classification task. The framework is supported by techniques from evolutionary multiobjective optimization (EMO) and Pareto competitive coevolution. EMO establishes the basis for encouraging individuals to provide accurate yet nonoverlapping behaviors, whereas competitive coevolution provides the mechanism for scaling to potentially large unbalanced datasets. Benchmarking is performed against recent examples of nonlinear SVM classifiers over 12 UCI datasets with between 150 and 200,000 training instances. Solutions from the proposed coevolutionary multiobjective GP framework appear to provide a good balance between classification performance and model complexity, especially as the dataset instance count increases.

  18. Improving oil classification quality from oil spill fingerprint beyond six sigma approach.

    PubMed

    Juahir, Hafizan; Ismail, Azimah; Mohamed, Saiful Bahri; Toriman, Mohd Ekhwan; Kassim, Azlina Md; Zain, Sharifuddin Md; Ahmad, Wan Kamaruzaman Wan; Wah, Wong Kok; Zali, Munirah Abdul; Retnam, Ananthy; Taib, Mohd Zaki Mohd; Mokhtar, Mazlin

    2017-07-15

    This study involves the use of quality engineering in oil spill classification based on oil spill fingerprinting from GC-FID and GC-MS, employing the six sigma approach. The oil spills were recovered from various water areas of Peninsular Malaysia and Sabah (East Malaysia). The study used six sigma methodologies that effectively serve as the problem-solving framework for oil classification extracted from the complex mixtures of the spilled oil dataset. Linking six sigma analysis with quality engineering improved organizational performance toward achieving the objectives of environmental forensics. The study reveals that oil spills are discriminated into four groups, viz. diesel, hydrocarbon fuel oil (HFO), mixture oil lubricant and fuel oil (MOLFO) and waste oil (WO), according to the similarity of their intrinsic chemical properties. Validation confirmed that these four discriminant components, diesel, HFO, MOLFO and WO, dominate the oil types with a total variance of 99.51%, with ANOVA giving F stat > F critical at the 95% confidence level and a chi-square goodness-of-fit test statistic of 74.87. Results obtained from this study reveal that by employing the six sigma approach in a data-driven problem such as oil spill classification, good decision making can be expedited. Copyright © 2017. Published by Elsevier Ltd.

  19. Recursive Partitioning Analysis for New Classification of Patients With Esophageal Cancer Treated by Chemoradiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nomura, Motoo, E-mail: excell@hkg.odn.ne.jp; Department of Clinical Oncology, Aichi Cancer Center Hospital, Nagoya; Department of Radiation Oncology, Aichi Cancer Center Hospital, Nagoya

    2012-11-01

    Background: The 7th edition of the American Joint Committee on Cancer staging system does not include lymph node size in the guidelines for staging patients with esophageal cancer. The objectives of this study were to determine the prognostic impact of the maximum metastatic lymph node diameter (ND) on survival and to develop and validate a new staging system for patients with esophageal squamous cell cancer who were treated with definitive chemoradiotherapy (CRT). Methods: Information on 402 patients with esophageal cancer undergoing CRT at two institutions was reviewed. Univariate and multivariate analyses of data from one institution were used to assess the impact of clinical factors on survival, and recursive partitioning analysis was performed to develop the new staging classification. To assess its clinical utility, the new classification was validated using data from the second institution. Results: By multivariate analysis, gender, T, N, and ND stages were independently and significantly associated with survival (p < 0.05). The resulting new staging classification was based on the T and ND. The four new stages led to good separation of survival curves in both the developmental and validation datasets (p < 0.05). Conclusions: Our results showed that lymph node size is a strong independent prognostic factor and that the new staging system, which incorporated lymph node size, provided good prognostic power, and discriminated effectively for patients with esophageal cancer undergoing CRT.
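    Recursive partitioning, as used above, grows a tree whose leaves define prognostic strata. A sketch with a shallow decision tree stands in for the survival-specific partitioning of the study; the T-stage and node-diameter variables and the simulated outcomes are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 500
t_stage = rng.integers(1, 5, n)        # hypothetical T stage (1-4)
nd_cm = rng.uniform(0.0, 4.0, n)       # hypothetical max metastatic node diameter (cm)
# Simulated outcome: death more likely with higher T stage and larger nodes.
risk = 0.15 * t_stage + 0.2 * nd_cm
died = (rng.uniform(size=n) < risk / risk.max()).astype(int)

X = np.column_stack([t_stage, nd_cm])
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, died)
print(export_text(tree, feature_names=["T_stage", "ND_cm"]))

# The leaves of the shallow tree act as prognostic strata (the candidate "stages").
n_strata = len(np.unique(tree.apply(X)))
print("number of strata:", n_strata)
```

    In practice, clinical recursive partitioning splits on a survival criterion (e.g. log-rank statistics) rather than classification impurity; the tree here only illustrates how data-driven splits yield a small number of stages.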

  20. An evaluation of classification systems for stillbirth

    PubMed Central

    Flenady, Vicki; Frøen, J Frederik; Pinar, Halit; Torabi, Rozbeh; Saastad, Eli; Guyon, Grace; Russell, Laurie; Charles, Adrian; Harrison, Catherine; Chauke, Lawrence; Pattinson, Robert; Koshy, Rachel; Bahrin, Safiah; Gardener, Glenn; Day, Katie; Petersson, Karin; Gordon, Adrienne; Gilshenan, Kristen

    2009-01-01

    Background Audit and classification of stillbirths is an essential part of clinical practice and a crucial step towards stillbirth prevention. Due to the limitations of the ICD system and lack of an international approach to an acceptable solution, numerous disparate classification systems have emerged. We assessed the performance of six contemporary systems to inform the development of an internationally accepted approach. Methods We evaluated the following systems: Amended Aberdeen, Extended Wigglesworth; PSANZ-PDC, ReCoDe, Tulip and CODAC. Nine teams from 7 countries applied the classification systems to cohorts of stillbirths from their regions using 857 stillbirth cases. The main outcome measures were: the ability to retain the important information about the death using the InfoKeep rating; the ease of use according to the Ease rating (both measures used a five-point scale with a score <2 considered unsatisfactory); inter-observer agreement and the proportion of unexplained stillbirths. A randomly selected subset of 100 stillbirths was used to assess inter-observer agreement. Results InfoKeep scores were significantly different across the classifications (p ≤ 0.01) due to low scores for Wigglesworth and Aberdeen. CODAC received the highest mean (SD) score of 3.40 (0.73) followed by PSANZ-PDC, ReCoDe and Tulip [2.77 (1.00), 2.36 (1.21), 1.92 (1.24) respectively]. Wigglesworth and Aberdeen resulted in a high proportion of unexplained stillbirths and CODAC and Tulip the lowest. While Ease scores were different (p ≤ 0.01), all systems received satisfactory scores; CODAC received the highest score. Aberdeen and Wigglesworth showed poor agreement with kappas of 0.35 and 0.25 respectively. Tulip performed best with a kappa of 0.74. The remainder had good to fair agreement. Conclusion The Extended Wigglesworth and Amended Aberdeen systems cannot be recommended for classification of stillbirths. 
Overall, CODAC performed best with PSANZ-PDC and ReCoDe performing well. Tulip was shown to have the best agreement and a low proportion of unexplained stillbirths. The virtues of these systems need to be considered in the development of an international solution to classification of stillbirths. Further studies are required on the performance of classification systems in the context of developing countries. Suboptimal agreement highlights the importance of instituting measures to ensure consistency for any classification system. PMID:19538759

  1. An evaluation of classification systems for stillbirth.

    PubMed

    Flenady, Vicki; Frøen, J Frederik; Pinar, Halit; Torabi, Rozbeh; Saastad, Eli; Guyon, Grace; Russell, Laurie; Charles, Adrian; Harrison, Catherine; Chauke, Lawrence; Pattinson, Robert; Koshy, Rachel; Bahrin, Safiah; Gardener, Glenn; Day, Katie; Petersson, Karin; Gordon, Adrienne; Gilshenan, Kristen

    2009-06-19

    Audit and classification of stillbirths is an essential part of clinical practice and a crucial step towards stillbirth prevention. Due to the limitations of the ICD system and lack of an international approach to an acceptable solution, numerous disparate classification systems have emerged. We assessed the performance of six contemporary systems to inform the development of an internationally accepted approach. We evaluated the following systems: Amended Aberdeen, Extended Wigglesworth; PSANZ-PDC, ReCoDe, Tulip and CODAC. Nine teams from 7 countries applied the classification systems to cohorts of stillbirths from their regions using 857 stillbirth cases. The main outcome measures were: the ability to retain the important information about the death using the InfoKeep rating; the ease of use according to the Ease rating (both measures used a five-point scale with a score <2 considered unsatisfactory); inter-observer agreement and the proportion of unexplained stillbirths. A randomly selected subset of 100 stillbirths was used to assess inter-observer agreement. InfoKeep scores were significantly different across the classifications (p ≤ 0.01) due to low scores for Wigglesworth and Aberdeen. CODAC received the highest mean (SD) score of 3.40 (0.73) followed by PSANZ-PDC, ReCoDe and Tulip [2.77 (1.00), 2.36 (1.21), 1.92 (1.24) respectively]. Wigglesworth and Aberdeen resulted in a high proportion of unexplained stillbirths and CODAC and Tulip the lowest. While Ease scores were different (p ≤ 0.01), all systems received satisfactory scores; CODAC received the highest score. Aberdeen and Wigglesworth showed poor agreement with kappas of 0.35 and 0.25 respectively. Tulip performed best with a kappa of 0.74. The remainder had good to fair agreement. The Extended Wigglesworth and Amended Aberdeen systems cannot be recommended for classification of stillbirths. Overall, CODAC performed best with PSANZ-PDC and ReCoDe performing well. 
Tulip was shown to have the best agreement and a low proportion of unexplained stillbirths. The virtues of these systems need to be considered in the development of an international solution to classification of stillbirths. Further studies are required on the performance of classification systems in the context of developing countries. Suboptimal agreement highlights the importance of instituting measures to ensure consistency for any classification system.
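    The inter-observer kappas reported above can be computed directly with Cohen's κ, which corrects observed agreement for agreement expected by chance. The observer codes below are hypothetical, for illustration only.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical cause-of-death codes assigned by two observers to ten stillbirths.
observer_1 = ["A", "A", "B", "B", "C", "C", "A", "B", "C", "A"]
observer_2 = ["A", "A", "B", "C", "C", "C", "A", "B", "B", "A"]

kappa = cohen_kappa_score(observer_1, observer_2)
print(f"kappa = {kappa:.2f}")  # 1 = perfect agreement, 0 = chance-level agreement
```

    Here observed agreement is 8/10 and chance agreement is 0.34, giving κ = (0.80 − 0.34)/(1 − 0.34) ≈ 0.70, which would count as "good" on the scale the abstract uses.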

  2. Comparison of Feature Selection Techniques in Machine Learning for Anatomical Brain MRI in Dementia.

    PubMed

    Tohka, Jussi; Moradi, Elaheh; Huttunen, Heikki

    2016-07-01

    We present a comparative split-half resampling analysis of various data driven feature selection and classification methods for the whole brain voxel-based classification analysis of anatomical magnetic resonance images. We compared support vector machines (SVMs), with or without filter based feature selection, several embedded feature selection methods and stability selection. While comparisons of the accuracy of various classification methods have been reported previously, the variability of the out-of-training sample classification accuracy and the set of selected features due to independent training and test sets have not been previously addressed in a brain imaging context. We studied two classification problems: 1) Alzheimer's disease (AD) vs. normal control (NC) and 2) mild cognitive impairment (MCI) vs. NC classification. In AD vs. NC classification, the variability in the test accuracy due to the subject sample did not vary between different methods and exceeded the variability due to different classifiers. In MCI vs. NC classification, particularly with a large training set, embedded feature selection methods outperformed SVM-based ones with the difference in the test accuracy exceeding the test accuracy variability due to the subject sample. The filter and embedded methods produced divergent feature patterns for MCI vs. NC classification that suggests the utility of the embedded feature selection for this problem when linked with the good generalization performance. The stability of the feature sets was strongly correlated with the number of features selected, weakly correlated with the stability of classification accuracy, and uncorrelated with the average classification accuracy.

  3. Comparison of Random Forest and Support Vector Machine classifiers using UAV remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Piragnolo, Marco; Masiero, Andrea; Pirotti, Francesco

    2017-04-01

    In recent years, surveying with unmanned aerial vehicles (UAVs) has attracted a great amount of attention due to decreasing costs, higher precision and flexibility of use. UAVs have been applied to geomorphological investigations, forestry, precision agriculture, cultural heritage assessment and archaeological purposes, and can also be used for land use and land cover (LULC) classification. In the literature, there are two main approaches to the classification of remote sensing imagery: pixel-based and object-based. On one hand, the pixel-based approach mostly uses training areas to define classes and their respective spectral signatures. On the other hand, object-based classification considers pixels, scale, spatial information and texture information to create homogeneous objects. Machine learning methods, which learn a model from previously seen data, have been applied successfully for classification, and their use is increasing due to the availability of faster computing capabilities. Two machine learning methods which have given good results in previous investigations are Random Forest (RF) and Support Vector Machine (SVM). The goal of this work is to compare the RF and SVM methods for classifying LULC using images collected with a fixed-wing UAV. The classification processing chain uses packages in R, an open-source scripting language for data analysis, which provides all necessary algorithms. The imagery was acquired and processed in November 2015 with cameras recording reflectance in the red, green, blue and near-infrared bands over a testing area on the Agripolis campus in Italy. Images were processed and orthorectified with Agisoft PhotoScan. The orthorectified image is the full data set, and the test sets are derived from partial subsetting of the full data set. Different tests have been carried out, using from 2% to 20% of the total. 
Ten training sets and ten validation sets are obtained from each test set. The control dataset consists of an independent visual classification done by an expert over the whole area. The classes are (i) broadleaf, (ii) building, (iii) grass, (iv) headland access path, (v) road, (vi) sowed land and (vii) vegetable. RF and SVM are applied to each test set. The performance of the methods is evaluated using three accuracy metrics: the Kappa index, classification accuracy and classification error. All three are calculated in three different ways: with K-fold cross validation, using the validation test set and using the full test set. The analysis indicates that SVM obtains better scores under K-fold cross validation and on the validation test set, whereas RF achieves a better result on the full test set. It also seems that SVM performs better with smaller training sets, whereas RF performs better as training sets get larger.
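    A minimal sketch of the RF vs. SVM comparison under K-fold cross-validation. The study's pipeline is in R; this sketch uses Python/scikit-learn instead, with synthetic four-band features standing in for the UAV imagery, so the numbers it prints do not reproduce the study's results.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for per-pixel band values (red, green, blue, near-infrared).
X, y = make_classification(n_samples=600, n_features=4, n_informative=3,
                           n_redundant=0, n_classes=3, n_clusters_per_class=1,
                           random_state=0)

results = {}
for name, clf in [("RF", RandomForestClassifier(random_state=0)),
                  ("SVM", SVC(kernel="rbf"))]:
    scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross-validation
    results[name] = scores.mean()
    print(f"{name}: mean accuracy = {scores.mean():.3f} (+/- {scores.std():.3f})")
```

    Comparing per-fold scores, rather than a single train/test split, is what makes the "SVM better on small training sets, RF better on large ones" kind of conclusion defensible.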

  4. [Classification and treatment of symbrachydactyly. A series of 117 cases].

    PubMed

    Foucher, G; Medina, J; Pajardi, G; Navarro, R

    2000-07-01

    In the present study, a modification has been proposed of the Blauth and Gekeler classification, aimed at a more accurate definition of appropriate surgical treatment. An analysis was made of a series of 120 cases of symbrachydactyly (117 patients); however, surgery was only performed in 86 cases (51 toe transfers in 49 patients; mean age at surgery 12 months). Type I included the separation of short and sometimes stiff fingers; type II, the 'pseudo-cleft', could be subdivided into three groups. Type IIA included those hands with more than two long and frequently hypoplastic digits, regarding which a decision had to be made between removal of rudimentary fingers or their stabilization. In type IIB, hand function was good and surgery was rarely needed. Type III (monodactylous) could also be subdivided into two categories, i.e., normal thumb in type IIIA and hypoplasia in IIIB. Finally, in type IVA, toe transfer surgery was performed on condition that wrist mobility was sufficient to compensate for the insufficient mobility of the artificial thumb on the anterior aspect of the radius. In all cases, a weak but useful pincer movement was obtained, with poor cosmetic results. In the case of toe transfers, surgery was advocated before the age of one year; and although mobility was disappointing (35 degrees active motion), good growth and excellent discrimination (5 mm on average) was observed. Symbrachydactyly is a fairly frequent congenital malformation; its diverse clinical features require a precise classification to better determine adequate treatment management.

  5. X-ray agricultural product inspection: segmentation and classification

    NASA Astrophysics Data System (ADS)

    Casasent, David P.; Talukder, Ashit; Lee, Ha-Woon

    1997-09-01

    Processing of real-time x-ray images of randomly oriented and touching pistachio nuts for product inspection is considered. We describe the image processing used to isolate individual nuts (segmentation). This involves a new watershed transform algorithm. Segmentation results on approximately 3000 x-ray (film) and real time x-ray (linescan) nut images were excellent (greater than 99.9% correct). Initial classification results on film images are presented that indicate that the percentage of infested nuts can be reduced to 1.6% of the crop with only 2% of the good nuts rejected; this performance is much better than present manual methods and other automated classifiers have achieved.

  6. Using clustering and a modified classification algorithm for automatic text summarization

    NASA Astrophysics Data System (ADS)

    Aries, Abdelkrime; Oufaida, Houda; Nouali, Omar

    2013-01-01

    In this paper we describe a modified classification method intended for extractive summarization. The classification in this method does not need a learning corpus; it uses the input text itself. First, we cluster the document sentences to exploit the diversity of topics; then we use a learning algorithm (here, Naive Bayes) on each cluster, considering it as a class. After obtaining the classification model, we calculate the score of each sentence in each class using a scoring model derived from the classification algorithm. These scores are then used to reorder the sentences and extract the first ones as the output summary. We conducted experiments using a corpus of scientific papers, and we compared our results to another summarization system called UNIS. We also examine the impact of tuning the clustering threshold on the resulting summary, as well as the impact of adding more features to the classifier. We found that this method is interesting and gives good performance, and that the addition of new features (which is simple with this method) can improve the summary's accuracy.
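    The cluster-then-classify scoring can be sketched with scikit-learn. This is an assumption-laden illustration: the sentences are toy examples, KMeans on TF-IDF vectors stands in for whatever clustering and threshold the authors used, and only the best sentence per cluster is kept.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy document: six sentences standing in for a scientific paper.
sentences = [
    "Machine learning models classify documents by topic.",
    "Classification accuracy depends on feature quality.",
    "Clustering groups sentences with similar vocabulary.",
    "Cluster labels can serve as classes for a learner.",
    "Summaries keep the highest-scoring sentence per topic.",
    "Scores come from the classifier's class probabilities.",
]

X = TfidfVectorizer().fit_transform(sentences)

# Step 1: cluster the sentences; each cluster becomes a pseudo-class.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Step 2: fit Naive Bayes on the pseudo-classes and score every sentence.
proba = MultinomialNB().fit(X, labels).predict_proba(X)

# Step 3: keep the most representative sentence of each cluster as the summary.
summary_idx = []
for c in sorted(set(labels)):
    members = [i for i, lab in enumerate(labels) if lab == c]
    summary_idx.append(max(members, key=lambda i: proba[i, c]))
print([sentences[i] for i in summary_idx])
```

    Because the pseudo-classes come from the input text itself, no external training corpus is needed, which is the key property the abstract emphasizes.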

  7. 29 CFR 779.353 - Basis for classification.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 3 2013-07-01 2013-07-01 false Basis for classification. 779.353 Section 779.353 Labor... RETAILERS OF GOODS OR SERVICES Exemptions for Certain Retail or Service Establishments Classification of Sales and Establishments in Certain Industries § 779.353 Basis for classification. The general...

  8. Improving the Robustness of Real-Time Myoelectric Pattern Recognition against Arm Position Changes in Transradial Amputees

    PubMed Central

    Geng, Yanjuan; Wei, Yue

    2017-01-01

    Previous studies have shown that arm position variations significantly degrade the classification performance of myoelectric pattern-recognition-based prosthetic control, and the cascade classifier (CC) and multiposition classifier (MPC) have been proposed to minimize such degradation in offline scenarios. However, it remains unknown whether these approaches also perform well in the clinical use of multifunctional prosthesis control. In this study, the online effect of arm position variation on motion identification was evaluated using a motion-test environment (MTE) developed to mimic the real-time control of myoelectric prostheses. The performance of different classifier configurations in reducing the impact of arm position variation was investigated using four real-time metrics based on datasets obtained from transradial amputees. The results showed that, compared to the commonly used motion classification method, the CC and MPC configurations improved the real-time performance across seven classes of movements in five different arm positions (8.7% and 12.7% increments of motion completion rate, respectively). The results also indicated that high offline classification accuracy does not ensure good real-time performance under variable arm positions, which necessitates investigating real-time control performance to gain proper insight into the clinical implementation of EMG-pattern-recognition-based controllers for limb amputees. PMID:28523276

  9. Quantum Algorithm for K-Nearest Neighbors Classification Based on the Metric of Hamming Distance

    NASA Astrophysics Data System (ADS)

    Ruan, Yue; Xue, Xiling; Liu, Heng; Tan, Jianing; Li, Xi

    2017-11-01

    The K-nearest neighbors (KNN) algorithm is commonly used for classification, and also as a subroutine in various complicated machine learning tasks. In this paper, we present a quantum algorithm (QKNN) implementing KNN based on the Hamming distance metric. We put forward a quantum circuit for computing the Hamming distance between a test sample and each feature vector in the training set. Taking advantage of this method, we realize a good analog of the classical KNN algorithm by setting a distance threshold value t to select the k nearest neighbors. As a result, QKNN achieves O(n^3) complexity, which depends only on the dimension of the feature vectors, together with high classification accuracy, outperforming Lloyd's algorithm (Lloyd et al. 2013) and Wiebe's algorithm (Wiebe et al. 2014).
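
    The classical selection rule that the quantum circuit emulates, a majority vote over training vectors within Hamming distance t of the test sample, can be sketched as follows (the toy data and names are illustrative):

```python
from collections import Counter

def hamming(a, b):
    """Number of positions at which two equal-length bit vectors differ."""
    return sum(x != y for x, y in zip(a, b))

def knn_hamming(train, test_vec, t):
    """Classify test_vec by majority vote over training vectors whose
    Hamming distance is below the threshold t (the role t plays in QKNN:
    selecting near neighbors without explicitly sorting all distances)."""
    votes = Counter(label for vec, label in train
                    if hamming(vec, test_vec) < t)
    if not votes:          # no neighbor within distance t
        return None
    return votes.most_common(1)[0][0]

train = [([0, 0, 1, 1], "A"), ([0, 1, 1, 1], "A"), ([1, 1, 0, 0], "B")]
print(knn_hamming(train, [0, 0, 1, 0], t=2))  # prints "A"
```

    The quantum speedup comes from evaluating all the Hamming distances in superposition; the thresholding logic itself is the same.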

  10. A Deep Learning Architecture for Temporal Sleep Stage Classification Using Multivariate and Multimodal Time Series.

    PubMed

    Chambon, Stanislas; Galtier, Mathieu N; Arnal, Pierrick J; Wainrib, Gilles; Gramfort, Alexandre

    2018-04-01

    Sleep stage classification constitutes an important preliminary exam in the diagnosis of sleep disorders. It is traditionally performed by a sleep expert who assigns a sleep stage to each 30 s of signal, based on visual inspection of signals such as electroencephalograms (EEGs), electrooculograms (EOGs), electrocardiograms, and electromyograms (EMGs). We introduce here the first deep learning approach for sleep stage classification that learns end-to-end without computing spectrograms or extracting handcrafted features, that exploits all multivariate and multimodal polysomnography (PSG) signals (EEG, EMG, and EOG), and that can exploit the temporal context of each 30-s window of data. For each modality, the first layer learns linear spatial filters that exploit the array of sensors to increase the signal-to-noise ratio, and the last layer feeds the learnt representation to a softmax classifier. Our model is compared to alternative automatic approaches based on convolutional networks or decision trees. Results obtained on 61 publicly available PSG records with up to 20 EEG channels demonstrate that our network architecture yields state-of-the-art performance. Our study reveals a number of insights on the spatiotemporal distribution of the signal of interest: a good tradeoff for optimal classification performance, measured with balanced accuracy, is to use 6 EEG channels with 2 EOG (left and right) and 3 chin EMG channels. Also, exploiting 1 min of data before and after each data segment offers the strongest improvement when a limited number of channels are available. Like sleep experts, our system exploits the multivariate and multimodal nature of PSG signals in order to deliver state-of-the-art classification performance with a small computational cost.

  11. A survey of the dummy face and human face stimuli used in BCI paradigm.

    PubMed

    Chen, Long; Jin, Jing; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej

    2015-01-15

    It has been shown that human face stimuli are superior to flash-only stimuli in BCI systems. However, human face stimuli may raise copyright infringement problems and are hard to edit according to the requirements of a BCI study. Recently, it was reported that facial expression changes could be produced by changing a curve in a dummy face, which obtained good performance when applied to visual P300-based BCI systems. In this paper, four paradigms were presented, called the dummy face pattern, human face pattern, inverted dummy face pattern, and inverted human face pattern, to evaluate the performance of dummy face stimuli compared with human face stimuli. The key point determining the value of dummy faces in BCI systems was whether dummy face stimuli could obtain performance as good as human face stimuli. Online and offline results of the four paradigms were obtained and comparatively analyzed. They showed no significant difference between dummy faces and human faces in ERPs, classification accuracy, or information transfer rate when applied in BCI systems. Dummy face stimuli could evoke large ERPs and obtain classification accuracy and information transfer rates as high as those of human face stimuli. Since dummy faces are easy to edit and pose no copyright infringement problems, they are a good choice for optimizing the stimuli of BCI systems. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Improving Classification Performance through an Advanced Ensemble Based Heterogeneous Extreme Learning Machines.

    PubMed

    Abuassba, Adnan O M; Zhang, Dezheng; Luo, Xiong; Shaheryar, Ahmad; Ali, Hazrat

    2017-01-01

    Extreme Learning Machine (ELM) is a fast-learning algorithm for a single-hidden-layer feedforward neural network (SLFN). It often has good generalization performance. However, it might overfit the training data due to having more hidden nodes than needed. To improve generalization performance, we use a heterogeneous ensemble approach. We propose an Advanced ELM Ensemble (AELME) for classification, which includes Regularized-ELM, L2-norm-optimized ELM (ELML2), and Kernel-ELM. The ensemble is constructed by training a randomly chosen ELM classifier on a subset of the training data selected through random resampling. The proposed AELM-Ensemble is evolved by employing an objective function that increases diversity and accuracy within the final ensemble. Finally, the class label of unseen data is predicted using a majority-vote approach. Splitting the training data into subsets and incorporating heterogeneous ELM classifiers result in higher prediction accuracy, better generalization, and a lower number of base classifiers, as compared to other models (Adaboost, Bagging, Dynamic ELM ensemble, data-splitting ELM ensemble, and ELM ensemble). The validity of AELME is confirmed through classification on several real-world benchmark datasets.
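
    A minimal sketch of the resampling-plus-majority-vote construction, with toy one-feature threshold stumps standing in for the heterogeneous ELM variants (all names and data are illustrative, not the paper's implementation):

```python
import random
from collections import Counter

def train_stump(data):
    """Toy base learner: one threshold on one feature, chosen to minimize
    training error. Stands in for the heterogeneous ELM classifiers."""
    best = None
    for f in range(len(data[0][0])):
        for thr in sorted({x[f] for x, _ in data}):
            for sign in (1, -1):
                err = sum((1 if sign * (x[f] - thr) >= 0 else 0) != y
                          for x, y in data)
                if best is None or err < best[0]:
                    best = (err, f, thr, sign)
    _, f, thr, sign = best
    return lambda x: 1 if sign * (x[f] - thr) >= 0 else 0

def ensemble_predict(classifiers, x):
    """Majority vote over the ensemble, as in AELME's final step."""
    return Counter(clf(x) for clf in classifiers).most_common(1)[0][0]

random.seed(0)
data = [([0.1], 0), ([0.2], 0), ([0.8], 1), ([0.9], 1), ([0.7], 1), ([0.3], 0)]
# Train each base classifier on a random resample (with replacement) of the data.
ensemble = [train_stump([random.choice(data) for _ in data]) for _ in range(5)]
print(ensemble_predict(ensemble, [0.95]))
```

    The diversity-driven ensemble selection described in the abstract would additionally filter or weight the base classifiers; the sketch shows only the resampling and voting skeleton.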

  13. Improving Classification Performance through an Advanced Ensemble Based Heterogeneous Extreme Learning Machines

    PubMed Central

    Abuassba, Adnan O. M.; Ali, Hazrat

    2017-01-01

    Extreme Learning Machine (ELM) is a fast-learning algorithm for a single-hidden-layer feedforward neural network (SLFN). It often has good generalization performance. However, it might overfit the training data due to having more hidden nodes than needed. To improve generalization performance, we use a heterogeneous ensemble approach. We propose an Advanced ELM Ensemble (AELME) for classification, which includes Regularized-ELM, L2-norm-optimized ELM (ELML2), and Kernel-ELM. The ensemble is constructed by training a randomly chosen ELM classifier on a subset of the training data selected through random resampling. The proposed AELM-Ensemble is evolved by employing an objective function that increases diversity and accuracy within the final ensemble. Finally, the class label of unseen data is predicted using a majority-vote approach. Splitting the training data into subsets and incorporating heterogeneous ELM classifiers result in higher prediction accuracy, better generalization, and a lower number of base classifiers, as compared to other models (Adaboost, Bagging, Dynamic ELM ensemble, data-splitting ELM ensemble, and ELM ensemble). The validity of AELME is confirmed through classification on several real-world benchmark datasets. PMID:28546808

  14. Classification of Multiple Chinese Liquors by Means of a QCM-based E-Nose and MDS-SVM Classifier.

    PubMed

    Li, Qiang; Gu, Yu; Jia, Jing

    2017-01-30

    Chinese liquors are internationally well-known fermented alcoholic beverages. They have unique flavors attributable to the use of various bacteria and fungi, raw materials, and production processes. Developing a novel, rapid, and reliable method to identify multiple Chinese liquors is therefore valuable. This paper presents a pattern recognition system for classifying ten brands of Chinese liquors based on multidimensional scaling (MDS) and support vector machine (SVM) algorithms in a quartz crystal microbalance (QCM)-based electronic nose (e-nose) we designed. We evaluated the comprehensive performance of the MDS-SVM classifier in predicting each of the ten brands individually. Its prediction accuracy (98.3%) was superior to that of the back-propagation artificial neural network (BP-ANN) classifier (93.3%) and the moving average-linear discriminant analysis (MA-LDA) classifier (87.6%). The MDS-SVM classifier has reasonable reliability and good fitting and prediction (generalization) performance in classifying the Chinese liquors. Taking both the application of the e-nose and the validation of the MDS-SVM classifier into account, we have created a useful method for the classification of multiple Chinese liquors.
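
    The MDS step that precedes the SVM can be sketched with classical (Torgerson) multidimensional scaling; this is a generic illustration of the technique, not the paper's exact pipeline:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed an n-by-n distance matrix D into
    k dimensions via double centering and an eigendecomposition."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)             # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]           # keep the k largest
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

# Distances between three points on a line (coordinates 0, 1, 3):
D = np.array([[0., 1., 3.], [1., 0., 2.], [3., 2., 0.]])
X = classical_mds(D, k=1)                      # recovers the line, up to sign
```

    Feeding the embedded coordinates, rather than the raw sensor responses, to a standard classifier reproduces the MDS-then-SVM structure of the pipeline.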

  15. Optical recognition of statistical patterns

    NASA Astrophysics Data System (ADS)

    Lee, S. H.

    1981-12-01

    Optical implementation of the Fukunaga-Koontz transform (FKT) and the Least-Squares Linear Mapping Technique (LSLMT) is described. The FKT is a linear transformation which performs image feature extraction for a two-class image classification problem. The LSLMT performs a transform from a large-dimensional feature space to a small-dimensional decision space for separating multiple image classes by maximizing the interclass differences while minimizing the intraclass variations. The FKT and the LSLMT were optically implemented by utilizing a coded-phase optical processor. The transform was used for classifying birds and fish. After the F-K basis functions were calculated, those most useful for classification were incorporated into a computer-generated hologram. The output of the optical processor, consisting of the squared magnitudes of the F-K coefficients, was detected by a TV camera, digitized, and fed into a microcomputer for classification. A simple linear classifier based on only two F-K coefficients was able to separate the images into two classes, indicating that the F-K transform had chosen good features. Two advantages of optically implementing the FKT and LSLMT are parallel and real-time processing.

  16. Optical recognition of statistical patterns

    NASA Technical Reports Server (NTRS)

    Lee, S. H.

    1981-01-01

    Optical implementation of the Fukunaga-Koontz transform (FKT) and the Least-Squares Linear Mapping Technique (LSLMT) is described. The FKT is a linear transformation which performs image feature extraction for a two-class image classification problem. The LSLMT performs a transform from a large-dimensional feature space to a small-dimensional decision space for separating multiple image classes by maximizing the interclass differences while minimizing the intraclass variations. The FKT and the LSLMT were optically implemented by utilizing a coded-phase optical processor. The transform was used for classifying birds and fish. After the F-K basis functions were calculated, those most useful for classification were incorporated into a computer-generated hologram. The output of the optical processor, consisting of the squared magnitudes of the F-K coefficients, was detected by a TV camera, digitized, and fed into a microcomputer for classification. A simple linear classifier based on only two F-K coefficients was able to separate the images into two classes, indicating that the F-K transform had chosen good features. Two advantages of optically implementing the FKT and LSLMT are parallel and real-time processing.

  17. Automatic Cataract Hardness Classification Ex Vivo by Ultrasound Techniques.

    PubMed

    Caixinha, Miguel; Santos, Mário; Santos, Jaime

    2016-04-01

    To demonstrate the feasibility of a new methodology for cataract hardness characterization and automatic classification using ultrasound techniques, different cataract degrees were induced in 210 porcine lenses. A 25-MHz ultrasound transducer was used to obtain acoustical parameters (velocity and attenuation) and backscattering signals. B-scan and parametric Nakagami images were constructed. Ninety-seven parameters were extracted and subjected to a principal component analysis. Bayes, K-nearest-neighbours, Fisher linear discriminant and support vector machine (SVM) classifiers were used to automatically classify the different cataract severities. Statistically significant increases with cataract formation were found for velocity, attenuation, mean brightness intensity of the B-scan images and mean Nakagami m parameter (p < 0.01). The four classifiers showed good performance for healthy versus cataractous lenses (F-measure ≥ 92.68%), while for initial versus severe cataracts the SVM classifier showed the highest performance (90.62%). The results showed that ultrasound techniques can be used for non-invasive cataract hardness characterization and automatic classification. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  18. Suitability of the isolated chicken eye test for classification of extreme pH detergents and cleaning products.

    PubMed

    Cazelle, Elodie; Eskes, Chantra; Hermann, Martina; Jones, Penny; McNamee, Pauline; Prinsen, Menk; Taylor, Hannah; Wijnands, Marcel V W

    2015-04-01

    A.I.S.E. investigated the suitability of the regulatory adopted ICE in vitro test method (OECD TG 438) with or without histopathology to identify detergent and cleaning formulations having extreme pH that require classification as EU CLP/UN GHS Category 1. To this aim, 18 extreme pH detergent and cleaning formulations were tested covering both alkaline and acidic extreme pHs. The ICE standard test method following OECD Test Guideline 438 showed good concordance with in vivo classification (83%) and good and balanced specificity and sensitivity values (83%) which are in line with the performances of currently adopted in vitro test guidelines, confirming its suitability to identify Category 1 extreme pH detergent and cleaning products. In contrast to previous findings obtained with non-extreme pH formulations, the use of histopathology did not improve the sensitivity of the assay whilst it strongly decreased its specificity for the extreme pH formulations. Furthermore, use of non-testing prediction rules for classification showed poor concordance values (33% for the extreme pH rule and 61% for the EU CLP additivity approach) with high rates of over-prediction (100% for the extreme pH rule and 50% for the additivity approach), indicating that these non-testing prediction rules are not suitable to predict Category 1 hazards of extreme pH detergent and cleaning formulations. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Efficient Implementation of High Order Inverse Lax-Wendroff Boundary Treatment for Conservation Laws

    DTIC Science & Technology

    2011-07-15

    ...with or without source terms representing chemical reactions in detonations. The results demonstrate the designed fifth-order accuracy, stability, and...good performance for problems involving complicated interactions between detonation/shock waves and solid boundaries. AMS subject classification... Keywords: detonation; no-penetration conditions.

  20. Automatic Modulation Classification of Common Communication and Pulse Compression Radar Waveforms using Cyclic Features

    DTIC Science & Technology

    2013-03-01

    intermediate frequency LFM linear frequency modulation MAP maximum a posteriori MATLAB® matrix laboratory ML maximum likelihood OFDM orthogonal frequency...spectrum, frequency hopping, and orthogonal frequency division multiplexing (OFDM) modulations. Feature analysis would be a good research thrust to...determine feature relevance and decide if removing any features improves performance. Also, extending the system for simulations using a MIMO receiver or

  1. Classifying publications from the clinical and translational science award program along the translational research spectrum: a machine learning approach.

    PubMed

    Surkis, Alisa; Hogle, Janice A; DiazGranados, Deborah; Hunt, Joe D; Mazmanian, Paul E; Connors, Emily; Westaby, Kate; Whipple, Elizabeth C; Adamus, Trisha; Mueller, Meridith; Aphinyanaphongs, Yindalon

    2016-08-05

    Translational research is a key area of focus of the National Institutes of Health (NIH), as demonstrated by the substantial investment in the Clinical and Translational Science Award (CTSA) program. The goal of the CTSA program is to accelerate the translation of discoveries from the bench to the bedside and into communities. Different classification systems have been used to capture the spectrum of basic to clinical to population health research, with substantial differences in the number of categories and their definitions. Evaluation of the effectiveness of the CTSA program and of translational research in general is hampered by the lack of rigor in these definitions and their application. This study adds rigor to the classification process by creating a checklist to evaluate publications across the translational spectrum and operationalizes these classifications by building machine learning-based text classifiers to categorize these publications. Based on collaboratively developed definitions, we created a detailed checklist for categories along the translational spectrum from T0 to T4. We applied the checklist to CTSA-linked publications to construct a set of coded publications for use in training machine learning-based text classifiers to classify publications within these categories. The training sets combined T1/T2 and T3/T4 categories due to low frequency of these publication types compared to the frequency of T0 publications. We then compared classifier performance across different algorithms and feature sets and applied the classifiers to all publications in PubMed indexed to CTSA grants. To validate the algorithm, we manually classified the articles with the top 100 scores from each classifier. The definitions and checklist facilitated classification and resulted in good inter-rater reliability for coding publications for the training set. 
Very good performance was achieved for the classifiers, as represented by the area under the receiver operating characteristic curve (AUC), with an AUC of 0.94 for the T0 classifier, 0.84 for T1/T2, and 0.92 for T3/T4. The combination of definitions agreed upon by five CTSA hubs, a checklist that facilitates more uniform interpretation of the definitions, and algorithms that perform well in classifying publications along the translational spectrum provides a basis for establishing and applying uniform definitions of translational research categories. The classification algorithms allow publication analyses that would not be feasible with manual classification, such as assessing the distribution and trends of publications across the CTSA network and comparing the categories of publications and their citations to assess knowledge transfer across the translational research spectrum.
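
    AUC values like those reported above can be computed from classifier scores with the rank-based (Mann-Whitney) identity; a minimal sketch with made-up scores:

```python
def roc_auc(scores, labels):
    """AUC via the Mann-Whitney identity: the probability that a randomly
    chosen positive is scored above a randomly chosen negative (ties count
    half). labels are 1 (positive) or 0 (negative)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A perfect ranking gives AUC 1.0; a random one hovers around 0.5.
print(roc_auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # prints 1.0
```

    This rank formulation is threshold-free, which is why AUC is a natural summary when, as here, the top-scored articles are then manually validated.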

  2. Fast classification of hazelnut cultivars through portable infrared spectroscopy and chemometrics

    NASA Astrophysics Data System (ADS)

    Manfredi, Marcello; Robotti, Elisa; Quasso, Fabio; Mazzucco, Eleonora; Calabrese, Giorgio; Marengo, Emilio

    2018-01-01

    The authentication and traceability of hazelnuts are very important for both the consumer and the food industry, to safeguard protected varieties and food quality. This study investigates the use of a portable FTIR spectrometer coupled with multivariate statistical analysis for the classification of raw hazelnuts. The method discriminates hazelnuts from different origins/cultivars based on differences in the signal intensities of their IR spectra. The multivariate classification methods, namely principal component analysis (PCA) followed by linear discriminant analysis (LDA), and partial least squares discriminant analysis (PLS-DA), with or without variable selection, allowed a very good discrimination among the groups, with PLS-DA coupled with variable selection providing the best results. Owing to its fast analysis, high sensitivity, simplicity, and lack of sample preparation, the proposed analytical methodology could be successfully used to verify the cultivar of hazelnuts, and the analysis can be performed quickly and directly on site.

  3. Classification of cancerous cells based on the one-class problem approach

    NASA Astrophysics Data System (ADS)

    Murshed, Nabeel A.; Bortolozzi, Flavio; Sabourin, Robert

    1996-03-01

    One of the most important factors in reducing the effect of cancerous diseases is early diagnosis, which requires a good and robust method. With the advancement of computer technologies and digital image processing, the development of a computer-based system has become feasible. In this paper, we introduce a new approach for the detection of cancerous cells, based on the one-class problem approach, through which the classification system need only be trained with patterns of cancerous cells. This reduces the burden of the training task by about 50%. Based on this approach, a computer-based classification system is developed using Fuzzy ARTMAP neural networks. Experiments were performed using a set of 542 patterns taken from a sample of breast cancer. The results show 98% correct identification of cancerous cells and 95% correct identification of non-cancerous cells.

  4. Low Altitude AVIRIS Data for Mapping Land Cover in Yellowstone National Park: Use of Isodata Clustering Techniques

    NASA Technical Reports Server (NTRS)

    Spruce, Joe

    2001-01-01

    Yellowstone National Park (YNP) contains a diversity of land cover. YNP managers need site-specific land cover maps, which may be produced more effectively using high-resolution hyperspectral imagery. ISODATA clustering techniques have aided operational multispectral image classification and may benefit certain hyperspectral data applications if optimally applied. In response, a study was performed for an area in northeast YNP using 11 select bands of low-altitude AVIRIS data calibrated to ground reflectance. These data were subjected to ISODATA clustering and Maximum Likelihood Classification techniques to produce a moderately detailed land cover map. The latter has good apparent overall agreement with field surveys and aerial photo interpretation.

  5. Microsurgical reconstruction of large nerve defects using autologous nerve grafts.

    PubMed

    Daoutis, N K; Gerostathopoulos, N E; Efstathopoulos, D G; Misitizis, D P; Bouchlis, G N; Anagnostou, S K

    1994-01-01

    Between 1986 and 1993, 643 patients with peripheral nerve trauma were treated in our clinic. Primary neurorrhaphy was performed in 431 of these patients and nerve grafting in 212 patients. We present the functional results after nerve grafting in 93 patients with large nerve defects who were followed for more than 2 years. Evaluation of function was based on the Medical Research Council (MRC) classification for motor and sensory recovery. Factors affecting functional outcome, such as age of the patient, denervation time, length of the defect, and level of the injury, were noted. Good results according to the MRC classification were obtained in the majority of cases, although function remained less than that of the uninjured side.

  6. Boosting bonsai trees for handwritten/printed text discrimination

    NASA Astrophysics Data System (ADS)

    Ricquebourg, Yann; Raymond, Christian; Poirriez, Baptiste; Lemaitre, Aurélie; Coüasnon, Bertrand

    2013-12-01

    Boosting over decision stumps has proved efficient in natural language processing, essentially with symbolic features, and its good properties (speed, few non-critical parameters, robustness to over-fitting) make it of great interest for the numeric world of pixel images. In this article we investigate the use of boosting over small decision trees for image classification, applied to the discrimination of handwritten versus printed text. Experiments comparing it to the usual SVM-based classification reveal convincing results, with very close performance but faster predictions and far less black-box behavior. These promising results encourage the use of this classifier in more complex recognition tasks such as multiclass problems.

  7. Enlisted MOS Suitable for the Physically Handicapped

    DTIC Science & Technology

    1958-12-01

    highly motivated in a mobilization situation. The primary reason for using this approach was the need for a standardized classification system which...the classification process. The solution was to consider all disabilities as "severe" and motivation to perform "good." Thus any individual disabled...

  8. LICRE: unsupervised feature correlation reduction for lipidomics.

    PubMed

    Wong, Gerard; Chan, Jeffrey; Kingwell, Bronwyn A; Leckie, Christopher; Meikle, Peter J

    2014-10-01

    Recent advances in high-throughput lipid profiling by liquid chromatography electrospray ionization tandem mass spectrometry (LC-ESI-MS/MS) have made it possible to quantify hundreds of individual molecular lipid species (e.g. fatty acyls, glycerolipids, glycerophospholipids, sphingolipids) in a single experimental run for hundreds of samples. This enables the lipidomes of large cohorts of subjects to be profiled to identify lipid biomarkers significantly associated with disease risk, progression and treatment response. Clinically, these lipid biomarkers can be used to construct classification models for the purpose of disease screening or diagnosis. However, the inclusion of a large number of highly correlated biomarkers within a model may reduce classification performance, unnecessarily inflate the associated costs of a diagnosis or a screen and reduce the feasibility of clinical translation. Our unsupervised feature reduction approach, LICRE, reduces feature redundancy in lipidomic biomarkers by limiting the number of highly correlated lipids while retaining informative features to achieve good classification performance for various clinical outcomes. Good predictive models based on a reduced number of biomarkers are also more cost effective and feasible from a clinical translation perspective. The application of LICRE to various lipidomic datasets in diabetes and cardiovascular disease demonstrated superior discrimination in terms of the area under the receiver operating characteristic curve while using fewer lipid markers when predicting various clinical outcomes. The MATLAB implementation of LICRE is available from http://ww2.cs.mu.oz.au/~gwong/LICRE © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
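
    The core idea, dropping lipids that are highly correlated with features already retained, can be illustrated with a greedy Pearson-correlation filter. This is a simplified stand-in, not the actual LICRE algorithm (names and data are made up):

```python
def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def reduce_correlated(features, threshold=0.9):
    """Greedily keep a feature only if its |correlation| with every
    already-kept feature stays below the threshold."""
    kept = []
    for name, values in features:
        if all(abs(pearson(values, v)) < threshold for _, v in kept):
            kept.append((name, values))
    return [name for name, _ in kept]

features = [("lipid_A", [1, 2, 3, 4]),
            ("lipid_B", [2, 4, 6, 8]),   # perfectly correlated with lipid_A
            ("lipid_C", [4, 1, 3, 2])]
print(reduce_correlated(features))  # ['lipid_A', 'lipid_C']
```

    The retained subset can then be passed to any downstream classifier, which is how such a filter keeps biomarker panels small and cheap.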

  9. Intra- and Interobserver Reliability of Three Classification Systems for Hallux Rigidus.

    PubMed

    Dillard, Sarita; Schilero, Christina; Chiang, Sharon; Pham, Peter

    2018-04-18

    There are over ten classification systems currently used in the staging of hallux rigidus, which results in confusion and inconsistency in radiographic interpretation and treatment. The reliability of hallux rigidus classification systems has not yet been tested. The purpose of this study was to evaluate intra- and interobserver reliability of three commonly used classifications for hallux rigidus. Twenty-one plain radiograph sets were presented to ten ACFAS board-certified foot and ankle surgeons. Each physician classified each radiograph based on clinical experience and knowledge according to the Regnauld, Roukis, and Hattrup and Johnson classification systems. The two-way mixed single-measure consistency intraclass correlation was used to calculate intra- and interrater reliability. The intrarater reliability of individual sets for the Roukis and Hattrup and Johnson classification systems was "fair to good" (Roukis, 0.62±0.19; Hattrup and Johnson, 0.62±0.28), whereas the intrarater reliability of individual sets for the Regnauld system bordered between "fair to good" and "poor" (0.43±0.24). The interrater reliability of the mean classification was "excellent" for all three classification systems. In conclusion, reliable and reproducible classification systems are essential for treatment and prognostic implications in hallux rigidus. In our study, the Roukis classification system had the best intrarater reliability. Although there are various classification systems for hallux rigidus, our results indicate that all three of these systems show reliability and reproducibility.

  10. Shape Modification and Size Classification of Microcrystalline Graphite Powder as Anode Material for Lithium-Ion Batteries

    NASA Astrophysics Data System (ADS)

    Wang, Cong; Gai, Guosheng; Yang, Yufen

    2018-03-01

    Natural microcrystalline graphite (MCG) composed of many crystallites is a promising new anode material for lithium-ion batteries (LiBs) and has received considerable attention from researchers. MCG with narrow particle size distribution and high sphericity exhibits excellent electrochemical performance. A nonaddition process to prepare natural MCG as a high-performance LiB anode material is described. First, raw MCG was broken into smaller particles using a pulverization system. Then, the particles were modified into near-spherical shape using a particle shape modification system. Finally, the particle size distribution was narrowed using a centrifugal rotor classification system. The products with uniform hemispherical shape and narrow size distribution had mean particle size of approximately 9 μm, 10 μm, 15 μm, and 20 μm. Additionally, the innovative pilot experimental process increased the product yield of the raw material. Finally, the electrochemical performance of the prepared MCG was tested, revealing high reversible capacity and good cyclability.

  11. Automatic Analysis of Pronunciations for Children with Speech Sound Disorders.

    PubMed

    Dudy, Shiran; Bedrick, Steven; Asgari, Meysam; Kain, Alexander

    2018-07-01

    Computer-Assisted Pronunciation Training (CAPT) systems aim to help a child learn the correct pronunciations of words. However, while there are many online commercial CAPT apps, there is no consensus among Speech Language Therapists (SLPs) or non-professionals about which CAPT systems, if any, work well. The prevailing assumption is that practicing with such programs is less reliable and thus does not provide the feedback necessary to allow children to improve their performance. The most common method for assessing pronunciation performance is the Goodness of Pronunciation (GOP) technique. Our paper proposes two new GOP techniques. We have found that pronunciation models that use explicit knowledge about error pronunciation patterns can lead to more accurate classification of whether a phoneme was correctly pronounced. We evaluate the proposed pronunciation assessment methods against a baseline state-of-the-art GOP approach, and show that the proposed techniques lead to classification performance that is closer to that of a human expert.
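    As a rough illustration of the baseline technique the abstract mentions (not the paper's proposed variants), the classic GOP score averages the frame-level log posterior of the intended phoneme; the decision threshold below is a hypothetical value for illustration only:

```python
import math

def goodness_of_pronunciation(frame_posteriors, target_phoneme):
    """Frame-averaged log-posterior GOP score (a common textbook
    formulation; the paper's methods additionally exploit explicit
    error-pattern knowledge, which is omitted here).

    frame_posteriors: list of dicts mapping phoneme -> posterior
                      probability for each acoustic frame.
    target_phoneme:   the phoneme the child was supposed to produce.
    """
    total = 0.0
    for frame in frame_posteriors:
        p = frame.get(target_phoneme, 1e-10)  # floor to avoid log(0)
        total += math.log(p)
    return total / len(frame_posteriors)

def classify_pronunciation(frame_posteriors, target_phoneme, threshold=-1.0):
    # A score near 0 means the recognizer was confident in the target
    # phoneme; a very negative score flags a likely mispronunciation.
    # The threshold would in practice be tuned on labeled recordings.
    return goodness_of_pronunciation(frame_posteriors, target_phoneme) >= threshold
```

    A correct /s/ produced with high recognizer confidence scores near zero and passes the threshold, while a substituted phoneme (e.g. /th/ for /s/) yields a strongly negative score.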

  12. Hyperspectral Image Classification for Land Cover Based on an Improved Interval Type-II Fuzzy C-Means Approach

    PubMed Central

    Li, Zhao-Liang

    2018-01-01

    Few studies have examined hyperspectral remote-sensing image classification with type-II fuzzy sets. This paper addresses image classification based on a hyperspectral remote-sensing technique using an improved interval type-II fuzzy c-means (IT2FCM*) approach. In this study, in contrast to other traditional fuzzy c-means-based approaches, the IT2FCM* algorithm considers the ranking of interval numbers and the spectral uncertainty. The classification results based on a hyperspectral dataset using the FCM, IT2FCM, and the proposed improved IT2FCM* algorithms show that the IT2FCM* method gives the best performance in terms of clustering accuracy. In this paper, in order to validate and demonstrate the separability of the IT2FCM*, four type-I fuzzy validity indexes are employed, and a comparative analysis of these indexes as applied to the FCM and IT2FCM methods is also made. These four indexes are further applied to datasets of different spatial and spectral resolutions to analyze the effects of spectral and spatial scaling factors on the separability of the FCM, IT2FCM, and IT2FCM* methods. The results of these validity indexes on the hyperspectral datasets show that the improved IT2FCM* algorithm has the best values among the three algorithms in general. The results demonstrate that the IT2FCM* exhibits good performance in hyperspectral remote-sensing image classification because of its ability to handle hyperspectral uncertainty. PMID:29373548
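    For orientation, the type-I fuzzy c-means baseline that the IT2FCM family generalizes can be sketched in a few lines; interval type-II variants maintain lower and upper memberships from two fuzzifiers and rank the resulting intervals, which is omitted in this sketch:

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, seed=0):
    """Plain (type-I) fuzzy c-means on pixel spectra X of shape (n, d).
    Returns cluster centers (c, d) and the membership matrix U (c, n)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)                       # memberships sum to 1 per pixel
    for _ in range(n_iter):
        Um = U ** m                          # fuzzified memberships
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        # squared distance from every pixel to every cluster center
        d2 = ((X[None, :, :] - centers[:, None, :]) ** 2).sum(axis=2)
        d2 = np.fmax(d2, 1e-12)              # guard against zero distance
        # membership update: u_ik proportional to d_ik^(-1/(m-1))
        inv = d2 ** (-1.0 / (m - 1))
        U = inv / inv.sum(axis=0)
    return centers, U
```

    On well-separated spectra, the hard labels obtained by taking each pixel's maximum membership recover the underlying clusters.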

  13. Quality grading of Atlantic salmon (Salmo salar) by computer vision.

    PubMed

    Misimi, E; Erikson, U; Skavhaug, A

    2008-06-01

    In this study, we present a promising method for computer vision-based quality grading of whole Atlantic salmon (Salmo salar). Using computer vision, it was possible to differentiate among different quality grades of Atlantic salmon based on the external geometrical information contained in the fish images. Initially, before image acquisition, the fish were subjectively graded and labeled into grading classes by a qualified human inspector in the processing plant. Prior to classification, the salmon images were segmented into binary images, and feature extraction was then performed on the geometrical parameters of the fish from the grading classes. The classification algorithm was a threshold-based classifier designed using linear discriminant analysis. The performance of the classifier was tested using the leave-one-out cross-validation method, and the classification results showed good agreement between the classifications made by the human inspectors and by computer vision. The computer vision-based method correctly classified 90% of the salmon from the data set, as compared with the classification by the human inspector. Overall, it was shown that computer vision can be used as a powerful tool to grade Atlantic salmon into quality grades in a fast and nondestructive manner with a relatively simple classifier algorithm. The low cost of implementing today's advanced computer vision solutions makes this method feasible for industrial purposes in fish plants, as it can replace the manual labor on which grading tasks still rely.
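    The evaluation step can be illustrated with a toy version: a one-dimensional threshold classifier (a simplified stand-in for the LDA-designed rule, applied to a hypothetical geometric feature such as projected fish area) scored with leave-one-out cross-validation:

```python
def threshold_classifier(train, labels):
    """Midpoint-of-class-means threshold on a single scalar feature.
    Labels are 0/1; the classifier handles either class being larger."""
    a = [x for x, y in zip(train, labels) if y == 0]
    b = [x for x, y in zip(train, labels) if y == 1]
    t = (sum(a) / len(a) + sum(b) / len(b)) / 2.0
    flip = sum(b) / len(b) < sum(a) / len(a)   # class 1 below threshold?
    return lambda x: int((x >= t) != flip)

def leave_one_out_accuracy(X, y):
    """Leave-one-out cross-validation: train on n-1 samples, test on
    the held-out one, and repeat for every sample."""
    hits = 0
    for i in range(len(X)):
        clf = threshold_classifier(X[:i] + X[i+1:], y[:i] + y[i+1:])
        hits += clf(X[i]) == y[i]
    return hits / len(X)
```

    Leave-one-out is attractive for small labeled sets like the 21-to-100-fish scale used here, since every sample serves as a test case exactly once.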

  14. A comparison of autonomous techniques for multispectral image analysis and classification

    NASA Astrophysics Data System (ADS)

    Valdiviezo-N., Juan C.; Urcid, Gonzalo; Toxqui-Quitl, Carina; Padilla-Vivanco, Alfonso

    2012-10-01

    Multispectral imaging has given rise to important applications related to the classification and identification of objects in a scene. Because multispectral instruments can be used to estimate the reflectance of materials in the scene, these techniques constitute fundamental tools for materials analysis and quality control. Over the last years, a variety of algorithms have been developed to work with multispectral data, whose main purpose has been the correct classification of the objects in the scene. The present study gives a brief review of some classical techniques, as well as a novel one, that have been used for such purposes. The use of principal component analysis and K-means clustering as important classification algorithms is discussed here. Moreover, a recent method based on the min-W and max-M lattice auto-associative memories, originally proposed for endmember determination in hyperspectral imagery, is introduced as a classification method. Besides a discussion of their mathematical foundation, we emphasize their main characteristics and the results achieved for two exemplar images composed of objects similar in appearance but spectrally different. The classification results show that the first components computed from principal component analysis can be used to highlight areas with different spectral characteristics. In addition, the use of lattice auto-associative memories provides good results for materials classification even in cases where some similarities appear in the spectral responses.
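    A minimal sketch of the two classical techniques discussed (PCA to highlight spectral differences, then k-means to group pixels) under the assumption that each row of the data matrix is one pixel's spectrum; the lattice auto-associative memory method does not fit in a few lines and is not shown:

```python
import numpy as np

def pca(X, k):
    """Project spectra onto the first k principal components,
    computed via SVD of the mean-centered data matrix."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def kmeans(X, k, n_iter=50):
    """Plain Lloyd's k-means on the reduced features, with greedy
    farthest-point seeding so the sketch is deterministic."""
    centers = [X[0]]
    while len(centers) < k:
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[d2.argmax()])
    centers = np.array(centers)
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```

    Running k-means in the reduced PCA space, rather than on the raw bands, is the standard way to combine the two steps.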

  15. Dynamic extreme learning machine and its approximation capability.

    PubMed

    Zhang, Rui; Lan, Yuan; Huang, Guang-Bin; Xu, Zong-Ben; Soh, Yeng Chai

    2013-12-01

    Extreme learning machines (ELMs) have been proposed for generalized single-hidden-layer feedforward networks whose hidden nodes need not be neuron-like, and they perform well in both regression and classification applications. The problem of determining suitable network architectures is recognized to be crucial to the successful application of ELMs. This paper first proposes a dynamic ELM (D-ELM) in which hidden nodes can be recruited or deleted dynamically according to their significance to network performance, so that not only can the parameters be adjusted but the architecture can also be self-adapted simultaneously. Then, this paper proves in theory that such a D-ELM using Lebesgue p-integrable hidden activation functions can approximate any Lebesgue p-integrable function on a compact input set. Simulation results obtained over various test problems demonstrate and verify that the proposed D-ELM does a good job of reducing the network size while preserving good generalization performance.
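    The fixed-architecture ELM at the core of this line of work can be sketched as a random hidden layer followed by a Moore-Penrose least-squares solve for the output weights; the D-ELM's dynamic recruitment and deletion of hidden nodes by significance is beyond this sketch:

```python
import numpy as np

def elm_train(X, Y, n_hidden=20, seed=0):
    """Basic ELM training: random input weights and biases are fixed,
    the hidden activation matrix H is computed once, and the output
    weights beta are obtained by a least-squares (pseudoinverse) solve."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))    # sigmoid hidden layer
    beta = np.linalg.pinv(H) @ Y              # Moore-Penrose solution
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

    Because only the output layer is solved for, training reduces to one linear-algebra call, which is the source of ELM's speed.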

  16. Peak Detection Method Evaluation for Ion Mobility Spectrometry by Using Machine Learning Approaches

    PubMed Central

    Hauschild, Anne-Christin; Kopczynski, Dominik; D’Addario, Marianna; Baumbach, Jörg Ingo; Rahmann, Sven; Baumbach, Jan

    2013-01-01

    Ion mobility spectrometry with pre-separation by multi-capillary columns (MCC/IMS) has become an established inexpensive, non-invasive bioanalytics technology for detecting volatile organic compounds (VOCs) with various metabolomics applications in medical research. To pave the way for this technology towards daily usage in medical practice, different steps still have to be taken. With respect to modern biomarker research, one of the most important tasks is the automatic classification of patient-specific data sets into different groups, healthy or not, for instance. Although sophisticated machine learning methods exist, an inevitable preprocessing step is reliable and robust peak detection without manual intervention. In this work we evaluate four state-of-the-art approaches for automated IMS-based peak detection: local maxima search, watershed transformation with IPHEx, region-merging with VisualNow, and peak model estimation (PME). We manually generated a gold standard with the aid of a domain expert (manual) and compare the performance of the four peak calling methods with respect to two distinct criteria. We first utilize established machine learning methods and systematically study their classification performance based on the four peak detectors’ results. Second, we investigate the classification variance and robustness regarding perturbation and overfitting. Our main finding is that the power of the classification accuracy is almost equally good for all methods, the manually created gold standard as well as the four automatic peak finding methods. In addition, we note that all tools, manual and automatic, are similarly robust against perturbations. However, the classification performance is more robust against overfitting when using the PME as peak calling preprocessor. In summary, we conclude that all methods, though small differences exist, are largely reliable and enable a wide spectrum of real-world biomedical applications. PMID:24957992

  17. Peak detection method evaluation for ion mobility spectrometry by using machine learning approaches.

    PubMed

    Hauschild, Anne-Christin; Kopczynski, Dominik; D'Addario, Marianna; Baumbach, Jörg Ingo; Rahmann, Sven; Baumbach, Jan

    2013-04-16

    Ion mobility spectrometry with pre-separation by multi-capillary columns (MCC/IMS) has become an established inexpensive, non-invasive bioanalytics technology for detecting volatile organic compounds (VOCs) with various metabolomics applications in medical research. To pave the way for this technology towards daily usage in medical practice, different steps still have to be taken. With respect to modern biomarker research, one of the most important tasks is the automatic classification of patient-specific data sets into different groups, healthy or not, for instance. Although sophisticated machine learning methods exist, an inevitable preprocessing step is reliable and robust peak detection without manual intervention. In this work we evaluate four state-of-the-art approaches for automated IMS-based peak detection: local maxima search, watershed transformation with IPHEx, region-merging with VisualNow, and peak model estimation (PME). We manually generated a gold standard with the aid of a domain expert (manual) and compare the performance of the four peak calling methods with respect to two distinct criteria. We first utilize established machine learning methods and systematically study their classification performance based on the four peak detectors' results. Second, we investigate the classification variance and robustness regarding perturbation and overfitting. Our main finding is that the power of the classification accuracy is almost equally good for all methods, the manually created gold standard as well as the four automatic peak finding methods. In addition, we note that all tools, manual and automatic, are similarly robust against perturbations. However, the classification performance is more robust against overfitting when using the PME as peak calling preprocessor. In summary, we conclude that all methods, though small differences exist, are largely reliable and enable a wide spectrum of real-world biomedical applications.

  18. Lock Wall Expedient Repair Demonstration Monitoring, John T. Myers Locks and Dam, Ohio River

    DTIC Science & Technology

    2011-10-01

    original condition. Complete confinement of the concrete within the armor appears to provide good resistance to impact and abrasion (Figure 29). ERDC... resistance to impact and abrasion. Synopsis General classifications of observed damage were described and, where repairs are considered necessary or...against abrasion, fire, and environmental attacks and to improve the adhesion to other construction materials. For high-weatherproof performance

  19. On Classification in the Study of Failure, and a Challenge to Classifiers

    NASA Technical Reports Server (NTRS)

    Wasson, Kimberly S.

    2003-01-01

    Classification schemes are abundant in the literature of failure. They serve a number of purposes, some more successfully than others. We examine several classification schemes constructed for various purposes relating to failure and its investigation, and discuss their values and limits. The analysis results in a continuum of uses for classification schemes, that suggests that the value of certain properties of these schemes is dependent on the goals a classification is designed to forward. The contrast in the value of different properties for different uses highlights a particular shortcoming: we argue that while humans are good at developing one kind of scheme: dynamic, flexible classifications used for exploratory purposes, we are not so good at developing another: static, rigid classifications used to trap and organize data for specific analytic goals. Our lack of strong foundation in developing valid instantiations of the latter impedes progress toward a number of investigative goals. This shortcoming and its consequences pose a challenge to researchers in the study of failure: to develop new methods for constructing and validating static classification schemes of demonstrable value in promoting the goals of investigations. We note current productive activity in this area, and outline foundations for more.

  20. Comparison between extreme learning machine and wavelet neural networks in data classification

    NASA Astrophysics Data System (ADS)

    Yahia, Siwar; Said, Salwa; Jemai, Olfa; Zaied, Mourad; Ben Amar, Chokri

    2017-03-01

    Extreme Learning Machine is a well-known learning algorithm in the field of machine learning. It is a feed-forward neural network with a single hidden layer, and an extremely fast learning algorithm with good generalization performance. In this paper, we aim to compare the Extreme Learning Machine with wavelet neural networks, another widely used approach. We used six benchmark data sets to evaluate each technique: Wisconsin Breast Cancer, Glass Identification, Ionosphere, Pima Indians Diabetes, Wine Recognition, and Iris Plant. Experimental results show that both the Extreme Learning Machine and wavelet neural networks achieve good results.

  1. Learning relevant features of data with multi-scale tensor networks

    NASA Astrophysics Data System (ADS)

    Miles Stoudenmire, E.

    2018-07-01

    Inspired by coarse-graining approaches used in physics, we show how similar algorithms can be adapted for data. The resulting algorithms are based on layered tree tensor networks and scale linearly with both the dimension of the input and the training set size. Computing most of the layers with an unsupervised algorithm, then optimizing just the top layer for supervised classification of the MNIST and fashion MNIST data sets gives very good results. We also discuss mixing a prior guess for supervised weights together with an unsupervised representation of the data, yielding a smaller number of features nevertheless able to give good performance.

  2. A neural network detection model of spilled oil based on the texture analysis of SAR image

    NASA Astrophysics Data System (ADS)

    An, Jubai; Zhu, Lisong

    2006-01-01

    A Radial Basis Function Neural Network (RBFNN) model is investigated for the detection of spilled oil based on texture analysis of SAR imagery. In this paper, to take advantage of the abundant texture information in SAR imagery, texture features are extracted by both the wavelet transform and the Gray Level Co-occurrence Matrix. The RBFNN model is fed with a vector of these texture features, and is trained and tested on the sample data set of feature vectors. Finally, a SAR image is classified by this model. The classification results for a spilled-oil SAR image show a classification accuracy for oil spill of 86.2% when the RBFNN model uses both wavelet texture and gray-level texture, versus 78.0% when the same RBFNN model uses only wavelet texture as input. The model using both the wavelet transform and the Gray Level Co-occurrence Matrix is thus more effective than the one using wavelet texture alone. Furthermore, it preserves the complex neighborhood information and shows good classification performance.
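    The gray-level half of the feature vector can be sketched with a small co-occurrence matrix implementation (a single pixel offset only; real pipelines quantize the image, average several offsets, and add the wavelet features):

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray Level Co-occurrence Matrix for one pixel offset (dx, dy),
    normalized to a joint probability table. Pixel values must be
    integers in [0, levels)."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

def texture_features(P):
    """A few classic Haralick-style statistics used as classifier inputs."""
    i, j = np.indices(P.shape)
    return {
        "contrast":    float(((i - j) ** 2 * P).sum()),
        "energy":      float((P ** 2).sum()),
        "homogeneity": float((P / (1.0 + np.abs(i - j))).sum()),
    }
```

    A flat region yields zero contrast and maximal energy, while a checkerboard yields high contrast, which is the kind of separation a texture-based oil-slick detector relies on.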

  3. Examining the Disability Model From the International Classification of Functioning, Disability, and Health Using a Large Data Set of Community-Dwelling Malaysian Older Adults

    PubMed Central

    Loke, Seng Cheong; Lim, Wee Shiong; Someya, Yoshiko; Hamid, Tengku A.; Nudin, Siti S. H.

    2015-01-01

    Objective: This study examines the International Classification of Functioning, Disability, and Health (ICF) model using a data set of 2,563 community-dwelling elderly with disease-independent measures of mobility, physical activity, and social networking to represent ICF constructs. Method: The relationship between chronic disease and disability (independent and dependent variables) was examined using logistic regression. To demonstrate variability in activity performance with functional impairment, graphing was used. The relationship between functional impairment, activity performance, and social participation was examined graphically and using ANOVA. The impact of cognitive deficits was quantified through stratifying by dementia. Results: Disability is strongly related to chronic disease (Wald 25.5, p < .001), and functional impairment is strongly related to activity performance (F = 34.2, p < .001) and social participation (F = 43.6, p < .001). With good function, there is considerable variability in activity performance (inter-quartile range [IQR] = 2.00), but this diminishes with high impairment (IQR = 0.00), especially with cognitive deficits. Discussion: Environment modification benefits those with moderate functional impairment, but not with higher grades of functional loss. PMID:26472747

  4. Ensemble Classification of Alzheimer's Disease and Mild Cognitive Impairment Based on Complex Graph Measures from Diffusion Tensor Images

    PubMed Central

    Ebadi, Ashkan; Dalboni da Rocha, Josué L.; Nagaraju, Dushyanth B.; Tovar-Moll, Fernanda; Bramati, Ivanei; Coutinho, Gabriel; Sitaram, Ranganatha; Rashidi, Parisa

    2017-01-01

    The human brain is a complex network of interacting regions. The gray matter regions of brain are interconnected by white matter tracts, together forming one integrative complex network. In this article, we report our investigation about the potential of applying brain connectivity patterns as an aid in diagnosing Alzheimer's disease and Mild Cognitive Impairment (MCI). We performed pattern analysis of graph theoretical measures derived from Diffusion Tensor Imaging (DTI) data representing structural brain networks of 45 subjects, consisting of 15 patients of Alzheimer's disease (AD), 15 patients of MCI, and 15 healthy subjects (CT). We considered pair-wise class combinations of subjects, defining three separate classification tasks, i.e., AD-CT, AD-MCI, and CT-MCI, and used an ensemble classification module to perform the classification tasks. Our ensemble framework with feature selection shows a promising performance with classification accuracy of 83.3% for AD vs. MCI, 80% for AD vs. CT, and 70% for MCI vs. CT. Moreover, our findings suggest that AD can be related to graph measures abnormalities at Brodmann areas in the sensorimotor cortex and piriform cortex. In this way, node redundancy coefficient and load centrality in the primary motor cortex were recognized as good indicators of AD in contrast to MCI. In general, load centrality, betweenness centrality, and closeness centrality were found to be the most relevant network measures, as they were the top identified features at different nodes. The present study can be regarded as a “proof of concept” about a procedure for the classification of MRI markers between AD dementia, MCI, and normal old individuals, due to the small and not well-defined groups of AD and MCI patients. Future studies with larger samples of subjects and more sophisticated patient exclusion criteria are necessary toward the development of a more precise technique for clinical diagnosis. PMID:28293162

  5. Application of Machine Learning to Arterial Spin Labeling in Mild Cognitive Impairment and Alzheimer Disease.

    PubMed

    Collij, Lyduine E; Heeman, Fiona; Kuijer, Joost P A; Ossenkoppele, Rik; Benedictus, Marije R; Möller, Christiane; Verfaillie, Sander C J; Sanz-Arigita, Ernesto J; van Berckel, Bart N M; van der Flier, Wiesje M; Scheltens, Philip; Barkhof, Frederik; Wink, Alle Meije

    2016-12-01

    Purpose To investigate whether multivariate pattern recognition analysis of arterial spin labeling (ASL) perfusion maps can be used for classification and single-subject prediction of patients with Alzheimer disease (AD) and mild cognitive impairment (MCI) and subjects with subjective cognitive decline (SCD) after using the W score method to remove confounding effects of sex and age. Materials and Methods Pseudocontinuous 3.0-T ASL images were acquired in 100 patients with probable AD; 60 patients with MCI, of whom 12 remained stable, 12 were converted to a diagnosis of AD, and 36 had no follow-up; 100 subjects with SCD; and 26 healthy control subjects. The AD, MCI, and SCD groups were divided into a sex- and age-matched training set (n = 130) and an independent prediction set (n = 130). Standardized perfusion scores adjusted for age and sex (W scores) were computed per voxel for each participant. Training of a support vector machine classifier was performed with diagnostic status and perfusion maps. Discrimination maps were extracted and used for single-subject classification in the prediction set. Prediction performance was assessed with receiver operating characteristic (ROC) analysis to generate an area under the ROC curve (AUC) and sensitivity and specificity distribution. Results Single-subject diagnosis in the prediction set by using the discrimination maps yielded excellent performance for AD versus SCD (AUC, 0.96; P < .01), good performance for AD versus MCI (AUC, 0.89; P < .01), and poor performance for MCI versus SCD (AUC, 0.63; P = .06). Application of the AD versus SCD discrimination map for prediction of MCI subgroups resulted in good performance for patients with MCI diagnosis converted to AD versus subjects with SCD (AUC, 0.84; P < .01) and fair performance for patients with MCI diagnosis converted to AD versus those with stable MCI (AUC, 0.71; P > .05). 
Conclusion With automated methods, age- and sex-adjusted ASL perfusion maps can be used to classify and predict diagnosis of AD, conversion of MCI to AD, stable MCI, and SCD with good to excellent accuracy and AUC values. © RSNA, 2016.
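    The W score method referred to above amounts to standardizing an observed value against what a regression fitted on a control group predicts for that subject's age and sex; this sketch uses a single scalar value rather than a voxel-wise perfusion map, and the coefficients below are illustrative only:

```python
import numpy as np

def fit_w_score_model(values, age, sex):
    """Fit value ~ age + sex by ordinary least squares in the control
    group; returns the coefficients and the residual SD used to
    standardize new subjects."""
    A = np.column_stack([np.ones_like(age), age, sex])
    coef, *_ = np.linalg.lstsq(A, values, rcond=None)
    resid = values - A @ coef
    return coef, resid.std(ddof=A.shape[1])

def w_score(model, value, age, sex):
    """W score: how many residual SDs the observed value lies from the
    value expected for that age and sex (analogous to a z score, but
    with the confounds regressed out)."""
    coef, sd = model
    expected = coef[0] + coef[1] * age + coef[2] * sex
    return (value - expected) / sd
```

    Applied voxel-wise, this turns each perfusion map into an age- and sex-adjusted map before the support vector machine is trained.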

  6. Motor Oil Classification using Color Histograms and Pattern Recognition Techniques.

    PubMed

    Ahmadi, Shiva; Mani-Varnosfaderani, Ahmad; Habibi, Biuck

    2018-04-20

    Motor oil classification is important for quality control and the identification of oil adulteration. In this work, we propose a simple, rapid, inexpensive and nondestructive approach based on image analysis and pattern recognition techniques for the classification of nine different types of motor oils according to their corresponding color histograms. For this, we applied color histograms in different color spaces such as red green blue (RGB), grayscale, and hue saturation intensity (HSI) in order to extract features that can help with the classification procedure. These color histograms and their combinations were used as input for model development and then were statistically evaluated by using linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), and support vector machine (SVM) techniques. Here, two common solutions for solving a multiclass classification problem were applied: (1) transformation to a binary classification problem using a one-against-all (OAA) approach and (2) extension from binary classifiers to a single globally optimized multilabel classification model. In the OAA strategy, LDA, QDA, and SVM reached up to 97% in terms of accuracy, sensitivity, and specificity for both the training and test sets. In the extension from the binary case, despite the good performance of the SVM classification model, QDA and LDA provided better results, up to 92% for the RGB-grayscale-HSI color histograms and up to 93% for the HSI color map, respectively. In order to reduce the number of independent variables for modeling, a principal component analysis algorithm was used. Our results suggest that the proposed method is promising for the identification and classification of different types of motor oils.
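    In its simplest form, the RGB part of the feature extraction reduces to normalized per-channel histograms; the bin count below is an arbitrary choice for illustration, as the abstract does not specify the paper's binning:

```python
import numpy as np

def rgb_histogram(img, bins=8):
    """Concatenated per-channel histograms of an RGB image of shape
    (h, w, 3), normalized so the feature vector is independent of
    image size; such vectors would feed an LDA/QDA/SVM classifier."""
    feats = []
    for ch in range(3):
        h, _ = np.histogram(img[:, :, ch], bins=bins, range=(0, 256))
        feats.append(h / img[:, :, ch].size)
    return np.concatenate(feats)
```

    Grayscale and HSI variants follow the same pattern after the corresponding color-space conversion, and the concatenated vectors are what a dimensionality-reduction step like PCA would then compress.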

  7. StandFood: Standardization of Foods Using a Semi-Automatic System for Classifying and Describing Foods According to FoodEx2

    PubMed Central

    Eftimov, Tome; Korošec, Peter; Koroušić Seljak, Barbara

    2017-01-01

    The European Food Safety Authority has developed a standardized food classification and description system called FoodEx2. It uses facets to describe food properties and aspects from various perspectives, making it easier to compare food consumption data from different sources and perform more detailed data analyses. However, both food composition data and food consumption data, which need to be linked, are lacking in FoodEx2 because the process of classification and description has to be manually performed—a process that is laborious and requires good knowledge of the system and also good knowledge of food (composition, processing, marketing, etc.). In this paper, we introduce a semi-automatic system for classifying and describing foods according to FoodEx2, which consists of three parts. The first involves a machine learning approach and classifies foods into four FoodEx2 categories, with two for single foods: raw (r) and derivatives (d), and two for composite foods: simple (s) and aggregated (c). The second uses a natural language processing approach and probability theory to describe foods. The third combines the result from the first and the second part by defining post-processing rules in order to improve the result for the classification part. We tested the system using a set of food items (from Slovenia) manually-coded according to FoodEx2. The new semi-automatic system obtained an accuracy of 89% for the classification part and 79% for the description part, or an overall result of 79% for the whole system. PMID:28587103

  8. Comparing writing style feature-based classification methods for estimating user reputations in social media.

    PubMed

    Suh, Jong Hwan

    2016-01-01

    In recent years, the anonymous nature of the Internet has made it difficult to detect manipulated user reputations in social media, as well as to ensure the quality of users and their posts. To deal with this, this study designs and examines an automatic approach that adopts writing style features to estimate user reputations in social media. Under varying ways of defining Good and Bad classes of user reputations based on the collected data, it evaluates the classification performance of state-of-the-art methods: four writing style features, i.e. lexical, syntactic, structural, and content-specific, and eight classification techniques, i.e. four base learners (C4.5, Neural Network (NN), Support Vector Machine (SVM), and Naïve Bayes (NB)) and four Random Subspace (RS) ensemble methods based on the four base learners. With South Korea's Web forum Daum Agora as a test bed, the experimental results show that the configuration of the full feature set containing content-specific features and RS-SVM, combining RS and SVM, gives the best classification accuracy when the test bed posters' reputations are segmented strictly into Good and Bad classes by the portfolio approach. Pairwise t tests on accuracy confirm two expectations from the literature review: first, the feature set adding content-specific features outperforms the others; second, ensemble learning methods are more viable than base learners. Moreover, among the four ways of defining the classes of user reputations, i.e. like, dislike, sum, and portfolio, the results show that the portfolio approach gives the highest accuracy.

  9. Application of artificial neural networks to thermal detection of disbonds

    NASA Technical Reports Server (NTRS)

    Prabhu, D. R.; Howell, P. A.; Syed, H. I.; Winfree, W. P.

    1992-01-01

    A novel technique for processing thermal data is presented and applied to simulated as well as experimental data. Using a neural network for thermal data classification, good classification accuracies are obtained, and the resulting images exhibit very good contrast between bonded and disbonded locations. In order to minimize the preprocessing required before using the network for classification, the temperature values were directly employed to train a network using data from an on-site test run of a commercial aircraft. Training was extremely fast, and the resulting classification also agreed reasonably well with an ultrasonic characterization of the panel. The results obtained using one sample show the partially disbonded vertical doubler. The vertical lines along the doubler correspond to the original extent of the doubler obtained from blueprints of the aircraft.

  10. Asthma in pregnancy: association between the Asthma Control Test and the Global Initiative for Asthma classification and comparisons with spirometry.

    PubMed

    de Araujo, Georgia Véras; Leite, Débora F B; Rizzo, José A; Sarinho, Emanuel S C

    2016-08-01

    The aim of this study was to identify a possible association between the assessment of clinical asthma control using the Asthma Control Test (ACT) and the Global Initiative for Asthma (GINA) classification and to perform comparisons with values of spirometry. Through this cross-sectional study, 103 pregnant women with asthma were assessed from October 2010 to October 2013 in the asthma pregnancy clinic at the Clinical Hospital of the Federal University of Pernambuco. The level of asthma control was assessed using the Global Initiative for Asthma classification, the Asthma Control Test validated for asthmatic expectant mothers, and spirometry; all three methods of assessing asthma control were applied during the same visit between the twenty-first and twenty-seventh weeks of pregnancy. There was a significant association between clinical asthma control assessment using the Asthma Control Test and the Global Initiative for Asthma classification (p<0.001). There were also significant associations between the results of the subjective instruments of asthma control (the GINA classification and the ACT) and lung function as evidenced by spirometry. This study shows that both the Global Initiative for Asthma classification and the Asthma Control Test can be used for asthmatic expectant mothers to assess the clinical control of asthma, especially at the end of the second trimester, which is assumed to be the period of worsening asthma exacerbations during pregnancy. We highlight the importance of the Asthma Control Test as a subjective instrument with easy application, easy interpretation and good reproducibility that does not require spirometry to assess the level of asthma control and can be used in the primary care of asthmatic expectant mothers. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  11. The use of a projection method to simplify portal and hepatic vein segmentation in liver anatomy.

    PubMed

    Huang, Shaohui; Wang, Boliang; Cheng, Ming; Huang, Xiaoyang; Ju, Ying

    2008-12-01

    In living donor liver transplantation, the volume of the potential graft must be measured to ensure sufficient liver function after surgery. Couinaud divided the liver into 8 functionally independent segments. However, this method is not simple to perform in 3D space directly. Thus, we propose a rapid method to segment the liver based on the hepatic vessel tree. The most important step of this method is vascular projection. By carefully selecting a projection plane, a 3D point can be fixed in the projection plane. This greatly helps in rapid classification. This method was validated by applying it to a 3D liver depicted on CT images, and the result was in good agreement with Couinaud's classification.
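The core operation the method relies on, orthogonally projecting 3D vessel points onto a carefully chosen plane, can be sketched as follows (a generic illustration, not the authors' implementation; the plane is assumed to be given by a point on it and a normal vector):

```python
import numpy as np

def project_onto_plane(points, plane_point, normal):
    """Orthogonally project 3D points onto the plane defined by
    a point on the plane and its normal vector."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)  # unit normal
    pts = np.asarray(points, dtype=float)
    d = (pts - plane_point) @ n          # signed distances to the plane
    return pts - np.outer(d, n)          # subtract the normal component

# Example: project a point onto the z = 0 plane
p = np.array([[1.0, 2.0, 3.0]])
print(project_onto_plane(p, np.array([0.0, 0.0, 0.0]),
                         np.array([0.0, 0.0, 1.0])))  # -> [[1. 2. 0.]]
```

Once all vessel points share a plane, the classification reduces to a 2D problem, which is what makes the scheme fast.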

  12. 29 CFR 697.2 - Industry wage rates and effective dates.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... of goods for commerce, as these terms are defined in section 3 of the Fair Labor Standards Act of... classifications in which such employee is engaged. Industry Minimum wage Effective October 3, 2005 Effective October...) Classification A 4.09 4.09 4.09 (2) Classification B 3.92 3.92 3.92 (3) Classification C 3.88 3.88 3.88 (e...

  13. Self Organizing Map-Based Classification of Cathepsin k and S Inhibitors with Different Selectivity Profiles Using Different Structural Molecular Fingerprints: Design and Application for Discovery of Novel Hits.

    PubMed

    Ihmaid, Saleh K; Ahmed, Hany E A; Zayed, Mohamed F; Abadleh, Mohammed M

    2016-01-30

    The main step in a successful drug discovery pipeline is the identification of small potent compounds that selectively bind to the target of interest with high affinity. However, there is still a shortage of efficient and accurate computational methods capable of studying, and hence predicting, compound selectivity properties. In this work, we propose an affordable machine learning method to perform compound selectivity classification and prediction. For this purpose, we collected compounds with reported activity and built a selectivity database formed of 153 cathepsin K and S inhibitors that are considered of medicinal interest. This database comprises three compound sets: two selective ones (K/S and S/K) and one non-selective (KS) set. We subjected this database to the selectivity classification tool 'Emergent Self-Organizing Maps' to explore its capability to differentiate inhibitors selective for one cathepsin over the other. The method exhibited good clustering performance for selective ligands with high accuracy (up to 100 %). Among the possibilities, BAPs and MACCS molecular structural fingerprints were used for this classification. The results demonstrated the method's ability to support structure-selectivity relationship interpretation, and selectivity markers were identified for the design of further novel inhibitors with high activity and target selectivity.
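The clustering engine here is a self-organizing map trained on molecular fingerprints. A minimal numpy sketch of SOM training (toy grid size and learning schedule; this is not the Emergent SOM tool or the paper's fingerprint data):

```python
import numpy as np

def train_som(data, grid=(5, 5), iters=500, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small self-organizing map; returns the weight grid (rows, cols, dim)."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    w = rng.random((rows, cols, data.shape[1]))
    coords = np.dstack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"))
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # best-matching unit for this sample
        bmu = np.unravel_index(np.argmin(((w - x) ** 2).sum(-1)), (rows, cols))
        frac = t / iters
        lr = lr0 * (1 - frac)                  # decaying learning rate
        sigma = sigma0 * (1 - frac) + 1e-3     # shrinking neighborhood
        dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
        h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]  # neighborhood kernel
        w += lr * h * (x - w)
    return w

def bmu_of(w, x):
    """Grid coordinates of the unit closest to x."""
    return np.unravel_index(np.argmin(((w - x) ** 2).sum(-1)), w.shape[:2])
```

After training, compounds mapping to the same unit (or region) form a cluster; selectivity classes then show up as separated regions of the map.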

  14. Scene Semantic Segmentation from Indoor Rgb-D Images Using Encode-Decoder Fully Convolutional Networks

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Li, T.; Pan, L.; Kang, Z.

    2017-09-01

    With increasing attention to the indoor environment and the development of low-cost RGB-D sensors, indoor RGB-D images are easily acquired. However, scene semantic segmentation is still an open problem, which restricts indoor applications. Depth information can help to distinguish regions that are difficult to segment from the RGB images alone because of similar color or texture in indoor scenes. How to utilize the depth information is the key problem of semantic segmentation for RGB-D images. In this paper, we propose an encoder-decoder fully convolutional network for RGB-D image classification. We use Multiple Kernel Maximum Mean Discrepancy (MK-MMD) as a distance measure to find common and distinctive features of the RGB and depth images within the network, enhancing classification performance automatically. To explore better ways of applying MMD, we designed two strategies: the first calculates MMD for each feature map, and the other calculates MMD over the whole batch of features. Based on the classification result, we use fully connected CRFs for the semantic segmentation. The experimental results show that our method achieves good performance on indoor RGB-D image semantic segmentation.
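MK-MMD measures the distance between two feature distributions under a mixture of kernels. A simplified numpy sketch using a fixed mixture of RBF kernels (the bandwidths below are arbitrary illustrations, not the paper's learned kernel weights):

```python
import numpy as np

def rbf_mmd2(X, Y, gammas=(0.5, 1.0, 2.0)):
    """Squared MMD between sample sets X and Y under a mixture of RBF
    kernels -- a simplified stand-in for multi-kernel MK-MMD."""
    def k(A, B):
        # pairwise squared distances, then average the kernel mixture
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sum(np.exp(-g * d2) for g in gammas) / len(gammas)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()
```

A small value means the two feature sets (e.g. RGB-derived and depth-derived maps) are distributed alike; a large value flags modality-specific features.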

  15. Feature selection using angle modulated simulated Kalman filter for peak classification of EEG signals.

    PubMed

    Adam, Asrul; Ibrahim, Zuwairie; Mokhtar, Norrima; Shapiai, Mohd Ibrahim; Mubin, Marizan; Saad, Ismail

    2016-01-01

    In existing research on peak classification of electroencephalogram (EEG) signals, the existing models, such as the Dumpala, Acir, Liu, and Dingle peak models, employ different sets of features. However, none of these models offers good performance across applications; performance is found to be problem dependent. Therefore, the objective of this study is to combine all the associated features from the existing models and then select the best combination of features. A new optimization algorithm, namely the angle modulated simulated Kalman filter (AMSKF), is employed as the feature selector, and the neural network random weight method is utilized in the proposed AMSKF technique as a classifier. In the conducted experiment, 11,781 peak candidate samples are employed for validation. The samples are collected from three different peak event-related EEG signals of 30 healthy subjects: (1) single eye blink, (2) double eye blink, and (3) eye movement signals. The experimental results show that the proposed AMSKF feature selector is able to find the best combination of features and performs on par with existing related studies of epileptic EEG event classification.
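Angle modulation lets a continuous optimizer like the simulated Kalman filter search a binary feature-selection space: it evolves four real parameters (a, b, c, d) of a trigonometric generating function, which is sampled at n positions and thresholded at zero to produce the n-bit feature mask. A sketch of the commonly used generating function (the paper's exact variant may differ):

```python
import math

def am_bitstring(a, b, c, d, n_bits):
    """Decode an angle-modulation tuple (a, b, c, d) into an n-bit
    feature-selection mask: sample g at integer positions, threshold at 0."""
    bits = []
    for j in range(n_bits):
        # standard angle-modulation generating function
        g = math.sin(2 * math.pi * (j - a) * b
                     * math.cos(2 * math.pi * (j - a) * c)) + d
        bits.append(1 if g > 0 else 0)
    return bits
```

With d = 1 the function stays positive at integer samples for a = 0, b = c = 1, so every feature is kept; other parameter tuples carve out different masks, which is the search space the AMSKF explores.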

  16. 28 CFR 523.30 - What is educational good time sentence credit?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE District of Columbia Educational Good Time Credit § 523.30 What is educational good time sentence credit? Educational good time sentence credit is... 28 Judicial Administration 2 2013-07-01 2013-07-01 false What is educational good time sentence...

  17. 28 CFR 523.30 - What is educational good time sentence credit?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE District of Columbia Educational Good Time Credit § 523.30 What is educational good time sentence credit? Educational good time sentence credit is... 28 Judicial Administration 2 2014-07-01 2014-07-01 false What is educational good time sentence...

  18. 28 CFR 523.13 - Community corrections center good time.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.13 Community corrections center good time. Extra good time for an inmate in a Federal or contract Community Corrections... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Community corrections center good time...

  19. 28 CFR 523.13 - Community corrections center good time.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.13 Community corrections center good time. Extra good time for an inmate in a Federal or contract Community Corrections... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Community corrections center good time...

  20. 28 CFR 523.30 - What is educational good time sentence credit?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE District of Columbia Educational Good Time Credit § 523.30 What is educational good time sentence credit? Educational good time sentence credit is... 28 Judicial Administration 2 2012-07-01 2012-07-01 false What is educational good time sentence...

  1. 28 CFR 523.13 - Community corrections center good time.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.13 Community corrections center good time. Extra good time for an inmate in a Federal or contract Community Corrections... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Community corrections center good time...

  2. 28 CFR 523.13 - Community corrections center good time.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.13 Community corrections center good time. Extra good time for an inmate in a Federal or contract Community Corrections... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Community corrections center good time...

  3. 28 CFR 523.13 - Community corrections center good time.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.13 Community corrections center good time. Extra good time for an inmate in a Federal or contract Community Corrections... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Community corrections center good time...

  4. Leucocyte classification for leukaemia detection using image processing techniques.

    PubMed

    Putzu, Lorenzo; Caocci, Giovanni; Di Ruberto, Cecilia

    2014-11-01

    The counting and classification of blood cells allow for the evaluation and diagnosis of a vast number of diseases. The analysis of white blood cells (WBCs) allows for the detection of acute lymphoblastic leukaemia (ALL), a blood cancer that can be fatal if left untreated. Currently, the morphological analysis of blood cells is performed manually by skilled operators. However, this method has numerous drawbacks, such as slow analysis, non-standard accuracy, and dependence on the operator's skill. Few examples of automated systems that can analyse and classify blood cells have been reported in the literature, and most of these systems are only partially developed. This paper presents a complete and fully automated method for WBC identification and classification using microscopic images. In contrast to other approaches that identify the nuclei first, which are more prominent than other components, the proposed approach isolates the whole leucocyte and then separates the nucleus and cytoplasm. This approach is necessary to analyse each cell component in detail. From each cell component, different features, such as shape, colour and texture, are extracted using a new approach for background pixel removal. This feature set was used to train different classification models in order to determine which one is most suitable for the detection of leukaemia. Using our method, 245 of 267 total leucocytes were properly identified (92% accuracy) from 33 images taken with the same camera and under the same lighting conditions. Performing this evaluation using different classification models allowed us to establish that the support vector machine with a Gaussian radial basis kernel is the most suitable model for the identification of ALL, with an accuracy of 93% and a sensitivity of 98%. Furthermore, we evaluated the goodness of our new feature set, which displayed better performance with each evaluated classification model.
The proposed method permits the analysis of blood cells automatically via image processing techniques, and it represents a medical tool to avoid the numerous drawbacks associated with manual observation. This process could also be used for counting, as it provides excellent performance and allows for early diagnostic suspicion, which can then be confirmed by a haematologist through specialised techniques. Copyright © 2014 Elsevier B.V. All rights reserved.
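The best-performing model reported, an SVM with a Gaussian radial basis kernel, can be sketched with scikit-learn. This assumes scikit-learn is available; the features below are synthetic stand-ins, not the paper's shape/colour/texture descriptors:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy two-class feature set standing in for the extracted cell features
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)

# Gaussian radial basis kernel; gamma='scale' adapts the kernel width
# to the feature variance
clf = SVC(kernel="rbf", gamma="scale", C=1.0)
clf.fit(X, y)
acc = clf.score(X, y)
```

In practice the choice between candidate models would be made on held-out data (e.g. cross-validation), as plain training accuracy is optimistic.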

  5. Rule extraction from minimal neural networks for credit card screening.

    PubMed

    Setiono, Rudy; Baesens, Bart; Mues, Christophe

    2011-08-01

    While feedforward neural networks have been widely accepted as effective tools for solving classification problems, the issue of finding the best network architecture remains unresolved, particularly so in real-world problem settings. We address this issue in the context of credit card screening, where it is important to not only find a neural network with good predictive performance but also one that facilitates a clear explanation of how it produces its predictions. We show that minimal neural networks with as few as one hidden unit provide good predictive accuracy, while having the added advantage of making it easier to generate concise and comprehensible classification rules for the user. To further reduce model size, a novel approach is suggested in which network connections from the input units to this hidden unit are removed by a very straightforward pruning procedure. In terms of predictive accuracy, both the minimized neural networks and the rule sets generated from them are shown to compare favorably with other neural network based classifiers. The rules generated from the minimized neural networks are concise and thus easier to validate in a real-life setting.
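With a single hidden unit, the input-to-hidden weights form one vector, and the pruning step amounts to zeroing out its least important entries. A hedged sketch using magnitude-based pruning (the paper's exact removal criterion may differ):

```python
import numpy as np

def prune_inputs(w_in, keep_ratio=0.5):
    """Zero out the input-to-hidden connections with the smallest
    magnitudes -- a simple stand-in for the pruning procedure."""
    w = w_in.copy()
    k = int(len(w) * (1 - keep_ratio))   # number of weights to remove
    drop = np.argsort(np.abs(w))[:k]     # indices of smallest |weights|
    w[drop] = 0.0
    return w

print(prune_inputs(np.array([0.1, -2.0, 0.05, 3.0])))  # -> [ 0. -2.  0.  3.]
```

Fewer surviving connections means fewer antecedents in the extracted rules, which is what makes the resulting rule sets concise.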

  6. Identifying and classifying hyperostosis frontalis interna via computerized tomography.

    PubMed

    May, Hila; Peled, Nathan; Dar, Gali; Hay, Ori; Abbas, Janan; Masharawi, Youssef; Hershkovitz, Israel

    2010-12-01

    The aim of this study was to recognize the radiological characteristics of hyperostosis frontalis interna (HFI) and to establish a valid and reliable method for its identification and classification. A reliability test was carried out on 27 individuals who had undergone a head computerized tomography (CT) scan. Intra-observer reliability was obtained by examining the images three times, by the same researcher, with a 2-week interval between each sample ranking. The inter-observer test was performed by three independent researchers. A validity test was carried out using two methods for identifying and classifying HFI: 46 cadaver skullcaps were ranked twice via computerized tomography scans and then by direct observation. Reliability and validity were calculated using Kappa test (SPSS 15.0). Reliability tests of ranking HFI via CT scans demonstrated good results (K > 0.7). As for validity, a very good consensus was obtained between the CT and direct observation, when moderate and advanced types of HFI were present (K = 0.82). The suggested classification method for HFI, using CT, demonstrated a sensitivity of 84%, specificity of 90.5%, and positive predictive value of 91.3%. In conclusion, volume rendering is a reliable and valid tool for identifying HFI. The suggested three-scale classification is most suitable for radiological diagnosis of the phenomena. Considering the increasing awareness of HFI as an early indicator of a developing malady, this study may assist radiologists in identifying and classifying the phenomena.
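The agreement scores reported here are Cohen's kappa, i.e. observed agreement corrected for the agreement expected by chance. A minimal stand-alone implementation for two raters:

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Cohen's kappa for two raters' labels (chance-corrected agreement)."""
    n = len(rater1)
    # observed proportion of agreement
    po = sum(a == b for a, b in zip(rater1, rater2)) / n
    # agreement expected by chance from each rater's label frequencies
    c1, c2 = Counter(rater1), Counter(rater2)
    pe = sum(c1[label] * c2[label] for label in c1) / n ** 2
    return (po - pe) / (1 - pe)
```

Values above roughly 0.6-0.8 are conventionally read as good to very good agreement, which matches the thresholds used in the abstract (K > 0.7, K = 0.82).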

  7. A Critique of Health System Performance Measurement.

    PubMed

    Lynch, Thomas

    2015-01-01

    Health system performance measurement is a ubiquitous phenomenon. Many authors have identified multiple methodological and substantive problems with performance measurement practices. Despite the validity of these criticisms and their cross-national character, the practice of health system performance measurement persists. Theodore Marmor suggests that performance measurement invokes an "incantatory response" wrapped within "linguistic muddle." In this article, I expand upon Marmor's insights using Pierre Bourdieu's theoretical framework to suggest that, far from an aberration, the "linguistic muddle" identified by Marmor is an indicator of a broad struggle about the representation and classification of public health services as a public good. I present a case study of performance measurement from Alberta, Canada, examining how this representational struggle occurs and what the stakes are. © The Author(s) 2015.

  8. Exploring Deep Learning and Transfer Learning for Colonic Polyp Classification

    PubMed Central

    Uhl, Andreas; Wimmer, Georg; Häfner, Michael

    2016-01-01

    Recently, Deep Learning, especially through Convolutional Neural Networks (CNNs) has been widely used to enable the extraction of highly representative features. This is done among the network layers by filtering, selecting, and using these features in the last fully connected layers for pattern classification. However, CNN training for automated endoscopic image classification still provides a challenge due to the lack of large and publicly available annotated databases. In this work we explore Deep Learning for the automated classification of colonic polyps using different configurations for training CNNs from scratch (or full training) and distinct architectures of pretrained CNNs tested on 8-HD-endoscopic image databases acquired using different modalities. We compare our results with some commonly used features for colonic polyp classification and the good results suggest that features learned by CNNs trained from scratch and the “off-the-shelf” CNNs features can be highly relevant for automated classification of colonic polyps. Moreover, we also show that the combination of classical features and “off-the-shelf” CNNs features can be a good approach to further improve the results. PMID:27847543

  9. Drawing a baseline in aesthetic quality assessment

    NASA Astrophysics Data System (ADS)

    Rubio, Fernando; Flores, M. Julia; Puerta, Jose M.

    2018-04-01

    Aesthetic classification of images is an inherently subjective task. There is no validated collection of images/photographs labeled as having good or bad quality by experts. Nowadays, the closest approximation is to use databases of photos where a group of users rates each image. Hence, there is not a unique good/bad label but a rating distribution given by user votes. Because of this peculiarity, it is not possible to state the problem of binary aesthetic supervised classification as directly as other computer vision tasks. Recent literature follows an approach in which researchers take the average rating from the users for each image and establish an arbitrary threshold to determine its class or label: images above the threshold are considered of good quality, while images below the threshold are considered of bad quality. This paper analyzes the current literature and reviews the attributes used to represent an image, distinguishing three families: specific, general, and deep features. Among those that have proved most competitive, we selected a representative subset, our main goal being to establish a clear experimental framework. Finally, once the features were selected, we used them on the full AVA dataset. For validation we report not only accuracy values, which are not very informative in this case, but also metrics able to evaluate classification power on imbalanced datasets. We conducted a series of experiments in which distinct well-known classifiers are learned from the data. In this way, the paper provides what we consider valuable and valid baseline results for the given problem.
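One such imbalance-aware metric is balanced accuracy, the mean of the per-class recalls. A compact sketch showing why plain accuracy misleads when one class dominates:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean per-class recall; unlike plain accuracy, it is not inflated
    when one class (e.g. 'good' photos) dominates the dataset."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(classes)

# Predicting everything as the majority class: 90% accuracy, but
# balanced accuracy exposes that the minority class is never found.
y_true = [1] * 9 + [0]
y_pred = [1] * 10
print(balanced_accuracy(y_true, y_pred))  # -> 0.5
```

F1 score and precision/recall per class serve the same purpose and are often reported alongside it.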

  10. Artificial intelligence in sports on the example of weight training.

    PubMed

    Novatchkov, Hristo; Baca, Arnold

    2013-01-01

    The overall goal of the present study was to illustrate the potential of artificial intelligence (AI) techniques in sports on the example of weight training. The research focused in particular on the implementation of pattern recognition methods for the evaluation of performed exercises on training machines. The data acquisition was carried out using way and cable force sensors attached to various weight machines, thereby enabling the measurement of essential displacement and force determinants during training. On the basis of the gathered data, it was consequently possible to deduce other significant characteristics like time periods or movement velocities. These parameters were applied for the development of intelligent methods adapted from conventional machine learning concepts, allowing an automatic assessment of the exercise technique and providing individuals with appropriate feedback. In practice, the implementation of such techniques could be crucial for the investigation of the quality of the execution, the assistance of athletes but also coaches, the training optimization and for prevention purposes. For the current study, the data was based on measurements from 15 rather inexperienced participants, performing 3-5 sets of 10-12 repetitions on a leg press machine. The initially preprocessed data was used for the extraction of significant features, on which supervised modeling methods were applied. Professional trainers were involved in the assessment and classification processes by analyzing the video recorded executions. The so far obtained modeling results showed good performance and prediction outcomes, indicating the feasibility and potency of AI techniques in assessing performances on weight training equipment automatically and providing sportsmen with prompt advice. 
    Key points: Artificial intelligence is a promising field for sport-related analysis. Implementations integrating pattern recognition techniques enable the automatic evaluation of data measurements. Artificial neural networks applied for the analysis of weight training data show good performance and high classification rates.

  11. Artificial Intelligence in Sports on the Example of Weight Training

    PubMed Central

    Novatchkov, Hristo; Baca, Arnold

    2013-01-01

    The overall goal of the present study was to illustrate the potential of artificial intelligence (AI) techniques in sports on the example of weight training. The research focused in particular on the implementation of pattern recognition methods for the evaluation of performed exercises on training machines. The data acquisition was carried out using way and cable force sensors attached to various weight machines, thereby enabling the measurement of essential displacement and force determinants during training. On the basis of the gathered data, it was consequently possible to deduce other significant characteristics like time periods or movement velocities. These parameters were applied for the development of intelligent methods adapted from conventional machine learning concepts, allowing an automatic assessment of the exercise technique and providing individuals with appropriate feedback. In practice, the implementation of such techniques could be crucial for the investigation of the quality of the execution, the assistance of athletes but also coaches, the training optimization and for prevention purposes. For the current study, the data was based on measurements from 15 rather inexperienced participants, performing 3-5 sets of 10-12 repetitions on a leg press machine. The initially preprocessed data was used for the extraction of significant features, on which supervised modeling methods were applied. Professional trainers were involved in the assessment and classification processes by analyzing the video recorded executions. The so far obtained modeling results showed good performance and prediction outcomes, indicating the feasibility and potency of AI techniques in assessing performances on weight training equipment automatically and providing sportsmen with prompt advice. Key points Artificial intelligence is a promising field for sport-related analysis. 
Implementations integrating pattern recognition techniques enable the automatic evaluation of data measurements. Artificial neural networks applied for the analysis of weight training data show good performance and high classification rates. PMID:24149722

  12. 28 CFR 523.12 - Work/study release good time.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Work/study release good time. 523.12..., CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.12 Work/study release good time. Extra good time for an inmate in work or study release programs is awarded automatically, beginning on the...

  13. 28 CFR 523.12 - Work/study release good time.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.12 Work/study release good time. Extra good time for an inmate in work or study release programs is awarded automatically, beginning on the... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Work/study release good time. 523.12...

  14. 28 CFR 523.2 - Good time credit for violators.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Good Time § 523.2 Good time credit for violators. (a) An... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Good time credit for violators. 523.2... good time, upon being returned to custody for violation of supervised release, based on the number of...

  15. 28 CFR 523.2 - Good time credit for violators.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Good Time § 523.2 Good time credit for violators. (a) An... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Good time credit for violators. 523.2... good time, upon being returned to custody for violation of supervised release, based on the number of...

  16. 28 CFR 523.12 - Work/study release good time.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.12 Work/study release good time. Extra good time for an inmate in work or study release programs is awarded automatically, beginning on the... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Work/study release good time. 523.12...

  17. 28 CFR 523.2 - Good time credit for violators.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Good Time § 523.2 Good time credit for violators. (a) An... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Good time credit for violators. 523.2... good time, upon being returned to custody for violation of supervised release, based on the number of...

  18. 28 CFR 523.2 - Good time credit for violators.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Good Time § 523.2 Good time credit for violators. (a) An... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Good time credit for violators. 523.2... good time, upon being returned to custody for violation of supervised release, based on the number of...

  19. 28 CFR 523.12 - Work/study release good time.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.12 Work/study release good time. Extra good time for an inmate in work or study release programs is awarded automatically, beginning on the... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Work/study release good time. 523.12...

  20. 28 CFR 523.12 - Work/study release good time.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.12 Work/study release good time. Extra good time for an inmate in work or study release programs is awarded automatically, beginning on the... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Work/study release good time. 523.12...

  1. 28 CFR 523.2 - Good time credit for violators.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Good Time § 523.2 Good time credit for violators. (a) An... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Good time credit for violators. 523.2... good time, upon being returned to custody for violation of supervised release, based on the number of...

  2. Semantic Shot Classification in Sports Video

    NASA Astrophysics Data System (ADS)

    Duan, Ling-Yu; Xu, Min; Tian, Qi

    2003-01-01

    In this paper, we present a unified framework for semantic shot classification in sports videos. Unlike previous approaches, which focus on clustering by aggregating shots with similar low-level features, the proposed scheme makes use of domain knowledge of a specific sport to perform a top-down video shot classification, including identification of video shot classes for each sport, and supervised learning and classification of the given sports video with low-level and middle-level features extracted from the sports video. It is observed that for each sport we can predefine a small number of semantic shot classes, about 5~10, which cover 90~95% of sports broadcasting video. With the supervised learning method, we can map the low-level features to middle-level semantic video shot attributes such as dominant object motion (a player), camera motion patterns, and court shape, etc. On the basis of the appropriate fusion of those middle-level shot classes, we classify video shots into the predefined video shot classes, each of which has a clear semantic meaning. The proposed method has been tested over 4 types of sports videos: tennis, basketball, volleyball and soccer. Good classification accuracy of 85~95% has been achieved. With correctly classified sports video shots, further structural and temporal analysis, such as event detection, video skimming, table of contents, etc., will be greatly facilitated.

  3. Increasing CAD system efficacy for lung texture analysis using a convolutional network

    NASA Astrophysics Data System (ADS)

    Tarando, Sebastian Roberto; Fetita, Catalin; Faccinetto, Alex; Brillet, Pierre-Yves

    2016-03-01

    The infiltrative lung diseases are a class of irreversible, non-neoplastic lung pathologies requiring regular follow-up with CT imaging. Quantifying the evolution of the patient status imposes the development of automated classification tools for lung texture. For the large majority of CAD systems, such classification relies on a two-dimensional analysis of axial CT images. In a previously developed CAD system, we proposed a fully-3D approach exploiting a multi-scale morphological analysis which showed good performance in detecting diseased areas, but with a major drawback consisting of sometimes overestimating the pathological areas and mixing different type of lung patterns. This paper proposes a combination of the existing CAD system with the classification outcome provided by a convolutional network, specifically tuned-up, in order to increase the specificity of the classification and the confidence to diagnosis. The advantage of using a deep learning approach is a better regularization of the classification output (because of a deeper insight into a given pathological class over a large series of samples) where the previous system is extra-sensitive due to the multi-scale response on patient-specific, localized patterns. In a preliminary evaluation, the combined approach was tested on a 10 patient database of various lung pathologies, showing a sharp increase of true detections.

  4. Research on Remote Sensing Geological Information Extraction Based on Object Oriented Classification

    NASA Astrophysics Data System (ADS)

    Gao, Hui

    2018-04-01

    Northern Tibet belongs to the sub-cold arid climate zone of the plateau and is rarely visited; geological working conditions there are very poor. However, the stratum exposures are good and human interference is minimal. Therefore, research on the automatic classification and extraction of remote sensing geological information has typical significance and good application prospects. Using object-oriented classification of Worldview2 high-resolution remote sensing data of northern Tibet, combined with tectonic information and image enhancement, the lithological spectral features, shape features, spatial locations, and topological relations of various kinds of geological information are mined. By setting thresholds within a hierarchical classification, eight kinds of geological information were classified and extracted. Comparison with existing geological maps shows that the overall accuracy reached 87.8561 %, indicating that the object-oriented method is effective and feasible for this study area and provides a new approach for the automatic extraction of remote sensing geological information.
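The threshold-plus-hierarchy scheme described above can be sketched as an ordered rule list: the first class whose band thresholds all hold is assigned, and later rules only see what earlier ones did not claim. The band names and threshold values below are purely illustrative, not the study's:

```python
def classify_pixel(features, rules):
    """Hierarchical threshold classification: the first rule whose band
    thresholds all hold wins; remaining rules inherit the rest."""
    for label, checks in rules:
        if all(lo <= features[band] <= hi for band, (lo, hi) in checks.items()):
            return label
    return "unclassified"

# Hypothetical two-level hierarchy over a single spectral band
rules = [
    ("water", {"nir": (0.0, 0.1)}),      # low near-infrared reflectance
    ("bare_rock", {"nir": (0.1, 1.0)}),  # everything brighter in NIR
]
print(classify_pixel({"nir": 0.05}, rules))  # -> water
```

Object-oriented workflows apply the same idea to segmented image objects rather than single pixels, adding shape and topology attributes to the rule conditions.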

  5. Ensemble analyses improve signatures of tumour hypoxia and reveal inter-platform differences

    PubMed Central

    2014-01-01

    Background The reproducibility of transcriptomic biomarkers across datasets remains poor, limiting clinical application. We and others have suggested that this is in part caused by differential error structure between datasets, and its incomplete removal by pre-processing algorithms. Methods To test this hypothesis, we systematically assessed the effects of pre-processing on biomarker classification using 24 different pre-processing methods and 15 distinct signatures of tumour hypoxia in 10 datasets (2,143 patients). Results We confirm strong pre-processing effects for all datasets and signatures, and find that these differ between microarray versions. Importantly, exploiting different pre-processing techniques in an ensemble approach improved classification for a majority of signatures. Conclusions Assessing biomarkers using an ensemble of pre-processing techniques shows clear value across multiple diseases, datasets and biomarkers. Importantly, ensemble classification improves biomarkers with initially good results but does not result in spuriously improved performance for poor biomarkers. While further research is required, this approach has the potential to become a standard for transcriptomic biomarkers. PMID:24902696

  6. A Multiagent-based Intrusion Detection System with the Support of Multi-Class Supervised Classification

    NASA Astrophysics Data System (ADS)

    Shyu, Mei-Ling; Sainani, Varsha

    The increasing number of network-security-related incidents has made it necessary for organizations to actively protect their sensitive data with network intrusion detection systems (IDSs). IDSs are expected to analyze a large volume of data without placing a significant added load on the monitoring systems and networks. This requires good data mining strategies that take less time and give accurate results. In this study, a novel data mining assisted multiagent-based intrusion detection system (DMAS-IDS) is proposed, particularly with the support of multiclass supervised classification. These agents can detect and take predefined actions against malicious activities, and data mining techniques can help detect them. Our proposed DMAS-IDS shows superior performance compared to central sniffing IDS techniques, and saves network resources compared to other distributed IDSs with mobile agents that activate too many sniffers, causing bottlenecks in the network. This is one of the major motivations to use a distributed model based on a multiagent platform along with a supervised classification technique.

  7. Crowdsourcing as a novel technique for retinal fundus photography classification: analysis of images in the EPIC Norfolk cohort on behalf of the UK Biobank Eye and Vision Consortium.

    PubMed

    Mitry, Danny; Peto, Tunde; Hayat, Shabina; Morgan, James E; Khaw, Kay-Tee; Foster, Paul J

    2013-01-01

    Crowdsourcing is the process of outsourcing numerous tasks to many untrained individuals. Our aim was to assess the performance and repeatability of crowdsourcing for the classification of retinal fundus photography. One hundred retinal fundus photographs with pre-determined disease criteria were selected by experts from a large cohort study. After reading brief instructions and an example classification, knowledge workers (KWs) from a crowdsourcing platform were asked to classify each image as normal or abnormal with grades of severity. Each image was classified 20 times by different KWs. Four study designs were examined to assess the effect of varying incentive and KW experience on classification accuracy. All study designs were conducted twice to examine repeatability. Performance was assessed by comparing the sensitivity, specificity and area under the receiver operating characteristic curve (AUC). Without restriction on eligible participants, two thousand classifications of 100 images were received in under 24 hours at minimal cost. In trial 1, all study designs had an AUC (95% CI) of 0.701 (0.680-0.721) or greater for classification of normal/abnormal. In trial 1, the highest AUC (95% CI) for normal/abnormal classification was 0.757 (0.738-0.776), for KWs with moderate experience. Comparable results were observed in trial 2. In trial 1, between 64% and 86% of any abnormal image was correctly classified by over half of all KWs; in trial 2, this ranged between 74% and 97%. Sensitivity was ≥96% for normal versus severely abnormal detections across all trials. Sensitivity for normal versus mildly abnormal varied between 61% and 79% across trials. With minimal training, crowdsourcing represents an accurate, rapid and cost-effective method of retinal image analysis which demonstrates good repeatability. Larger studies with more comprehensive participant training are needed to explore the utility of this compelling technique in large-scale medical image analysis.
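    In this design each image receives 20 independent worker labels, which must be collapsed to a single decision before sensitivity and specificity can be computed. A minimal sketch of one such aggregation step (the function names and the tie-break toward 'abnormal' are illustrative assumptions, not the paper's protocol):

```python
from collections import Counter

def aggregate_majority(votes):
    """Collapse one image's worker votes ('normal'/'abnormal') to a single label."""
    counts = Counter(votes)
    # Tie-break toward 'abnormal' so borderline images are flagged for review.
    return 'abnormal' if counts['abnormal'] >= counts['normal'] else 'normal'

def sensitivity_specificity(truth, predicted):
    """Sensitivity and specificity of aggregated labels against expert truth."""
    tp = sum(1 for t, p in zip(truth, predicted) if t == 'abnormal' and p == 'abnormal')
    tn = sum(1 for t, p in zip(truth, predicted) if t == 'normal' and p == 'normal')
    fn = sum(1 for t, p in zip(truth, predicted) if t == 'abnormal' and p == 'normal')
    fp = sum(1 for t, p in zip(truth, predicted) if t == 'normal' and p == 'abnormal')
    return tp / (tp + fn), tn / (tn + fp)
```

    The study additionally grades severity and reports AUCs; this sketch covers only the binary normal/abnormal collapse.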

  8. 49 CFR 369.3 - Classification of carriers-motor carriers of passengers.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Producer Price Index of Finished Goods and is used to eliminate the effects of inflation from the classification process. Note: Each carrier's operating revenues will be deflated annually using the Producers...

  9. 49 CFR 369.3 - Classification of carriers-motor carriers of passengers.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Producer Price Index of Finished Goods and is used to eliminate the effects of inflation from the classification process. Note: Each carrier's operating revenues will be deflated annually using the Producers...

  10. The Classification of Diabetes Mellitus Using Kernel k-means

    NASA Astrophysics Data System (ADS)

    Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.

    2018-01-01

    Diabetes mellitus is a metabolic disorder characterized by chronic hyperglycemia. Automatic detection of diabetes mellitus is still challenging. This study detected diabetes mellitus using the kernel k-means algorithm. Kernel k-means is an algorithm developed from the k-means algorithm: it uses kernel learning, which can handle data that are not linearly separable, and this is what distinguishes it from common k-means. The performance of kernel k-means in detecting diabetes mellitus was also compared with the SOM algorithm. The experimental results show that kernel k-means performs well and considerably better than SOM.
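    Kernel k-means avoids explicit feature maps by computing point-to-centroid distances directly from a kernel matrix. A rough sketch under assumed details (RBF kernel and explicit initial labels are my choices; the paper's exact kernel and initialization are not stated here):

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gram matrix K_ij = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_kmeans(K, k, init_labels, n_iter=100):
    """Lloyd-style updates using only kernel evaluations (no explicit centroids)."""
    n = K.shape[0]
    labels = np.asarray(init_labels).copy()
    diag = np.diag(K)
    for _ in range(n_iter):
        dist = np.full((n, k), np.inf)
        for c in range(k):
            mask = labels == c
            nc = mask.sum()
            if nc == 0:
                continue  # empty cluster: leave its distances at +inf
            # ||phi(x_i) - mu_c||^2 = K_ii - (2/|c|) sum_j K_ij + (1/|c|^2) sum_jl K_jl
            dist[:, c] = (diag
                          - 2.0 * K[:, mask].sum(axis=1) / nc
                          + K[np.ix_(mask, mask)].sum() / nc ** 2)
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels
```

    The distance expansion in the comment is what lets the update run without ever materializing the feature map, which is exactly how the kernel variant handles non-linearly separable data.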

  11. Automatic bad channel detection in intracranial electroencephalographic recordings using ensemble machine learning.

    PubMed

    Tuyisenge, Viateur; Trebaul, Lena; Bhattacharjee, Manik; Chanteloup-Forêt, Blandine; Saubat-Guigui, Carole; Mîndruţă, Ioana; Rheims, Sylvain; Maillard, Louis; Kahane, Philippe; Taussig, Delphine; David, Olivier

    2018-03-01

    Intracranial electroencephalographic (iEEG) recordings contain "bad channels", which show non-neuronal signals. Here, we developed a new method that automatically detects iEEG bad channels using machine learning of seven signal features. The features quantified the signals' variance, spatial-temporal correlation and nonlinear properties. Because the number of bad channels is usually much lower than the number of good channels, we implemented an ensemble bagging classifier, known to be optimal in terms of stability and predictive accuracy for datasets with imbalanced class distributions. This method was applied to stereo-electroencephalographic (SEEG) signals recorded during low-frequency stimulations performed in 206 patients from 5 clinical centers. We found that the classification accuracy was extremely good: it increased with the number of subjects used to train the classifier and reached a plateau at 99.77% for 110 subjects. The classification performance was thus not impacted by the multicentric nature of the data. The proposed method to automatically detect bad channels demonstrated convincing results and could be used on larger datasets for automatic quality control of iEEG data. This is the first method proposed to classify bad channels in iEEG and should help improve data selection when reviewing iEEG signals. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
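    The core idea of a bagging ensemble is to train many weak classifiers on bootstrap resamples and combine them by voting, which stabilizes predictions when one class is rare. The study's classifier works on seven signal features; the toy sketch below substitutes decision stumps on generic numeric features (all names, the stump learner and the parameters are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def fit_stump(X, y):
    """Pick the (feature, threshold, polarity) decision stump with lowest training
    error. Thresholds are midpoints between consecutive unique feature values,
    plus a sentinel below the minimum so constant predictions are representable."""
    best, best_err = (0, 0.0, 1), np.inf
    for f in range(X.shape[1]):
        v = np.unique(X[:, f])
        thresholds = np.concatenate(([v[0] - 1.0], (v[:-1] + v[1:]) / 2.0))
        for t in thresholds:
            for pol in (1, -1):
                err = np.mean(np.where(pol * (X[:, f] - t) > 0, 1, 0) != y)
                if err < best_err:
                    best, best_err = (f, t, pol), err
    return best

def predict_stump(stump, X):
    f, t, pol = stump
    return np.where(pol * (X[:, f] - t) > 0, 1, 0)

def bagging_fit(X, y, n_estimators=25, seed=0):
    """Train each stump on a bootstrap resample of the training set."""
    rng = np.random.default_rng(seed)
    return [fit_stump(X[idx], y[idx])
            for idx in (rng.integers(0, len(X), size=len(X))
                        for _ in range(n_estimators))]

def bagging_predict(stumps, X):
    """Majority vote over the ensemble."""
    votes = np.mean([predict_stump(s, X) for s in stumps], axis=0)
    return (votes >= 0.5).astype(int)
```

    Averaging many bootstrap-trained learners is what gives bagging its stability; for strongly imbalanced problems like bad-channel detection, resampling can additionally be balanced per class.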

  12. Data mining of tree-based models to analyze freeway accident frequency.

    PubMed

    Chang, Li-Yen; Chen, Wen-Chieh

    2005-01-01

    Statistical models, such as Poisson or negative binomial regression models, have been employed to analyze vehicle accident frequency for many years. However, these models have their own model assumptions and pre-defined underlying relationship between dependent and independent variables. If these assumptions are violated, the model could lead to erroneous estimation of accident likelihood. Classification and Regression Tree (CART), one of the most widely applied data mining techniques, has been commonly employed in business administration, industry, and engineering. CART does not require any pre-defined underlying relationship between target (dependent) variable and predictors (independent variables) and has been shown to be a powerful tool, particularly for dealing with prediction and classification problems. This study collected the 2001-2002 accident data of National Freeway 1 in Taiwan. A CART model and a negative binomial regression model were developed to establish the empirical relationship between traffic accidents and highway geometric variables, traffic characteristics, and environmental factors. The CART findings indicated that the average daily traffic volume and precipitation variables were the key determinants for freeway accident frequencies. By comparing the prediction performance between the CART and the negative binomial regression models, this study demonstrates that CART is a good alternative method for analyzing freeway accident frequencies.

  13. Techniques of EMG signal analysis: detection, processing, classification and applications

    PubMed Central

    Hussain, M.S.; Mohd-Yasin, F.

    2006-01-01

    Electromyography (EMG) signals can be used for clinical/biomedical applications, Evolvable Hardware Chip (EHW) development, and modern human-computer interaction. EMG signals acquired from muscles require advanced methods for detection, decomposition, processing, and classification. The purpose of this paper is to illustrate the various methodologies and algorithms for EMG signal analysis and to provide efficient and effective ways of understanding the signal and its nature. We further point out some of the hardware implementations using EMG, focusing on applications related to prosthetic hand control, grasp recognition, and human-computer interaction. A comparison study is also given to show the performance of various EMG signal analysis methods. This paper provides researchers with a good understanding of EMG signals and their analysis procedures. This knowledge will help them develop more powerful, flexible, and efficient applications. PMID:16799694

  14. [Research on the methods for multi-class kernel CSP-based feature extraction].

    PubMed

    Wang, Jinjia; Zhang, Lingzhi; Hu, Bei

    2012-04-01

    To relax the presumption of strictly linear patterns in common spatial patterns (CSP), we studied kernel CSP (KCSP). A new multi-class KCSP (MKCSP) approach is proposed in this paper, which combines the kernel approach with the multi-class CSP technique. In this approach, we used kernel spatial patterns for each class against all others, and extracted signal components specific to one condition from EEG data sets of multiple conditions. Classification was then performed using a logistic linear classifier. Dataset III_3a from Brain-Computer Interface (BCI) Competition III was used in the experiment. The experiment showed that this approach can decompose raw EEG signals into spatial patterns extracted from multi-class single-trial EEG, and can obtain good classification results.

  15. 37 CFR 6.2 - Prior U.S. schedule of classes of goods and services.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... classes of goods and services. 6.2 Section 6.2 Patents, Trademarks, and Copyrights UNITED STATES PATENT AND TRADEMARK OFFICE, DEPARTMENT OF COMMERCE CLASSIFICATION OF GOODS AND SERVICES UNDER THE TRADEMARK ACT § 6.2 Prior U.S. schedule of classes of goods and services. Class Title Goods 1 Raw or partly...

  16. 28 CFR 523.15 - Camp or farm good time.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.15 Camp or farm good time. An inmate assigned to a farm or camp is automatically awarded extra good time, beginning on the date of commitment to... 28 Judicial Administration 2 2014-07-01 2014-07-01 false Camp or farm good time. 523.15 Section...

  17. 28 CFR 523.15 - Camp or farm good time.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.15 Camp or farm good time. An inmate assigned to a farm or camp is automatically awarded extra good time, beginning on the date of commitment to... 28 Judicial Administration 2 2013-07-01 2013-07-01 false Camp or farm good time. 523.15 Section...

  18. 28 CFR 523.15 - Camp or farm good time.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.15 Camp or farm good time. An inmate assigned to a farm or camp is automatically awarded extra good time, beginning on the date of commitment to... 28 Judicial Administration 2 2012-07-01 2012-07-01 false Camp or farm good time. 523.15 Section...

  19. 28 CFR 523.15 - Camp or farm good time.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.15 Camp or farm good time. An inmate assigned to a farm or camp is automatically awarded extra good time, beginning on the date of commitment to... 28 Judicial Administration 2 2010-07-01 2010-07-01 false Camp or farm good time. 523.15 Section...

  20. 28 CFR 523.15 - Camp or farm good time.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Extra Good Time § 523.15 Camp or farm good time. An inmate assigned to a farm or camp is automatically awarded extra good time, beginning on the date of commitment to... 28 Judicial Administration 2 2011-07-01 2011-07-01 false Camp or farm good time. 523.15 Section...

  1. Vehicle detection in aerial surveillance using dynamic Bayesian networks.

    PubMed

    Cheng, Hsu-Yung; Weng, Chih-Chia; Chen, Yi-Ying

    2012-04-01

    We present an automatic vehicle detection system for aerial surveillance in this paper. In this system, we depart from the stereotypical existing frameworks for vehicle detection in aerial surveillance, which are either region based or sliding window based, and design a pixelwise classification method for vehicle detection. The novelty lies in the fact that, in spite of performing pixelwise classification, relations among neighboring pixels in a region are preserved in the feature extraction process. We consider features including vehicle colors and local features. For vehicle color extraction, we utilize a color transform to separate vehicle colors and nonvehicle colors effectively. For edge detection, we apply moment preserving to adjust the thresholds of the Canny edge detector automatically, which increases the adaptability and the accuracy of detection in various aerial images. Afterward, a dynamic Bayesian network (DBN) is constructed for the classification purpose. We convert regional local features into quantitative observations that can be referenced when applying pixelwise classification via the DBN. Experiments were conducted on a wide variety of aerial videos. The results demonstrate the flexibility and good generalization ability of the proposed method on a challenging data set with aerial surveillance images taken at different heights and under different camera angles.

  2. Semantic Segmentation of Convolutional Neural Network for Supervised Classification of Multispectral Remote Sensing

    NASA Astrophysics Data System (ADS)

    Xue, L.; Liu, C.; Wu, Y.; Li, H.

    2018-04-01

    Semantic segmentation is fundamental research in remote sensing image processing. Because of the complex maritime environment, the classification of roads, vegetation, buildings and water from remote sensing imagery is a challenging task. Although neural networks have achieved excellent performance in semantic segmentation in recent years, there are few works using CNNs for ground object segmentation, and the results could be further improved. This paper used a convolutional neural network named U-Net, whose structure has a contracting path and an expansive path to get high-resolution output. In the network, we added BN layers, which are more conducive to the backward pass. Moreover, after the upsampling convolution, we added dropout layers to prevent overfitting. Together, these changes yield more precise segmentation results. To verify this network architecture, we used a Kaggle dataset. Experimental results show that U-Net achieved good performance compared with other architectures, especially on high-resolution remote sensing imagery.

  3. HMM for hyperspectral spectrum representation and classification with endmember entropy vectors

    NASA Astrophysics Data System (ADS)

    Arabi, Samir Y. W.; Fernandes, David; Pizarro, Marco A.

    2015-10-01

    Hyperspectral images, thanks to their good spectral resolution, are extensively used for classification, but their high number of bands requires a higher bandwidth for data transmission, higher data storage capability and higher computational capability in processing systems. This work presents a new methodology for hyperspectral data classification that can work with a reduced number of spectral bands and achieve good results, comparable with processing methods that require all hyperspectral bands. The proposed method for hyperspectral spectrum classification is based on the Hidden Markov Model (HMM) associated with each endmember (EM) of a scene and the conditional probabilities of each EM belonging to each other EM. The EM conditional probabilities are transformed into an EM entropy vector, and those vectors are used as reference vectors for the classes in the scene. The conditional probabilities of a spectrum to be classified are likewise transformed into an entropy vector, which is assigned to a class by the minimum Euclidean distance (ED) between it and the EM entropy vectors. The methodology was tested with good results using AVIRIS spectra of a scene with 13 EMs, considering the full 209 bands and reduced sets of 128, 64 and 32 spectral bands. For the test area, it is shown that only 32 spectral bands can be used instead of the original 209 without significant loss in the classification process.
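    The final decision step described here is a nearest-reference rule: a conditional-probability profile is reduced to a Shannon-entropy vector and assigned to the EM whose reference entropy vector is closest in Euclidean distance. A loose sketch of just that step (function names are assumed; the HMM stage that produces the conditional probabilities is omitted):

```python
import math

def entropy_vector(prob_rows):
    """Shannon entropy of each row of conditional probabilities (one row per EM)."""
    return [-sum(p * math.log(p) for p in row if p > 0.0) for row in prob_rows]

def classify_min_ed(entropy_vec, reference_vecs):
    """Index of the reference entropy vector closest in Euclidean distance."""
    def ed(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    distances = [ed(entropy_vec, ref) for ref in reference_vecs]
    return distances.index(min(distances))
```

    The entropy transform compresses each probability row to a single scalar, which is what allows classification to proceed with far fewer bands than the original spectra.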

  4. Change Detection Algorithms for Surveillance in Visual IoT: A Comparative Study

    NASA Astrophysics Data System (ADS)

    Akram, Beenish Ayesha; Zafar, Amna; Akbar, Ali Hammad; Wajid, Bilal; Chaudhry, Shafique Ahmad

    2018-01-01

    The VIoT (Visual Internet of Things) connects the virtual information world with real-world objects using sensors and pervasive computing. For video surveillance in VIoT, ChD (Change Detection) is a critical component. ChD algorithms identify regions of change in multiple images of the same scene recorded at different time intervals. This paper presents a performance comparison of histogram-thresholding and classification ChD algorithms using quantitative measures for video surveillance in VIoT, based on salient features of the datasets. The thresholding algorithms Otsu, Kapur and Rosin and the classification methods k-means and EM (Expectation Maximization) were simulated in MATLAB using diverse datasets. For performance evaluation, the quantitative measures used include OSR (Overall Success Rate), YC (Yule's Coefficient), JC (Jaccard's Coefficient), execution time and memory consumption. Experimental results showed that Kapur's algorithm performed better for both indoor and outdoor environments with illumination changes, shadowing, and medium to fast moving objects. However, it showed degraded performance for small object sizes with minor changes. The Otsu algorithm showed better results for indoor environments with slow to medium changes and nomadic object mobility. k-means showed good results in indoor environments with small object sizes producing slow change, no shadowing and scarce illumination changes.
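    Histogram-thresholding change detection of this kind reduces to picking a cut on the frame-difference values; Otsu's method chooses the cut that maximizes between-class variance. A compact sketch of that criterion (pure Python, with an assumed bin count of 256; not the paper's MATLAB implementation):

```python
def otsu_threshold(values, bins=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    lo, hi = min(values), max(values)
    hist = [0] * bins
    for v in values:
        b = min(int((v - lo) / (hi - lo + 1e-12) * bins), bins - 1)
        hist[b] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0, sum0 = 0, 0.0
    best_t, best_var = 0, -1.0
    for t in range(bins):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    # Return the upper edge of the best bin on the original value scale.
    return lo + (best_t + 1) * (hi - lo) / bins
```

    Applied to the absolute pixel differences between two frames, `diff > otsu_threshold(diff)` yields a binary change mask of the kind these ChD algorithms are compared on.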

  5. Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2015-01-01

    This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.
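    For the performance-estimation metric, the report minimizes the theoretical sum of squared estimation errors over candidate sensor suites, which for a linear MAP estimator equals the trace of the posterior covariance. A hedged sketch of the exhaustive search (matrix shapes, function names and the diagonal-noise assumption are mine, not the report's):

```python
import itertools
import numpy as np

def posterior_trace(H, noise_var, P0):
    """Trace of the MAP posterior covariance: the theoretical sum of squared
    estimation errors for health parameters with prior covariance P0.
    H: sensor observation rows; noise_var: per-sensor noise variances."""
    R_inv = np.diag(1.0 / noise_var)
    P = np.linalg.inv(H.T @ R_inv @ H + np.linalg.inv(P0))
    return np.trace(P)

def best_sensor_suite(H_all, noise_all, P0, baseline, optional, n_extra):
    """Exhaustively score the baseline suite plus every n_extra-subset of
    optional sensors, returning the best subset and its error metric."""
    best_combo, best_metric = None, np.inf
    for combo in itertools.combinations(optional, n_extra):
        idx = list(baseline) + list(combo)
        metric = posterior_trace(H_all[idx], noise_all[idx], P0)
        if metric < best_metric:
            best_combo, best_metric = combo, metric
    return best_combo, best_metric
```

    Since the metric is computed from the linear model alone, the search needs no Monte Carlo simulation; simulation only enters afterwards to confirm the theoretical accuracies.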

  6. Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2016-01-01

    This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.

  7. Classification via Clustering for Predicting Final Marks Based on Student Participation in Forums

    ERIC Educational Resources Information Center

    Lopez, M. I.; Luna, J. M.; Romero, C.; Ventura, S.

    2012-01-01

    This paper proposes a classification via clustering approach to predict the final marks in a university course on the basis of forum data. The objective is twofold: to determine if student participation in the course forum can be a good predictor of the final marks for the course and to examine whether the proposed classification via clustering…

  8. [Research of electroencephalography representational emotion recognition based on deep belief networks].

    PubMed

    Yang, Hao; Zhang, Junran; Jiang, Xiaomei; Liu, Fei

    2018-04-01

    In recent years, with the rapid development of machine learning techniques, deep learning algorithms have been widely used in one-dimensional physiological signal processing. In this paper we used electroencephalography (EEG) signals with a deep belief network (DBN) model, built in open-source deep learning frameworks, to identify emotional states (positive, negative and neutral); the results of the DBN were then compared with a support vector machine (SVM). The EEG signals were collected from subjects under different emotional stimuli, and DBN and SVM were adopted to identify the EEG signals across different characteristics and different frequency bands. We found that the average accuracy of the differential entropy (DE) feature by DBN is 89.12%±6.54%, a better performance than previous research based on the same data set. At the same time, the classification results of the DBN are better than those of the traditional SVM (average classification accuracy of 84.2%±9.24%), with a better trend in both accuracy and stability. In three experiments at different time points, a single subject achieved consistent classification results using the DBN (mean standard deviation 1.44%), and the experimental results show that the system has steady performance and good repeatability. According to our research, the DE characteristic gives a better classification result than the other characteristics. Furthermore, the Beta and Gamma bands in the emotional recognition model have higher classification accuracy. In summary, classifier performance improves with the deep learning algorithm, which provides a reference for establishing a more accurate system of emotional recognition. Meanwhile, we can trace the recognition results to find the brain regions and frequency bands related to the emotions, which can help us understand the emotional mechanism better. This study has high academic value and practical significance, so further investigation still needs to be done.
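    The differential entropy (DE) feature referenced above has a closed form when a band-passed EEG segment is modeled as Gaussian: DE = ½ ln(2πeσ²). A small sketch of that computation (the Gaussian assumption and the function name are illustrative):

```python
import math

def differential_entropy(samples):
    """DE of a band-passed signal segment modeled as Gaussian:
    0.5 * ln(2 * pi * e * sigma^2), with sigma^2 the sample variance."""
    n = len(samples)
    mean = sum(samples) / n
    variance = sum((x - mean) ** 2 for x in samples) / n
    return 0.5 * math.log(2.0 * math.pi * math.e * variance)
```

    In a pipeline like the one described, this scalar would be computed per channel and per frequency band (e.g. Beta, Gamma) and the resulting vectors fed to the DBN or SVM.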

  9. Good IR duals of bad quiver theories

    NASA Astrophysics Data System (ADS)

    Dey, Anindya; Koroteev, Peter

    2018-05-01

    The infrared dynamics of generic 3d N = 4 bad theories (as per the good-bad-ugly classification of Gaiotto and Witten) are poorly understood. Examples of such theories with a single unitary gauge group and fundamental flavors have been studied recently, and the low energy effective theory around some special point in the Coulomb branch was shown to have a description in terms of a good theory and a certain number of free hypermultiplets. A classification of possible infrared fixed points for bad theories by Bashkirov, based on unitarity constraints and superconformal symmetry, suggests a much richer set of possibilities for the IR behavior, although explicit examples were not known. In this note, we present a specific example of a bad quiver gauge theory which admits a good IR description on a sublocus of its Coulomb branch. The good description in question consists of two decoupled quiver gauge theories with no free hypermultiplets.

  10. Continuous vehicle classification data : how good is it?

    DOT National Transportation Integrated Search

    2000-08-01

    Florida has a lengthy history of trying to obtain continuous vehicle classification data. They installed their first piezoelectric axle sensors at a continuous count site in October of 1988. At that time, they had 86 continuous count sites operating ...

  11. Survey Definitions of Gout for Epidemiologic Studies: Comparison With Crystal Identification as the Gold Standard.

    PubMed

    Dalbeth, Nicola; Schumacher, H Ralph; Fransen, Jaap; Neogi, Tuhina; Jansen, Tim L; Brown, Melanie; Louthrenoo, Worawit; Vazquez-Mellado, Janitzia; Eliseev, Maxim; McCarthy, Geraldine; Stamp, Lisa K; Perez-Ruiz, Fernando; Sivera, Francisca; Ea, Hang-Korng; Gerritsen, Martijn; Scire, Carlo A; Cavagna, Lorenzo; Lin, Chingtsai; Chou, Yin-Yi; Tausche, Anne-Kathrin; da Rocha Castelar-Pinheiro, Geraldo; Janssen, Matthijs; Chen, Jiunn-Horng; Cimmino, Marco A; Uhlig, Till; Taylor, William J

    2016-12-01

    To identify the best-performing survey definition of gout from items commonly available in epidemiologic studies. Survey definitions of gout were identified from 34 epidemiologic studies contributing to the Global Urate Genetics Consortium (GUGC) genome-wide association study. Data from the Study for Updated Gout Classification Criteria (SUGAR) were randomly divided into development and test data sets. A data-driven case definition was formed using logistic regression in the development data set. This definition, along with definitions used in GUGC studies and the 2015 American College of Rheumatology (ACR)/European League Against Rheumatism (EULAR) gout classification criteria were applied to the test data set, using monosodium urate crystal identification as the gold standard. For all tested GUGC definitions, the simple definition of "self-report of gout or urate-lowering therapy use" had the best test performance characteristics (sensitivity 82%, specificity 72%). The simple definition had similar performance to a SUGAR data-driven case definition with 5 weighted items: self-report, self-report of doctor diagnosis, colchicine use, urate-lowering therapy use, and hyperuricemia (sensitivity 87%, specificity 70%). Both of these definitions performed better than the 1977 American Rheumatism Association survey criteria (sensitivity 82%, specificity 67%). Of all tested definitions, the 2015 ACR/EULAR criteria had the best performance (sensitivity 92%, specificity 89%). A simple definition of "self-report of gout or urate-lowering therapy use" has the best test performance characteristics of existing definitions that use routinely available data. A more complex combination of features is more sensitive, but still lacks good specificity. If a more accurate case definition is required for a particular study, the 2015 ACR/EULAR gout classification criteria should be considered. © 2016, American College of Rheumatology.

  12. Possible world based consistency learning model for clustering and classifying uncertain data.

    PubMed

    Liu, Han; Zhang, Xianchao; Zhang, Xiaotong

    2018-06-01

    Possible worlds have been shown to be effective for handling various types of data uncertainty in uncertain data management. However, few uncertain data clustering and classification algorithms have been proposed based on possible worlds. Moreover, existing possible world based algorithms suffer from the following issues: (1) they deal with each possible world independently and ignore the consistency principle across different possible worlds; (2) they require an extra post-processing procedure to obtain the final result, which means that their effectiveness relies heavily on the post-processing method and their efficiency is also not very good. In this paper, we propose a novel possible world based consistency learning model for uncertain data, which can be extended both for clustering and for classifying uncertain data. This model utilizes the consistency principle to learn a consensus affinity matrix for uncertain data, which makes full use of the information across different possible worlds and thereby improves clustering and classification performance. Meanwhile, the model imposes a new rank constraint on the Laplacian matrix of the consensus affinity matrix, ensuring that the number of connected components in the consensus affinity matrix is exactly equal to the number of classes. This also means that the clustering and classification results can be obtained directly, without any post-processing procedure. Furthermore, for the clustering and classification tasks, we respectively derive efficient optimization methods to solve the proposed model. Experimental results on real benchmark datasets and real-world uncertain datasets show that the proposed model outperforms state-of-the-art uncertain data clustering and classification algorithms in effectiveness and performs competitively in efficiency. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Measuring health system responsiveness at facility level in Ethiopia: performance, correlates and implications.

    PubMed

    Yakob, Bereket; Ncama, Busisiwe Purity

    2017-04-11

    Health system responsiveness (HSR) measures the non-health aspects of care relating to the environment and the way healthcare is provided to clients. This study measured HSR performance and its correlates for HIV/AIDS treatment and care services in the Wolaita Zone of Ethiopia. A cross-sectional survey across seven responsiveness domains (attention, autonomy, amenities of care, choice, communication, confidentiality and respect) was conducted among 492 people using pre-ART and ART care. The Likert scale categories were allocated percentages for analysis and classified as unacceptable (Fail) or acceptable (Good and Very Good) performance. Of the 452 (91.9%) who participated, 205 (45.4%) and 247 (54.6%) were from health centers and a hospital, respectively, and 375 (83.0%) and 77 (17.0%) were on ART and pre-ART care, respectively. A range of response classifications was reported for each domain, with Fail performance highest for the choice (48.4%), attention (45.5%) and autonomy (22.7%) domains. The communication (64.2%), amenities (61.4%), attention (51.4%) and confidentiality (50.1%) domains scored highest in the Good performance category. Only the respect domain (54.0%) scored highest in the Very Good category, where attention (3.1%), amenities (4.7%) and choice (12.4%) scored very low. Respect (5.1%), confidentiality (7.6%) and communication (14.7%) showed low proportions of Fail performance. Of the responsiveness percent scores (RPS), 10.4% and 6.9% fell in the Fail and Very Good categories, respectively, while the rest (82.7%) fell in the Good category. In the multivariate analysis, a unit increase in the perceived quality of care, satisfaction with the services, and financial fairness scores resulted in 0.27% (p < 0.001), 0.48% (p < 0.001) and 0.48% (p < 0.001) increases in the RPS, respectively. Conversely, visiting a traditional medicine practitioner before formal HIV care was associated with a 2.1% decrease in the RPS.
The health facilities performed poorly on the autonomy, choice, attention and amenities domains, whereas the overall RPS masked these weaknesses and strengths and suggested good overall performance. Domain-specific responsiveness scores are therefore a better way of measuring responsiveness. Improving quality of care, client satisfaction and financial fairness would be important interventions to improve responsiveness performance.

  14. Deep Recurrent Neural Networks for Supernovae Classification

    NASA Astrophysics Data System (ADS)

    Charnock, Tom; Moss, Adam

    2017-03-01

    We apply deep recurrent neural networks, which are capable of learning complex sequential information, to classify supernovae (code available at https://github.com/adammoss/supernovae). The observational time and filter fluxes are used as inputs to the network; since the inputs are agnostic, additional data such as host galaxy information can also be included. Using the Supernovae Photometric Classification Challenge (SPCC) data, we find that deep networks are capable of learning about light curves, though the performance of the network is highly sensitive to the amount of training data. For a training size of 50% of the representative SPCC data set (around 10^4 supernovae), we obtain a type-Ia versus non-type-Ia classification accuracy of 94.7%, an area under the Receiver Operating Characteristic curve (AUC) of 0.986, and an SPCC figure of merit F1 = 0.64. Using only the data for the early-epoch challenge defined by the SPCC, we achieve a classification accuracy of 93.1%, an AUC of 0.977, and F1 = 0.58, almost as good as with the whole light curve. By employing bidirectional neural networks, we obtain strong classification results between supernova types I, II and III, with an accuracy of 90.4% and an AUC of 0.974. We also apply a pre-trained model to obtain classification probabilities as a function of time and show that it can give early indications of supernova type. Our method is competitive with existing algorithms and has applications for future large-scale photometric surveys.
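
    To make the sequence-to-class mapping concrete, here is a minimal vanilla recurrent cell in numpy with random, untrained weights. The actual work uses trained deep bidirectional LSTM/GRU networks; the input layout (time, flux) and the layer sizes below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)

def rnn_forward(seq, Wx, Wh, b, Wo, bo):
    """Vanilla recurrent cell over a (T, features) light-curve sequence;
    the final hidden state feeds a softmax over supernova classes."""
    h = np.zeros(Wh.shape[0])
    for x in seq:                        # one step per observation epoch
        h = np.tanh(Wx @ x + Wh @ h + b)
    logits = Wo @ h + bo
    e = np.exp(logits - logits.max())
    return e / e.sum()                   # class probabilities

# Toy light curve: columns = (time, flux in one filter); 3 illustrative classes
T, n_in, n_hid, n_cls = 12, 2, 8, 3
seq = np.column_stack([np.linspace(0, 1, T), np.exp(-np.linspace(0, 3, T))])
Wx = rng.normal(0, 0.3, (n_hid, n_in))
Wh = rng.normal(0, 0.3, (n_hid, n_hid))
Wo = rng.normal(0, 0.3, (n_cls, n_hid))
p = rnn_forward(seq, Wx, Wh, np.zeros(n_hid), Wo, np.zeros(n_cls))
```

    Because the hidden state is updated one observation at a time, the same forward pass can be stopped early to give class probabilities as a function of time, which is how early-epoch indications arise.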

  15. Optical diagnosis of cervical cancer by intrinsic mode functions

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Sabyasachi; Pratiher, Sawon; Pratiher, Souvik; Pradhan, Asima; Ghosh, Nirmalya; Panigrahi, Prasanta K.

    2017-03-01

    In this paper, we make use of empirical mode decomposition (EMD) to discriminate cervical cancer tissues from normal ones based on elastic scattering spectroscopy. The phase space is reconstructed by decomposing the optical signal into a finite set of band-limited signals known as intrinsic mode functions (IMFs). It is shown that the area measure of the analytic IMFs provides good discrimination performance. Simulation results validate the efficacy of the IMF features followed by SVM-based classification.
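
    The area measure on an analytic IMF can be sketched as follows. This assumes an FFT-based Hilbert transform and the shoelace formula over the analytic-signal trajectory in the complex plane, which is one common way to define such an area measure, not necessarily the authors' exact definition.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (a numpy-only Hilbert transform)."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1
    if N % 2 == 0:
        h[N // 2] = 1
        h[1:N // 2] = 2
    else:
        h[1:(N + 1) // 2] = 2
    return np.fft.ifft(X * h)

def imf_area(imf):
    """Area swept by the analytic-signal trajectory in the complex plane
    (shoelace formula over the real/imaginary parts)."""
    z = analytic_signal(imf)
    x, y = z.real, z.imag
    return 0.5 * abs(np.sum(x * np.roll(y, -1) - y * np.roll(x, -1)))

t = np.linspace(0, 1, 1000, endpoint=False)
a1 = imf_area(np.sin(2 * np.pi * 5 * t))       # unit-amplitude oscillation
a2 = imf_area(2 * np.sin(2 * np.pi * 5 * t))   # doubled amplitude -> ~4x area
```

    For a pure sinusoid the analytic signal traces a circle, so the area scales with the square of the amplitude, which is why such a measure reacts to intensity differences between tissue spectra.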

  16. ANALYTiC: An Active Learning System for Trajectory Classification.

    PubMed

    Soares Junior, Amilcar; Renso, Chiara; Matwin, Stan

    2017-01-01

    The increasing availability and use of positioning devices have resulted in large volumes of trajectory data. However, semantic annotations for such data are typically added by domain experts, which is a time-consuming task. Machine-learning algorithms can help infer semantic annotations from trajectory data by learning from sets of labeled data. Specifically, active learning approaches can minimize the set of trajectories to be annotated while preserving good performance measures. The ANALYTiC web-based interactive tool visually guides users through this annotation process.
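
    A minimal uncertainty-sampling loop of the kind such a tool builds on might look like this. The nearest-centroid learner and the two-feature trajectory representation are stand-ins for illustration; ANALYTiC's actual classifiers and features are not specified here.

```python
import numpy as np

rng = np.random.default_rng(2)

def centroid_predict(X, centroids):
    """Distances to each class centroid and the nearest-centroid label."""
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d.argmin(1), d

def margin(d):
    """Difference between the two smallest distances: small = uncertain."""
    s = np.sort(d, axis=1)
    return s[:, 1] - s[:, 0]

# Toy trajectory features (e.g. mean speed, straightness), two movement classes
X = np.vstack([rng.normal(0, 0.5, (30, 2)), rng.normal(3, 0.5, (30, 2))])
y_true = np.array([0] * 30 + [1] * 30)

labeled = [0, 30]                       # start with one labeled example per class
pool = [i for i in range(60) if i not in labeled]
for _ in range(5):                      # query budget: 5 annotations
    cents = np.array([X[[i for i in labeled if y_true[i] == c]].mean(0)
                      for c in (0, 1)])
    _, d = centroid_predict(X[pool], cents)
    pick = pool.pop(int(margin(d).argmin()))   # most uncertain trajectory
    labeled.append(pick)                       # the "expert" annotates it
cents = np.array([X[[i for i in labeled if y_true[i] == c]].mean(0) for c in (0, 1)])
pred, _ = centroid_predict(X, cents)
accuracy = (pred == y_true).mean()
```

    The point of the loop is that only 7 of the 60 trajectories ever get labeled, yet the classifier trained on them generalizes to the whole set.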

  17. An artificial intelligence approach to classify and analyse EEG traces.

    PubMed

    Castellaro, C; Favaro, G; Castellaro, A; Casagrande, A; Castellaro, S; Puthenparampil, D V; Salimbeni, C Fattorello

    2002-06-01

    We present a fully automatic system for the classification and analysis of adult electroencephalograms (EEGs). The system is based on an artificial neural network that classifies single epochs of the trace, and on an expert system (ES) that studies the temporal and spatial correlation among the outputs of the neural network and compiles a final report. On the most recent 2,000 EEGs, representing different kinds of alterations according to clinical occurrence, the system produced 80% good or very good final comments and 18% sufficient comments; these comments constitute the documents delivered to the patient. In the remaining 2%, the automatic comment needed some modification before being presented to the patient. No clinical false-negative classifications arose, i.e., no altered traces were classified as 'normal' by the neural network. The analysis method we describe is based on the interpretation of objective measures performed on the trace. It can improve the quality and reliability of the EEG exam and appears useful for EEG medical reports, although it cannot fully substitute for the medical doctor, who should read the automatic EEG analysis in light of the patient's history and age.

  18. Hierarchical classification strategy for Phenotype extraction from epidermal growth factor receptor endocytosis screening.

    PubMed

    Cao, Lu; Graauw, Marjo de; Yan, Kuan; Winkel, Leah; Verbeek, Fons J

    2016-05-03

    Endocytosis is regarded as a mechanism for attenuating epidermal growth factor receptor (EGFR) signaling and for receptor degradation. Increasing evidence shows that breast cancer progression is associated with defective EGFR endocytosis. To find related ribonucleic acid (RNA) regulators in this process, high-throughput imaging with fluorescent markers is used to visualize the complex EGFR endocytosis process. A dedicated automatic image and data analysis system is then developed and applied to extract phenotype measurements and distinguish different developmental episodes from the large number of images acquired through high-throughput imaging. In the image analysis, a phenotype measurement quantifies the important image information into distinct features or measurements; the manner in which prominent measurements are chosen to represent the dynamics of the EGFR process is therefore a crucial step in identifying the phenotype. In the subsequent data analysis, classification categorizes each observation using all the prominent measurements obtained from image analysis, so a better classification strategy raises the performance of the whole image and data analysis system. In this paper, we illustrate an integrated analysis method for EGFR signalling through image analysis of microscopy images. Sophisticated wavelet-based texture measurements are used to obtain a good description of the characteristic stages of EGFR signalling, and a hierarchical classification strategy is designed to improve the recognition of phenotypic episodes of EGFR during endocytosis. Different strategies for normalization, feature selection and classification are evaluated.
The performance assessment clearly demonstrates that our hierarchical classification scheme, combined with a selected set of features, provides a notable improvement in the temporal analysis of EGFR endocytosis, and that the addition of the wavelet-based texture features contributes to this improvement. Our workflow can be applied in drug discovery to analyze defective EGFR endocytosis processes.
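
    The idea of a hierarchical strategy, a coarse classifier routing each sample to a group and a per-group classifier assigning the fine label, can be sketched with stand-in nearest-centroid stages; the paper evaluates richer classifiers and wavelet features, so everything below is an illustrative simplification.

```python
import numpy as np

rng = np.random.default_rng(3)

def nearest(x, centroids):
    """Index of the closest centroid to sample x."""
    return int(((centroids - x) ** 2).sum(1).argmin())

class HierarchicalClassifier:
    """Two-stage scheme: coarse group first, fine phenotype label second."""
    def fit(self, X, coarse, fine):
        self.cc = np.array([X[coarse == g].mean(0)
                            for g in range(coarse.max() + 1)])
        self.fc = {g: {f: X[(coarse == g) & (fine == f)].mean(0)
                       for f in np.unique(fine[coarse == g])}
                   for g in range(coarse.max() + 1)}
        return self
    def predict(self, x):
        g = nearest(x, self.cc)                       # stage 1: coarse group
        fs = list(self.fc[g])
        return fs[nearest(x, np.array([self.fc[g][f] for f in fs]))]  # stage 2

# Four fine phenotype classes nested in two coarse groups
centers = np.array([[0, 0], [0, 2], [6, 0], [6, 2]])
fine = np.repeat(np.arange(4), 20)
coarse = fine // 2
X = centers[fine] + rng.normal(0, 0.3, (80, 2))
clf = HierarchicalClassifier().fit(X, coarse, fine)
acc = np.mean([clf.predict(x) == f for x, f in zip(X, fine)])
```

    Splitting the decision this way lets each stage work on an easier sub-problem, which is the motivation for hierarchical schemes when fine classes are only confusable within a group.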

  19. Sub-classification of Advanced-Stage Hepatocellular Carcinoma: A Cohort Study Including 612 Patients Treated with Sorafenib.

    PubMed

    Yoo, Jeong-Ju; Chung, Goh Eun; Lee, Jeong-Hoon; Nam, Joon Yeul; Chang, Young; Lee, Jeong Min; Lee, Dong Ho; Kim, Hwi Young; Cho, Eun Ju; Yu, Su Jong; Kim, Yoon Jun; Yoon, Jung-Hwan

    2018-04-01

    Advanced hepatocellular carcinoma (HCC) is associated with various clinical conditions, including major vessel invasion, metastasis, and poor performance status. The aim of this study was to establish a prognostic scoring system and to propose a sub-classification of Barcelona Clinic Liver Cancer (BCLC) stage C. This retrospective study included consecutive patients who received sorafenib for BCLC stage C HCC at a single tertiary hospital in Korea. A Cox proportional hazards model was used to develop a scoring system, and internal validation was performed by 5-fold cross-validation. The performance of the model in predicting risk was assessed by the area under the curve and the Hosmer-Lemeshow test. A total of 612 BCLC stage C HCC patients were sub-classified into strata depending on their performance status. Five independent prognostic factors (Child-Pugh score, α-fetoprotein, tumor type, extrahepatic metastasis, and portal vein invasion) were identified and used in the prognostic scoring system. This scoring system showed good discrimination (area under the receiver operating characteristic curve, 0.734 to 0.818) and calibration (both p < 0.05 by the Hosmer-Lemeshow test at 1 month and 12 months, respectively). The differences in survival among the risk groups classified by total score were significant (p < 0.001 by the log-rank test in both the Eastern Cooperative Oncology Group 0 and 1 strata). The heterogeneity of patients with BCLC stage C HCC requires sub-classification of advanced HCC, and a prognostic scoring system with five independent factors is useful in predicting the survival of patients with BCLC stage C HCC.
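
    The general shape of a point-based prognostic score can be illustrated as below. The binary coding of the five factors, the point weights, and the risk-group cut-offs are all invented for this sketch and are not the published model.

```python
# Illustrative only: weights and cut-offs below are hypothetical, not the
# published scoring system.
def prognostic_score(child_pugh_b, afp_high, infiltrative_tumor,
                     metastasis, pv_invasion):
    """Sum points for five binary prognostic factors, then stratify."""
    pts = (2 * child_pugh_b + 1 * afp_high + 2 * infiltrative_tumor
           + 1 * metastasis + 2 * pv_invasion)
    if pts <= 2:
        return pts, "low risk"
    if pts <= 5:
        return pts, "intermediate risk"
    return pts, "high risk"

score, group = prognostic_score(1, 1, 0, 1, 0)   # -> (4, "intermediate risk")
```

    Scores of this form are attractive clinically because they reduce a multivariable Cox model to addition a physician can do at the bedside, at the cost of discarding some of the model's granularity.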

  20. Research on bearing fault diagnosis of large machinery based on mathematical morphology

    NASA Astrophysics Data System (ADS)

    Wang, Yu

    2018-04-01

    To study automatic fault diagnosis in large machinery, a support vector machine (SVM) is used to classify and identify four common fault types. Extracted feature vectors are fed to the SVM, which is trained and applied using a multi-classification method, and the optimal SVM parameters are found by trial and error combined with cross-validation. The SVM is then compared with a BP neural network. The results show that the SVM requires less time and achieves higher classification accuracy, making it better suited to fault diagnosis research in large machinery: the training of support vector machines is fast and their performance is good.
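
    The trial-and-error parameter search with cross-validation can be sketched generically. Here a k-nearest-neighbour stand-in is tuned over a small grid, since reproducing an SVM in a short example is impractical; the search pattern (evaluate each candidate by k-fold cross-validated accuracy, keep the best) is the same one used to pick SVM parameters.

```python
import numpy as np

rng = np.random.default_rng(4)

def knn_predict(Xtr, ytr, Xte, k):
    """Majority vote among the k nearest training samples."""
    d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(v).argmax() for v in ytr[idx]])

def cv_score(X, y, k, folds=5):
    """Mean accuracy over `folds` cross-validation splits."""
    perm = rng.permutation(len(y))
    accs = []
    for f in range(folds):
        te = perm[f::folds]
        tr = np.setdiff1d(perm, te)
        accs.append((knn_predict(X[tr], y[tr], X[te], k) == y[te]).mean())
    return float(np.mean(accs))

# Toy vibration features for two fault classes
X = np.vstack([rng.normal(0, 1, (40, 3)), rng.normal(2.5, 1, (40, 3))])
y = np.array([0] * 40 + [1] * 40)
grid = [1, 3, 5, 7]
best_k = max(grid, key=lambda k: cv_score(X, y, k))   # trial-and-error search
```

    For an SVM the grid would run over (C, gamma) instead of k, but the cross-validation machinery is unchanged.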

  1. Image classification at low light levels

    NASA Astrophysics Data System (ADS)

    Wernick, Miles N.; Morris, G. Michael

    1986-12-01

    An imaging photon-counting detector is used to achieve automatic sorting of two image classes. The classification decision is formed on the basis of the cross correlation between a photon-limited input image and a reference function stored in computer memory. Expressions for the statistical parameters of the low-light-level correlation signal are given and verified experimentally. To obtain a correlation-based system for two-class sorting, it is necessary to construct a reference function that produces useful information for class discrimination; an expression for such a reference function is derived using maximum-likelihood decision theory. Theoretically predicted results are used to compare the performance of the maximum-likelihood reference function with Fukunaga-Koontz basis vectors and average filters. For each method, good class discrimination is found to result in milliseconds from a sparse sampling of the input image.
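
    The photon-limited correlation decision can be illustrated as follows, with toy image classes and a simple class-difference reference function standing in for the maximum-likelihood reference derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def photon_image(intensity, n_photons):
    """Photon-limited detection: n_photons arrivals distributed over pixels
    according to the normalized intensity pattern."""
    p = intensity.ravel() / intensity.sum()
    return rng.multinomial(n_photons, p).reshape(intensity.shape)

# Two toy 8x8 image classes: horizontal vs vertical brightness ramp
g = np.linspace(0.1, 1.0, 8)
classA = np.tile(g, (8, 1))
classB = classA.T
ref = classA - classB              # stand-in two-class reference function

def classify(img):
    """Class decision from the sign of the correlation with the reference."""
    return "A" if (img * ref).sum() > 0 else "B"

preds = [classify(photon_image(classA, 2000)) for _ in range(5)] + \
        [classify(photon_image(classB, 2000)) for _ in range(5)]
```

    The correlation is just a sum of reference values at photon arrival positions, so the decision can be formed incrementally as photons arrive, which is why sparse sampling suffices.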

  2. P300 Chinese input system based on Bayesian LDA.

    PubMed

    Jin, Jing; Allison, Brendan Z; Brunner, Clemens; Wang, Bei; Wang, Xingyu; Zhang, Jianhua; Neuper, Christa; Pfurtscheller, Gert

    2010-02-01

    A brain-computer interface (BCI) is a new communication channel between humans and computers that translates brain activity into recognizable command and control signals. Attended events can evoke P300 potentials in the electroencephalogram. Hence, the P300 has been used in BCI systems to spell, control cursors or robotic devices, and other tasks. This paper introduces a novel P300 BCI to communicate Chinese characters. To improve classification accuracy, an optimization algorithm (particle swarm optimization, PSO) is used for channel selection (i.e., identifying the best electrode configuration). The effects of different electrode configurations on classification accuracy were tested by Bayesian linear discriminant analysis offline. The offline results from 11 subjects show that this new P300 BCI can effectively communicate Chinese characters and that the features extracted from the electrodes obtained by PSO yield good performance.

  3. Automatic tissue characterization from ultrasound imagery

    NASA Astrophysics Data System (ADS)

    Kadah, Yasser M.; Farag, Aly A.; Youssef, Abou-Bakr M.; Badawi, Ahmed M.

    1993-08-01

    In this work, feature extraction algorithms are proposed to extract the tissue characterization parameters from liver images. Then the resulting parameter set is further processed to obtain the minimum number of parameters representing the most discriminating pattern space for classification. This preprocessing step was applied to over 120 pathology-investigated cases to obtain the learning data for designing the classifier. The extracted features are divided into independent training and test sets and are used to construct both statistical and neural classifiers. The optimal criteria for these classifiers are set to have minimum error, ease of implementation and learning, and the flexibility for future modifications. Various algorithms for implementing various classification techniques are presented and tested on the data. The best performance was obtained using a single layer tensor model functional link network. Also, the voting k-nearest neighbor classifier provided comparably good diagnostic rates.

  4. Classification Studies in an Advanced Air Classifier

    NASA Astrophysics Data System (ADS)

    Routray, Sunita; Bhima Rao, R.

    2016-10-01

    In the present paper, experiments were carried out using a VSK separator, an advanced air classifier, to recover heavy minerals from beach sand. In the classification experiments, the cage wheel speed and feed rate are set and the material is fed to the air cyclone and split into fine and coarse particles, which are collected in separate bags. The size distribution of each fraction was measured by sieve analysis. A model is developed to predict the performance of the air classifier; its objective is to predict the grade efficiency curve for a given set of operating parameters such as cage wheel speed and feed rate. The overall experimental data for all variables studied in this investigation were fitted to several models, and the logistic model was found to give the best fit.
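
    Fitting a logistic grade-efficiency curve can be sketched as below. The two-parameter logistic form (cut size d50 and sharpness k) and the brute-force least-squares grid fit are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def grade_efficiency(d, d50, k):
    """Logistic grade-efficiency model: fraction of size-d particles
    reporting to the coarse stream."""
    return 1.0 / (1.0 + np.exp(-k * (d - d50)))

def fit_logistic(d, eff, d50_grid, k_grid):
    """Brute-force least-squares fit over a (d50, k) grid."""
    best, best_err = None, np.inf
    for d50 in d50_grid:
        for k in k_grid:
            err = np.sum((grade_efficiency(d, d50, k) - eff) ** 2)
            if err < best_err:
                best, best_err = (d50, k), err
    return best

# Synthetic sieve data generated from known parameters (d50 = 150 um, k = 0.05)
d = np.array([50.0, 100.0, 150.0, 200.0, 250.0, 300.0])
eff = grade_efficiency(d, 150.0, 0.05)
d50_hat, k_hat = fit_logistic(d, eff,
                              np.arange(100, 201, 10),
                              np.arange(0.01, 0.11, 0.01))
```

    The fitted d50 is the cut size (50% recovery) and k the sharpness of separation, the two quantities an operating model would predict from cage wheel speed and feed rate.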

  5. A new texture descriptor based on local micro-pattern for detection of architectural distortion in mammographic images

    NASA Astrophysics Data System (ADS)

    de Oliveira, Helder C. R.; Moraes, Diego R.; Reche, Gustavo A.; Borges, Lucas R.; Catani, Juliana H.; de Barros, Nestor; Melo, Carlos F. E.; Gonzaga, Adilson; Vieira, Marcelo A. C.

    2017-03-01

    This paper presents a new local micro-pattern texture descriptor for the detection of architectural distortion (AD) in digital mammography images. AD is a subtle contraction of breast parenchyma that may represent an early sign of breast cancer. Due to its subtlety and variability, AD is more difficult to detect than microcalcifications and masses, and it is commonly found in retrospective evaluations of false-negative mammograms. Several computer-based systems have been proposed for automatic detection of AD, but their performance is still unsatisfactory. The proposed descriptor, Local Mapped Pattern (LMP), is a generalization of the Local Binary Pattern (LBP), which is considered one of the most powerful feature descriptors for texture classification in digital images. Compared to LBP, the LMP descriptor captures the minor differences between local image pixels more effectively. Moreover, LMP is a parametric model that can be optimized for the desired application. In our work, LMP performance was compared to LBP and four of Haralick's texture descriptors for the classification of 400 regions of interest (ROIs) extracted from clinical mammograms. ROIs were selected and divided into four classes: AD, normal tissue, microcalcifications and masses. Feature vectors were used as input to a multilayer perceptron neural network with a single hidden layer. Results showed that LMP is a good descriptor for distinguishing AD from other anomalies in digital mammography. LMP performance was slightly better than LBP and comparable to Haralick's descriptors (mean classification accuracy = 83%).
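
    Since LMP generalizes the Local Binary Pattern, the baseline LBP computation is worth making concrete. This is the classic 3x3 LBP with a histogram feature vector, not the LMP descriptor itself.

```python
import numpy as np

def lbp_image(img):
    """Classic 3x3 local binary pattern: compare each pixel's 8 neighbours
    with the centre and pack the comparison bits into a code in [0, 255]."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise from top-left
    h, w = img.shape
    center = img[1:h - 1, 1:w - 1]
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (di, dj) in enumerate(offsets):
        neigh = img[1 + di:h - 1 + di, 1 + dj:w - 1 + dj]
        out |= ((neigh >= center) * (1 << bit)).astype(np.uint8)
    return out

def lbp_histogram(img):
    """Normalized histogram of LBP codes: the texture feature vector."""
    codes = lbp_image(img)
    return np.bincount(codes.ravel(), minlength=256) / codes.size

img = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]])
code = int(lbp_image(img)[0, 0])   # bits set where a neighbour >= centre 5
```

    LMP replaces the hard >= threshold with a parametric mapping of the pixel differences, which is what lets it capture smaller local variations than the binary comparison above.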

  6. 28 CFR 523.30 - What is educational good time sentence credit?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 28 Judicial Administration 2 2010-07-01 2010-07-01 false What is educational good time sentence credit? 523.30 Section 523.30 Judicial Administration BUREAU OF PRISONS, DEPARTMENT OF JUSTICE INMATE ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE District of Columbia Educational Good Time...

  7. 19 CFR 10.459 - De minimis.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... good that does not undergo a change in tariff classification pursuant to General Note 26(n), HTSUS, will nonetheless be considered to be an originating good if— (1) The value of all non-originating materials that are used in the production of the good and do not undergo the applicable change in tariff...

  8. 19 CFR 10.459 - De minimis.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... good that does not undergo a change in tariff classification pursuant to General Note 26(n), HTSUS, will nonetheless be considered to be an originating good if— (1) The value of all non-originating materials that are used in the production of the good and do not undergo the applicable change in tariff...

  9. 19 CFR 10.459 - De minimis.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... good that does not undergo a change in tariff classification pursuant to General Note 26(n), HTSUS, will nonetheless be considered to be an originating good if— (1) The value of all non-originating materials that are used in the production of the good and do not undergo the applicable change in tariff...

  10. 28 CFR 523.30 - What is educational good time sentence credit?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 28 Judicial Administration 2 2011-07-01 2011-07-01 false What is educational good time sentence credit? 523.30 Section 523.30 Judicial Administration BUREAU OF PRISONS, DEPARTMENT OF JUSTICE INMATE ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE District of Columbia Educational Good Time...

  11. 28 CFR 523.1 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Administration BUREAU OF PRISONS, DEPARTMENT OF JUSTICE INMATE ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Good Time § 523.1 Definitions. (a) Statutory good time means a credit to a sentence as authorized by 18 U.S.C. 4161. The total amount of statutory good time which an inmate is entitled to have...

  12. 28 CFR 523.1 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Administration BUREAU OF PRISONS, DEPARTMENT OF JUSTICE INMATE ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Good Time § 523.1 Definitions. (a) Statutory good time means a credit to a sentence as authorized by 18 U.S.C. 4161. The total amount of statutory good time which an inmate is entitled to have...

  13. 28 CFR 523.1 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Administration BUREAU OF PRISONS, DEPARTMENT OF JUSTICE INMATE ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Good Time § 523.1 Definitions. (a) Statutory good time means a credit to a sentence as authorized by 18 U.S.C. 4161. The total amount of statutory good time which an inmate is entitled to have...

  14. 28 CFR 523.1 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Administration BUREAU OF PRISONS, DEPARTMENT OF JUSTICE INMATE ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Good Time § 523.1 Definitions. (a) Statutory good time means a credit to a sentence as authorized by 18 U.S.C. 4161. The total amount of statutory good time which an inmate is entitled to have...

  15. 28 CFR 523.1 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Administration BUREAU OF PRISONS, DEPARTMENT OF JUSTICE INMATE ADMISSION, CLASSIFICATION, AND TRANSFER COMPUTATION OF SENTENCE Good Time § 523.1 Definitions. (a) Statutory good time means a credit to a sentence as authorized by 18 U.S.C. 4161. The total amount of statutory good time which an inmate is entitled to have...

  16. Style-based classification of Chinese ink and wash paintings

    NASA Astrophysics Data System (ADS)

    Sheng, Jiachuan; Jiang, Jianmin

    2013-09-01

    Following the fact that large collections of ink and wash paintings (IWPs) are being digitized and made available on the Internet, their automated content description, analysis, and management are attracting attention across research communities. While existing research in relevant areas is primarily focused on image-processing approaches, a style-based algorithm is proposed to classify IWPs automatically by their authors. As IWPs do not have colors or even tones, the proposed algorithm applies edge detection to locate local regions and detect painting strokes, enabling histogram-based feature extraction that captures important cues reflecting the styles of different artists. These features then drive a number of neural networks in parallel to complete the classification, and an information-entropy-balanced fusion is proposed to make an integrated decision over the multiple neural network outputs, in which entropy serves as a pointer for combining the global and local features. Evaluations via experiments support that the proposed algorithm achieves good performance, providing excellent potential for computerized analysis and management of IWPs.
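
    The entropy-balanced fusion step can be sketched as follows. Weighting each network's output by exp(-entropy), so that confident (low-entropy) networks dominate the combined decision, is an illustrative choice, not necessarily the paper's exact rule.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector (0 log 0 treated as 0)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def entropy_weighted_fusion(prob_list):
    """Combine several classifiers' probability outputs, down-weighting
    high-entropy (uncertain) ones."""
    H = np.array([entropy(p) for p in prob_list])
    w = np.exp(-H)
    w = w / w.sum()
    fused = sum(wi * p for wi, p in zip(w, prob_list))
    return fused / fused.sum()

p1 = np.array([0.8, 0.1, 0.1])   # confident network (low entropy)
p2 = np.array([0.4, 0.3, 0.3])   # uncertain network (high entropy)
fused = entropy_weighted_fusion([p1, p2])
```

    Compared with a plain average, the fused vector leans toward the confident network, which is the intended effect when several parallel networks see features of different quality.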

  17. [Application of risk grading and classification for occupational hazards in risk management for a shipbuilding project].

    PubMed

    Zeng, Wenfeng; Tan, Qiang; Wu, Shihua; Deng, Yingcong; Liu, Lifen; Wang, Zhi; Liu, Yimin

    2015-12-01

    To investigate the application of risk grading and classification for occupational hazards in risk management for a shipbuilding project. Risk management for this shipbuilding project was performed by the comprehensive application of MES evaluation, quality assessment of occupational health management, and risk grading and classification for occupational hazards, using occupational health surveys, occupational health testing, and occupational health examinations. The MES evaluation showed that the risk of occupational hazards in this project was grade 3, considered a significant risk; the Q value calculated by quality assessment of occupational health management was 0.52, considered unqualified. The comprehensive evaluation with these two methods showed that the integrated risk rating for this shipbuilding project was class D, and follow-up and rectification were needed with a focus on improving health management. The application of MES evaluation and quality assessment of occupational health management in risk management for occupational hazards can achieve objective and reasonable conclusions and has good applicability.

  18. 42 CFR 413.333 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... relative difference in resource intensity among different groups in the resident classification system... goods and services included in covered skilled nursing services. Resident classification system means a... 1, 2005, an area as defined in § 412.62(f)(1)(iii) of this chapter. For services provided on or...

  19. Spectroscopic parallaxes of MAP region stars from UBVRI, DDO, and uvbyH-beta photometry. [Multichannel Astrometric Photometer for astronomical observation

    NASA Technical Reports Server (NTRS)

    Persinger, Tim; Castelaz, Michael W.

    1990-01-01

    This paper presents the results of spectral type and luminosity classification of reference stars in the Allegheny Observatory MAP parallax program, using broadband and intermediate-band photometry. In addition to the use of UBVRI and DDO photometric systems, the uvbyH-beta photometric system was included for classification of blue (B - V less than 0.6) reference stars. The stellar classifications made from the photometry are used to determine spectroscopic parallaxes. The spectroscopic parallaxes are used in turn to adjust the relative parallaxes measured with the MAP to absolute parallaxes. A new method for dereddening stars using more than one photometric system is presented. In the process of dereddening, visual extinctions, spectral types, and luminosity classes are determined, as well as a measure of the goodness of fit. The measure of goodness of fit quantifies confidence in the stellar classifications. It is found that the spectral types are reliable to within 2.5 spectral subclasses.

  20. Multiparametric fat-water separation method for fast chemical-shift imaging guidance of thermal therapies.

    PubMed

    Lin, Jonathan S; Hwang, Ken-Pin; Jackson, Edward F; Hazle, John D; Stafford, R Jason; Taylor, Brian A

    2013-10-01

    A k-means-based classification algorithm is investigated to assess suitability for rapidly separating and classifying fat/water spectral peaks from a fast chemical shift imaging technique for magnetic resonance temperature imaging. Algorithm testing is performed in simulated mathematical phantoms and agar gel phantoms containing mixed fat/water regions. Proton resonance frequencies (PRFs), apparent spin-spin relaxation (T2*) times, and T1-weighted (T1-W) amplitude values were calculated for each voxel using a single-peak autoregressive moving average (ARMA) signal model. These parameters were then used as criteria for k-means sorting, with the results used to determine PRF ranges of each chemical species cluster for further classification. To detect the presence of secondary chemical species, spectral parameters were recalculated when needed using a two-peak ARMA signal model during the subsequent classification steps. Mathematical phantom simulations involved the modulation of signal-to-noise ratios (SNR), maximum PRF shift (MPS) values, analysis window sizes, and frequency expansion factor sizes in order to characterize the algorithm performance across a variety of conditions. In agar, images were collected on a 1.5T clinical MR scanner using acquisition parameters close to simulation, and algorithm performance was assessed by comparing classification results to manually segmented maps of the fat/water regions. Performance was characterized quantitatively using the Dice Similarity Coefficient (DSC), sensitivity, and specificity. The simulated mathematical phantom experiments demonstrated good fat/water separation depending on conditions, specifically high SNR, moderate MPS value, small analysis window size, and low but nonzero frequency expansion factor size. 
Physical phantom results demonstrated good identification for both water (0.997 ± 0.001, 0.999 ± 0.001, and 0.986 ± 0.001 for DSC, sensitivity, and specificity, respectively) and fat (0.763 ± 0.006, 0.980 ± 0.004, and 0.941 ± 0.002 for DSC, sensitivity, and specificity, respectively). Temperature uncertainties, based on PRF uncertainties from a 5 × 5-voxel ROI, were 0.342 and 0.351°C for pure and mixed fat/water regions, respectively. Algorithm speed was tested using 25 × 25-voxel and whole image ROIs containing both fat and water, resulting in average processing times per acquisition of 2.00 ± 0.07 s and 146 ± 1 s, respectively, using uncompiled MATLAB scripts running on a shared CPU server with eight Intel Xeon(TM) E5640 quad-core processors (2.66 GHz, 12 MB cache) and 12 GB RAM. Results from both the mathematical and physical phantom suggest the k-means-based classification algorithm could be useful for rapid, dynamic imaging in an ROI for thermal interventions. Successful separation of fat/water information would aid in reducing errors from the nontemperature sensitive fat PRF, as well as potentially facilitate using fat as an internal reference for PRF shift thermometry when appropriate. Additionally, the T1-W or R2* signals may be used for monitoring temperature in surrounding adipose tissue.
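
    The k-means sorting over (PRF, T2*, T1-W amplitude) features can be sketched with plain Lloyd's iterations on standardized toy voxel data. The feature values below are invented for illustration and stand in for the ARMA-derived per-voxel parameters.

```python
import numpy as np

rng = np.random.default_rng(7)

def kmeans(X, k, iters=50):
    """Plain Lloyd's k-means on per-voxel feature vectors."""
    cents = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - cents[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        cents = np.array([X[labels == c].mean(0) for c in range(k)])
    return labels, cents

# Invented voxel features: (PRF shift / ppm, T2* / ms, T1-w amplitude)
water = np.column_stack([rng.normal(0.0, 0.1, 200),
                         rng.normal(40.0, 5.0, 200),
                         rng.normal(1.0, 0.1, 200)])
fat = np.column_stack([rng.normal(-3.4, 0.1, 200),
                       rng.normal(60.0, 5.0, 200),
                       rng.normal(0.6, 0.1, 200)])
X = np.vstack([water, fat])
Xs = (X - X.mean(0)) / X.std(0)          # standardize so no feature dominates
labels, cents = kmeans(Xs, 2)
fat_cluster = int(cents[:, 0].argmin())  # fat has the more negative mean PRF
frac_fat_correct = (labels[200:] == fat_cluster).mean()
```

    After clustering, the per-cluster PRF ranges can be used for the subsequent voxel-by-voxel classification, mirroring the two-step structure described in the abstract.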

  1. Determination of origin and sugars of citrus fruits using genetic algorithm, correspondence analysis and partial least square combined with fiber optic NIR spectroscopy

    NASA Astrophysics Data System (ADS)

    Tewari, Jagdish C.; Dixit, Vivechana; Cho, Byoung-Kwan; Malik, Kamal A.

    2008-12-01

    Confirming the variety or origin of citrus fruits and estimating their sucrose, glucose and fructose content are major interests of the citrus juice industry. A rapid classification and quantification technique was developed and validated for simultaneously and nondestructively quantifying the sugar constituents' concentrations and the origin of citrus fruits, using Fourier transform near-infrared (FT-NIR) spectroscopy in conjunction with an artificial neural network (ANN) using a genetic algorithm, chemometrics, and correspondence analysis (CA). To achieve good classification accuracy and cover a wide range of sucrose, glucose and fructose concentrations, we collected 22 different varieties of citrus fruits from the market during the entire citrus season. FT-NIR spectra were recorded from 1100 to 2500 nm using a fiber optic probe, and three types of data analysis were performed. Chemometric analysis using partial least squares (PLS) determined the concentrations of the individual sugars; ANN analysis with a genetic algorithm performed classification and origin or variety identification; and correspondence analysis visualized the relationships between the citrus fruits. High performance liquid chromatography (HPLC) was performed to compute the PLS model from reference values and to validate the developed method. The spectral range and the number of PLS factors were optimized for the lowest standard error of calibration (SEC) and prediction (SEP) and the highest correlation coefficient (R2). The calibration model was able to assess the sucrose, glucose and fructose contents of unknown citrus fruit up to an R2 value of 0.996-0.998. Factors F1 to F10 were optimized in the correspondence analysis for visualizing relationships between citrus fruits based on the genetic algorithm outputs.
ANN and CA analysis showed excellent classification of citrus by variety and good classification by origin. The technique has potential for rapid determination of sugar content and for identifying different varieties and origins of citrus in the citrus juice industry.

  2. Determination of origin and sugars of citrus fruits using genetic algorithm, correspondence analysis and partial least square combined with fiber optic NIR spectroscopy.

    PubMed

    Tewari, Jagdish C; Dixit, Vivechana; Cho, Byoung-Kwan; Malik, Kamal A

    2008-12-01

    The ability to confirm the variety or origin of citrus fruits, and to estimate their sucrose, glucose and fructose contents, is of major interest to the citrus juice industry. A rapid classification and quantification technique was developed and validated for simultaneously and nondestructively quantifying the sugar concentrations and determining the origin of citrus fruits using Fourier Transform Near-Infrared (FT-NIR) spectroscopy in conjunction with an Artificial Neural Network (ANN) trained with a genetic algorithm, chemometrics and Correspondence Analysis (CA). To achieve good classification accuracy and to cover a wide range of sucrose, glucose and fructose concentrations, we collected 22 different varieties of citrus fruits from the market over the entire citrus season. FT-NIR spectra were recorded from 1,100 to 2,500 nm using a fiber optic probe, and three types of data analysis were performed. Chemometric analysis using Partial Least Squares (PLS) was performed to determine the concentrations of the individual sugars. ANN analysis with a genetic algorithm was performed for classification and for origin or variety identification of the citrus fruits. Correspondence analysis was performed to visualize the relationships between the citrus fruits. High performance liquid chromatography (HPLC) was used to obtain reference values for the PLS model and to validate the developed method. The spectral range and the number of PLS factors were optimized for the lowest standard error of calibration (SEC), standard error of prediction (SEP) and the correlation coefficient (R²). The calibration model was able to assess the sucrose, glucose and fructose contents of unknown citrus fruits with R² values of 0.996-0.998. The number of factors (F1 to F10) was optimized for correspondence analysis to visualize the relationships between citrus fruits based on the output values of the genetic algorithm. ANN and CA showed excellent classification of citrus fruits by variety and good classification by origin. The technique has potential for rapid determination of sugar content and for identifying different varieties and origins of citrus in the citrus juice industry.
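
    The abstract's figures of merit (SEC and R²) can be illustrated with a deliberately simplified calibration: a univariate least-squares fit standing in for the multivariate PLS model, on invented "absorbance vs. sucrose" data. This is a sketch of the metrics only, not the authors' method.

```python
# Hypothetical illustration: a univariate least-squares calibration curve with
# the figures of merit named in the abstract (SEC and R^2). The real method
# uses multivariate PLS on full FT-NIR spectra; this collapses that to one
# synthetic "absorbance" predictor. All data values are invented.
import math

def calibrate(x, y):
    """Fit y = a*x + b by ordinary least squares; return (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

def figures_of_merit(x, y, a, b):
    """Standard error of calibration (SEC) and coefficient of determination R^2."""
    pred = [a * xi + b for xi in x]
    my = sum(y) / len(y)
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    sec = math.sqrt(ss_res / (len(y) - 2))   # two fitted parameters
    return sec, 1.0 - ss_res / ss_tot

# Synthetic sucrose calibration points (absorbance vs. g/100 mL, made up):
x = [0.10, 0.21, 0.29, 0.42, 0.50, 0.61]
y = [1.02, 2.05, 2.95, 4.18, 5.03, 6.12]
a, b = calibrate(x, y)
sec, r2 = figures_of_merit(x, y, a, b)
```

    In the paper the same metrics are computed over PLS factors F1-F10 to pick the model with the lowest SEC/SEP.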

  3. Detecting Diseases in Medical Prescriptions Using Data Mining Tools and Combining Techniques.

    PubMed

    Teimouri, Mehdi; Farzadfar, Farshad; Soudi Alamdari, Mahsa; Hashemi-Meshkini, Amir; Adibi Alamdari, Parisa; Rezaei-Darzi, Ehsan; Varmaghani, Mehdi; Zeynalabedini, Aysan

    2016-01-01

    Data about the prevalence of communicable and non-communicable diseases, one of the most important categories of epidemiological data, is used for interpreting the health status of communities. This study aims to calculate the prevalence of outpatient diseases through the characterization of outpatient prescriptions. The data used in this study were collected from 1,412 prescriptions covering various types of diseases, from which we focused on the identification of ten diseases. Data mining tools are used to identify the diseases for which prescriptions are written. To evaluate the performance of these methods, we compare the results with the Naïve method; combining methods are then used to improve the results. Results showed that the Support Vector Machine, with an accuracy of 95.32%, performs better than the other methods. The Naïve method, with an accuracy of 67.71%, performs about 20% worse than the Nearest Neighbor method, which has the lowest accuracy among the classification algorithms. The results indicate that the data mining algorithms achieve good performance in characterizing outpatient diseases and can help in choosing appropriate methods for the classification of prescriptions at larger scales.
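
    The "combining" step described above can be sketched as a majority vote over several classifiers' outputs. The disease labels and per-classifier predictions below are invented for illustration; they are not the study's data.

```python
# Hypothetical sketch of classifier combining by majority vote: each base
# classifier labels every prescription, and the combined label is the most
# common vote per sample. Labels and predictions below are made up.
from collections import Counter

def majority_vote(predictions):
    """predictions: list of label lists, one per classifier, aligned by sample."""
    combined = []
    for sample_preds in zip(*predictions):
        combined.append(Counter(sample_preds).most_common(1)[0][0])
    return combined

def accuracy(pred, truth):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

truth = ["flu", "diabetes", "flu", "asthma", "flu"]
svm   = ["flu", "diabetes", "flu", "flu",    "flu"]     # one error
knn   = ["flu", "flu",      "flu", "asthma", "asthma"]  # two errors
nb    = ["flu", "diabetes", "flu", "asthma", "flu"]     # no errors
combined = majority_vote([svm, knn, nb])
```

    Here the vote corrects the weaker base classifiers, which is the effect the abstract exploits to improve on individual methods.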

  4. A Cross-Correlated Delay Shift Supervised Learning Method for Spiking Neurons with Application to Interictal Spike Detection in Epilepsy.

    PubMed

    Guo, Lilin; Wang, Zhenzhong; Cabrerizo, Mercedes; Adjouadi, Malek

    2017-05-01

    This study introduces a novel learning algorithm for spiking neurons, called CCDS, which is able to learn and reproduce arbitrary spike patterns in a supervised fashion, allowing the processing of spatiotemporal information encoded in the precise timing of spikes. Unlike the Remote Supervised Method (ReSuMe), synaptic delays and axonal delays in CCDS are variable and are modulated together with the weights during learning. The CCDS rule is both biologically plausible and computationally efficient. The properties of this learning rule are investigated extensively through experimental evaluations in terms of reliability, adaptive learning performance, generality to different neuron models, learning in the presence of noise, effects of its learning parameters, and classification performance. The results show that the CCDS learning method achieves learning accuracy and learning speed comparable with ReSuMe, but improves classification accuracy compared with both the Spike Pattern Association Neuron (SPAN) learning rule and the Tempotron learning rule. The merit of the CCDS rule is further validated on a practical example involving the automated detection of interictal spikes in EEG records of patients with epilepsy. Results again show that, with proper encoding, the CCDS rule achieves good recognition performance.
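
    This is not the CCDS rule itself, but a minimal leaky integrate-and-fire (LIF) simulation showing why a learnable synaptic delay matters: shifting the delay shifts the output spike time by the same amount, which is the quantity CCDS modulates alongside the weight. All constants are illustrative.

```python
# Minimal LIF neuron (1 ms time steps) driven by a short input burst through a
# single delayed synapse. Shifting the delay shifts the output spike time,
# illustrating the role of the delay parameters that CCDS learns. All
# parameters (weight, tau, threshold) are invented for the sketch.
def lif_output_spikes(input_spikes, delay, weight=0.6, tau=5.0, vth=1.0, tmax=60):
    """Simulate one LIF neuron; return the list of output spike times (ms)."""
    arrivals = {t + delay for t in input_spikes}
    v, out = 0.0, []
    for t in range(tmax):
        v *= (1.0 - 1.0 / tau)           # membrane leak
        if t in arrivals:
            v += weight                   # delayed synaptic input
        if v >= vth:
            out.append(t)
            v = 0.0                       # reset after firing
    return out

burst = [10, 11, 12]                      # three closely spaced input spikes
early = lif_output_spikes(burst, delay=2)
late  = lif_output_spikes(burst, delay=7)
```

    Increasing the delay by 5 ms moves the output spike 5 ms later, so adjusting delays (together with weights) gives the learning rule direct control over output spike timing.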

  5. Classification of light sources and their interaction with active and passive environments

    NASA Astrophysics Data System (ADS)

    El-Dardiry, Ramy G. S.; Faez, Sanli; Lagendijk, Ad

    2011-03-01

    Emission from a molecular light source depends on its optical and chemical environment, and this dependence differs between source types. We present a general classification in terms of constant-amplitude and constant-power sources. Using this classification, we describe the response to both changes in the local density of states and stimulated emission. The unforeseen consequences of this classification are illustrated for photonic studies by random laser experiments, which are in good agreement with the theory we develop accordingly. Our results call for a revision of studies on sources in complex media.

  6. Standoff detection of bioaerosols over wide area using a newly developed sensor combining a cloud mapper and a spectrometric LIF lidar

    NASA Astrophysics Data System (ADS)

    Buteau, Sylvie; Simard, Jean-Robert; Roy, Gilles; Lahaie, Pierre; Nadeau, Denis; Mathieu, Pierre

    2013-10-01

    A standoff sensor called BioSense was developed to demonstrate the capacity to map, track and classify bioaerosol clouds at long range and over a wide area. The concept of the system is based on a two-step dynamic surveillance: 1) cloud detection using an infrared (IR) scanning cloud mapper, and 2) cloud classification based on staring ultraviolet (UV) Laser-Induced Fluorescence (LIF) interrogation. The system can be operated either in an automatic surveillance mode or with manual intervention. The automatic surveillance operation includes several steps: mission planning, sensor deployment, background monitoring, surveillance, cloud detection, classification, and finally alarm generation based on the classification result. One of the main challenges is the classification step, which relies on a spectrally resolved UV LIF signature library. The construction of this library currently relies on in-chamber releases of various materials that are simultaneously characterized with the standoff sensor and referenced with point sensors such as the Aerodynamic Particle Sizer® (APS). The system was tested at three different locations to evaluate its capacity to operate in diverse surroundings and various environmental conditions. The system generally performed well, even though troubleshooting was not completed before the Test and Evaluation (T&E) process was initiated. The standoff system's performance appeared to be highly dependent on the type of challenge, the climatic conditions, and the time of day. The real-time results, combined with the experience acquired during the 2012 T&E, made it possible to identify future improvements and avenues of investigation.

  7. Extreme Sparse Multinomial Logistic Regression: A Fast and Robust Framework for Hyperspectral Image Classification

    NASA Astrophysics Data System (ADS)

    Cao, Faxian; Yang, Zhijing; Ren, Jinchang; Ling, Wing-Kuen; Zhao, Huimin; Marshall, Stephen

    2017-12-01

    Although sparse multinomial logistic regression (SMLR) provides a useful tool for sparse classification, it deals poorly with high-dimensional features and requires manually set initial regressor values, which has significantly constrained its application to hyperspectral image (HSI) classification. To tackle these two drawbacks, an extreme sparse multinomial logistic regression (ESMLR) is proposed for effective classification of HSI. First, the HSI dataset is projected to a new feature space with randomly generated weights and biases. Second, an optimization model is established via the Lagrange multiplier method and the dual principle to automatically determine a good initial regressor for SMLR by minimizing the training error and the regressor value. Furthermore, extended multi-attribute profiles (EMAPs) are utilized for extracting both spectral and spatial features. A combinational linear multiple features learning (MFL) method is proposed to further enhance the features extracted by ESMLR and EMAPs. Finally, logistic regression via variable splitting and augmented Lagrangian (LORSAL) is adopted in the proposed framework to reduce the computational time. Experiments conducted on two well-known HSI datasets, the Indian Pines dataset and the Pavia University dataset, demonstrate the fast and robust performance of the proposed ESMLR framework.
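
    The first ESMLR ingredient described above (projecting inputs through fixed random weights and biases, then fitting only the output regressor) can be sketched as follows. For reproducibility the "random" weights are hardcoded stand-ins, the regressor is plain binary logistic regression by gradient descent rather than the paper's Lagrange-dual solver, and the two "spectral" classes are synthetic.

```python
# Hedged sketch: fixed random-style projection to a new feature space, then a
# trainable output regressor. Weights W, biases b, and the data are invented;
# the solver is ordinary gradient-descent logistic regression, not the
# paper's dual-based optimization.
import math

def project(x, W, b):
    """Map input x through fixed weights with a sigmoid nonlinearity."""
    return [1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + bi)))
            for w, bi in zip(W, b)]

def train_logreg(H, y, lr=0.5, epochs=1000):
    """Fit only the output weights on the projected features H."""
    w = [0.0] * len(H[0])
    for _ in range(epochs):
        for h, t in zip(H, y):
            p = 1.0 / (1.0 + math.exp(-sum(wi * hi for wi, hi in zip(w, h))))
            w = [wi + lr * (t - p) * hi for wi, hi in zip(w, h)]
    return w

# Two synthetic "spectral" classes in 2-D:
X = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.25], [0.9, 1.0], [1.0, 0.8], [0.85, 0.95]]
y = [0, 0, 0, 1, 1, 1]
W = [[0.7, -1.2], [-0.4, 0.9], [1.5, 0.3], [-0.8, -0.6]]  # fixed stand-in weights
b = [0.1, -0.3, 0.2, 0.05]                                # fixed stand-in biases
H = [[1.0] + project(x, W, b) for x in X]                 # prepend bias feature
w = train_logreg(H, y)
pred = [int(sum(wi * hi for wi, hi in zip(w, h)) > 0) for h in H]
```

    The design point is that only `w` is learned; the projection stays fixed, which is what makes this family of methods fast.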

  8. Application of Neural Networks for classification of Patau, Edwards, Down, Turner and Klinefelter Syndrome based on first trimester maternal serum screening data, ultrasonographic findings and patient demographics.

    PubMed

    Catic, Aida; Gurbeta, Lejla; Kurtovic-Kozaric, Amina; Mehmedbasic, Senad; Badnjevic, Almir

    2018-02-13

    The usage of Artificial Neural Networks (ANNs) for genome-enabled classification and for establishing genome-phenotype correlations has been investigated extensively over the past few years. The reason is that ANNs are good approximators of complex functions, so classification can be performed without the need for an explicitly defined input-output model. This engineering tool can be applied to optimize existing methods for disease/syndrome classification. Cytogenetic and molecular analyses are the most frequent tests used in prenatal diagnostics for the early detection of Turner, Klinefelter, Patau, Edwards and Down syndrome. These procedures can be lengthy and repetitive, and often employ invasive techniques, so a robust automated method for classifying and reporting prenatal diagnostics would greatly help clinicians with their routine work. The database consisted of data collected from 2,500 pregnant women who came to the Institute of Gynecology, Infertility and Perinatology "Mehmedbasic" for routine antenatal care between January 2000 and December 2016. During the first trimester, all women underwent a screening test in which maternal serum pregnancy-associated plasma protein A (PAPP-A) and free beta human chorionic gonadotropin (β-hCG) were measured. In addition, fetal nuchal translucency thickness and the presence or absence of the nasal bone were assessed using ultrasound. The architectures of linear feedforward and feedback neural networks were investigated for various training data distributions and numbers of neurons in the hidden layer. The feedback neural network architecture outperformed the feedforward architecture in predictive ability for all five aneuploidy prenatal syndrome classes. The feedforward neural network with 15 neurons in the hidden layer achieved a classification sensitivity of 92.00%, while the classification sensitivity of the feedback (Elman) neural network was 99.00%. The average accuracy was 89.6% for the feedforward network and 98.8% for the feedback network. The results presented in this paper show that an expert diagnostic system based on neural networks can be efficiently used for classification of the five aneuploidy syndromes covered in this study, based on first trimester maternal serum screening data, ultrasonographic findings and patient demographics. The developed expert system proved to be simple, robust, and powerful in properly classifying prenatal aneuploidy syndromes.
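
    For reference, the sensitivity and accuracy figures quoted above are computed from a classifier's confusion counts as follows. The per-class counts below are invented, not the study's data.

```python
# Illustrative only: computing classification sensitivity (true-positive rate)
# and accuracy from confusion counts. The counts are hypothetical.
def sensitivity(tp, fn):
    """True-positive rate for one syndrome class."""
    return tp / (tp + fn)

def accuracy(confusion):
    """confusion: dict (true_label, predicted_label) -> count."""
    correct = sum(c for (t, p), c in confusion.items() if t == p)
    return correct / sum(confusion.values())

# Toy two-class slice (e.g. "Down" vs "other"); counts are made up:
confusion = {("Down", "Down"): 99, ("Down", "other"): 1,
             ("other", "Down"): 4, ("other", "other"): 96}
sens = sensitivity(99, 1)
acc = accuracy(confusion)
```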

  9. Preoperative classification assessment reliability and influence on the length of intertrochanteric fracture operations.

    PubMed

    Shen, Jing; Hu, FangKe; Zhang, LiHai; Tang, PeiFu; Bi, ZhengGang

    2013-04-01

    The accuracy of intertrochanteric fracture classification is important, since patient outcomes depend on correct classification. The aim of this study was to use the AO classification system to evaluate the variation in classification between X-ray and computed tomography (CT)/3D CT images, and then to evaluate differences in the length of surgery based on the two examinations. Intertrochanteric fractures were reviewed and surgeons were interviewed. The rates of correct discrimination and of misclassification (overestimates and underestimates) were determined, and the impact of misclassification on the length of surgery was evaluated. In total, 370 patients and four surgeons were included in the study. All patients had X-ray images and 210 patients had CT/3D CT images. Of them, 214 and 156 patients were treated by intramedullary and extramedullary fixation systems, respectively. The mean length of surgery was 62.1 ± 17.7 min. The overall rate of correct discrimination was 83.8 %, and the rates for classes A1, A2 and A3 were 80.0, 85.7 and 82.4 %, respectively. The rate of misclassification showed no significant difference between stable and unstable fractures (21.3 vs 13.1 %, P = 0.173). The overall rates of overestimates and underestimates differed significantly (5 vs 11.25 %, P = 0.041). The rate of underestimates minus the rate of overestimates correlated positively with prolonged surgery, significantly so for intramedullary fixation (P < 0.001). Classification based on the AO system was good in terms of consistency. CT/3D CT examination was more reliable and more helpful for preoperative assessment, especially before intramedullary fixation.
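
    Agreement between two classification readings (here X-ray vs. CT/3D CT) is conventionally quantified with Cohen's kappa, which corrects observed agreement for chance. The two rating lists below are invented AO-type labels, not study data.

```python
# Self-contained Cohen's kappa for inter-method agreement. Ratings are
# hypothetical AO classes, not the study's data.
from collections import Counter

def cohen_kappa(r1, r2):
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n           # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / (n * n)          # chance agreement
    return (po - pe) / (1.0 - pe)

xray = ["A1", "A2", "A2", "A3", "A1", "A2", "A3", "A1", "A2", "A2"]
ct   = ["A1", "A2", "A2", "A3", "A2", "A2", "A3", "A1", "A1", "A2"]
kappa = cohen_kappa(xray, ct)
```

    With these invented ratings kappa is about 0.68, i.e. in the 0.60-0.80 band usually read as good agreement.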

  10. Vessel Classification in Cosmo-Skymed SAR Data Using Hierarchical Feature Selection

    NASA Astrophysics Data System (ADS)

    Makedonas, A.; Theoharatos, C.; Tsagaris, V.; Anastasopoulos, V.; Costicoglou, S.

    2015-04-01

    SAR-based ship detection and classification are important elements of maritime monitoring applications. Recently, high-resolution SAR data have opened new possibilities for achieving improved classification results. In this work, a hierarchical vessel classification procedure is presented, based on a robust feature extraction and selection scheme that utilizes scale, shape and texture features in a hierarchical way. Initially, different feature extraction algorithms are implemented to form the feature pool, able to represent the structure, material, orientation and other vessel characteristics. A two-stage hierarchical feature selection algorithm is then applied to discriminate civilian vessels in COSMO-SkyMed SAR images into three distinct types: cargos, small ships and tankers. In our analysis, scale and shape features are used to discriminate the smaller vessel types present in the available SAR data, or shape-specific vessels. The most informative texture and intensity features are then incorporated to distinguish the civilian types with high accuracy. A feature selection procedure using heuristic measures based on the features' statistical characteristics, followed by an exhaustive search over feature sets formed from the most qualified features, is carried out to determine the most appropriate combination of features for the final classification. Five COSMO-SkyMed SAR images with 2.2 m x 2.2 m resolution were used to analyse the detailed characteristics of these types of ships, and a total of 111 ships with available AIS data were used in the classification process. The experimental results show that the method performs well in ship classification, with an overall accuracy reaching 83%. Further investigation of additional features and of proper feature selection is currently in progress.
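
    The filter stage of such a feature-selection scheme can be sketched by ranking candidate features with a Fisher-style separation score (between-class distance over within-class spread) and keeping the best ones. The feature names and values below are invented, not the paper's measurements.

```python
# Hedged sketch of filter-based feature ranking for vessel classification:
# a Fisher-style score per feature, higher meaning better class separation.
# Feature names and the cargo/tanker values are made up.
def fisher_score(class_a, class_b):
    ma, mb = sum(class_a) / len(class_a), sum(class_b) / len(class_b)
    va = sum((x - ma) ** 2 for x in class_a) / len(class_a)
    vb = sum((x - mb) ** 2 for x in class_b) / len(class_b)
    return (ma - mb) ** 2 / (va + vb + 1e-12)

# Three candidate features measured for cargos vs. tankers (invented values):
features = {
    "length":    ([180, 190, 185], [250, 260, 255]),    # separates well
    "texture":   ([0.4, 0.5, 0.45], [0.42, 0.52, 0.47]),  # barely separates
    "intensity": ([10, 30, 20],     [22, 12, 32]),      # no real separation
}
ranked = sorted(features, key=lambda f: fisher_score(*features[f]), reverse=True)
```

    A second, exhaustive stage would then try combinations of the top-ranked features, as the abstract describes.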

  11. Alzheimer disease detection from structural MR images using FCM based weighted probabilistic neural network.

    PubMed

    Duraisamy, Baskar; Shanmugam, Jayanthi Venkatraman; Annamalai, Jayanthi

    2018-02-19

    An early intervention in Alzheimer's disease (AD) is highly important because this neurodegenerative disease causes major life-threatening issues, especially memory loss. Moreover, categorizing NC (Normal Control), MCI (Mild Cognitive Impairment) and AD early in the course of disease allows patients to benefit from new treatments. It is therefore important to construct a reliable classification technique to discriminate patients with and without AD from biomedical imaging modalities. Hence, we developed a novel FCM-based Weighted Probabilistic Neural Network (FWPNN) classification algorithm and analyzed brain images from structural MRI for better discrimination of class labels. Our proposed framework begins with a brain image normalization stage, in which ROI regions corresponding to the Hippocampus (HC) and Posterior Cingulate Cortex (PCC) are extracted using the Automated Anatomical Labeling (AAL) method. Subsequently, nineteen highly relevant AD-related features are selected through a multiple-criterion feature selection method. Finally, our novel FWPNN classification algorithm removes suspicious samples from the training data with the goal of enhancing classification performance. This newly developed classification algorithm combines the strengths of supervised and unsupervised learning techniques. The experimental validation is carried out on the ADNI subset and then on the Bordex-3 city dataset. Our proposed classification approach achieves accuracies of about 98.63%, 95.4% and 96.4% for AD vs NC, MCI vs NC and AD vs MCI classification, respectively. The experimental results suggest that the removal of noisy samples from the training data can enhance the decision generation process of expert systems.
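
    The unsupervised half of the FWPNN combination is fuzzy c-means (FCM). A minimal one-dimensional FCM, with the conventional fuzzifier m = 2 and invented feature values, looks like this; it is a sketch of the clustering component only, not the authors' full algorithm.

```python
# Minimal 1-D fuzzy c-means: alternate between computing soft memberships and
# recomputing cluster centers from them. Data and initial centers are invented.
def fcm(data, centers, m=2.0, iters=50):
    for _ in range(iters):
        # membership of each point in each cluster (rows sum to 1)
        U = []
        for x in data:
            d = [abs(x - c) + 1e-9 for c in centers]
            U.append([1.0 / sum((d[i] / d[j]) ** (2 / (m - 1))
                                for j in range(len(centers)))
                      for i in range(len(centers))])
        # membership-weighted update of the cluster centers
        centers = [sum(U[k][i] ** m * data[k] for k in range(len(data))) /
                   sum(U[k][i] ** m for k in range(len(data)))
                   for i in range(len(centers))]
    return centers, U

data = [1.0, 1.2, 0.9, 5.0, 5.3, 4.8]        # two obvious groups
centers, U = fcm(data, centers=[0.0, 6.0])
```

    Samples with low maximum membership (i.e. far from every center) are the "suspicious" training samples that the FWPNN idea would down-weight or remove.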

  12. Agricultural Land Cover from Multitemporal C-Band SAR Data

    NASA Astrophysics Data System (ADS)

    Skriver, H.

    2013-12-01

    Henning Skriver, DTU Space, Technical University of Denmark, Ørsteds Plads, Building 348, DK-2800 Lyngby; e-mail: hs@space.dtu.dk

    Problem description: This paper focuses on land cover classification from SAR data using high-revisit acquisitions, including single-polarisation, dual-polarisation and fully polarimetric data, at C-band. The data set was acquired during an ESA-supported campaign, AgriSAR09, with the Radarsat-2 system. Ground surveys to obtain detailed land cover maps were performed during the campaign. Classification methods using single- and dual-polarisation data, and fully polarimetric data, are applied to multitemporal data with a short revisit time. Results for airborne campaigns have previously been reported in Skriver et al. (2011) and Skriver (2012). In this paper, the short-revisit satellite SAR data are used to assess the trade-off between polarimetric SAR data and single- or dual-polarisation SAR data. This is particularly important in relation to the future GMES Sentinel-1 SAR satellites, where two satellites with a relatively wide swath will ensure a short revisit time globally. The questions addressed are: what accuracy can be expected from a mission like Sentinel-1, what is the improvement of using polarimetric SAR compared to single- or dual-polarisation SAR, and what is the optimum number of acquisitions?

    Methodology: The data have a sufficient number of looks for the Gaussian assumption to be valid for the backscatter coefficients of the individual polarizations. The classification method used for these data is therefore the standard Bayesian classification method for multivariate Gaussian statistics. For the fully polarimetric cases, two classification methods have been applied: the standard ML Wishart classifier, and a method based on a reversible transform of the covariance matrix into backscatter intensities.
    The following pre-processing steps were performed on both data sets: the scattering matrix data, in the form of SLC products, were coregistered, converted to covariance matrix format and multilooked to a specific equivalent number of looks.

    Results: The multitemporal data significantly improve the classification results, and single-acquisition data cannot provide the necessary classification performance. The multitemporal data are especially important for the single- and dual-polarization data, but less so for the fully polarimetric data. The satellite data set produces realistic classification results based on about 2,000 fields. The best classification results for the single-polarized mode yield classification errors in the mid-twenties (percent). Using the dual-polarized mode reduces the classification error by about 5 percentage points, whereas the polarimetric mode reduces it by about 10 percentage points. These results show that reasonable results can be obtained with relatively simple systems with a short revisit time, and that systems like the Sentinel-1 mission will be able to produce fairly good results for global land cover classification.

    References: Skriver, H., et al., 2011, 'Crop Classification using Short-Revisit Multitemporal SAR Data', IEEE J. Sel. Topics in Appl. Earth Obs. Rem. Sens., vol. 4, pp. 423-431. Skriver, H., 2012, 'Crop classification by multitemporal C- and L-band single- and dual-polarization and fully polarimetric SAR', IEEE Trans. Geosc. Rem. Sens., vol. 50, pp. 2138-2149.
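
    The Bayesian classifier described in the Methodology paragraph can be sketched as per-class Gaussian likelihoods over backscatter coefficients, with each acquisition date contributing one feature. The crop names and per-date statistics below are invented, and independence between dates is assumed for simplicity.

```python
# Hedged sketch of maximum-likelihood Gaussian classification of multitemporal
# backscatter: each class is a list of per-date (mean, variance) pairs in dB,
# and a sample is assigned to the class maximizing the summed log-likelihood.
# Class statistics are made up; dates are treated as independent features.
import math

def gaussian_loglik(x, mean, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def classify(sample, classes):
    """sample: backscatter per date; classes: name -> [(mean, var), ...]."""
    def score(stats):
        return sum(gaussian_loglik(x, m, v) for x, (m, v) in zip(sample, stats))
    return max(classes, key=lambda name: score(classes[name]))

# Invented per-date backscatter statistics (dB) for two crops over three dates:
classes = {
    "wheat":  [(-12.0, 2.0), (-10.0, 2.0), (-8.0, 2.0)],
    "barley": [(-9.0, 2.0), (-9.5, 2.0), (-11.0, 2.0)],
}
label = classify([-11.5, -10.2, -8.3], classes)
```

    More acquisition dates add more terms to the log-likelihood sum, which is why the multitemporal stack separates classes that a single acquisition cannot.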

  13. Rapid Elemental Analysis and Provenance Study of Blumea balsamifera DC Using Laser-Induced Breakdown Spectroscopy

    PubMed Central

    Liu, Xiaona; Zhang, Qiao; Wu, Zhisheng; Shi, Xinyuan; Zhao, Na; Qiao, Yanjiang

    2015-01-01

    Laser-induced breakdown spectroscopy (LIBS) was applied to perform a rapid elemental analysis and a provenance study of Blumea balsamifera DC. Principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA) were implemented to exploit the multivariate nature of the LIBS data. Scores and loadings of the computed principal components visually illustrated the differences in the spectral data. The PLS-DA algorithm showed good classification performance. The PLS-DA model using complete spectra as input variables had discrimination performance similar to that using selected spectral lines as input variables. The down-selection of spectral lines focused on the major elements of the B. balsamifera samples. The results indicate that LIBS can be used for rapid elemental analysis and provenance studies of B. balsamifera. PMID:25558999
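
    The PCA scores mentioned above can be illustrated with a minimal two-variable example: the first principal component found by power iteration on the covariance matrix, then the projections (scores) that a score plot would display. The toy "spectral" values and the two provenance groups are invented.

```python
# Minimal PCA on invented 2-D "spectral" data: first principal component via
# power iteration on the 2x2 covariance matrix, then the PCA scores.
import math

def first_pc(data, iters=100):
    n = len(data)
    means = [sum(col) / n for col in zip(*data)]
    centered = [[x - m for x, m in zip(row, means)] for row in data]
    # 2x2 sample covariance matrix (this sketch assumes 2-D data)
    cov = [[sum(r[i] * r[j] for r in centered) / (n - 1) for j in range(2)]
           for i in range(2)]
    v = [1.0, 0.0]
    for _ in range(iters):                        # power iteration
        w = [cov[0][0] * v[0] + cov[0][1] * v[1],
             cov[1][0] * v[0] + cov[1][1] * v[1]]
        norm = math.hypot(*w)
        v = [w[0] / norm, w[1] / norm]
    scores = [r[0] * v[0] + r[1] * v[1] for r in centered]
    return v, scores

# Two synthetic provenance groups lying along a shared trend (y ~ 2x):
data = [[1.0, 2.1], [1.2, 2.4], [1.1, 2.2], [3.0, 6.1], [3.2, 6.3], [2.9, 5.9]]
v, scores = first_pc(data)
```

    The two groups land on opposite sides of zero along PC1, which is exactly the kind of separation a PCA score plot makes visible.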

  14. Automatic Classification of Aerial Imagery for Urban Hydrological Applications

    NASA Astrophysics Data System (ADS)

    Paul, A.; Yang, C.; Breitkopf, U.; Liu, Y.; Wang, Z.; Rottensteiner, F.; Wallner, M.; Verworn, A.; Heipke, C.

    2018-04-01

    In this paper we investigate the potential of automatic supervised classification for urban hydrological applications. In particular, we contribute to runoff simulations using hydrodynamic urban drainage models. In order to assess whether the capacity of the sewers is sufficient to avoid surcharge within certain return periods, precipitation is transformed into runoff. This transformation requires knowledge about the proportion of drainage-effective areas and their spatial distribution in the catchment area. Common simulation methods use the coefficient of imperviousness as an important parameter to estimate the overland flow, which subsequently contributes to the pipe flow. The coefficient of imperviousness is the percentage of area covered by impervious surfaces such as roofs or road surfaces. It is still common practice to assign the coefficient of imperviousness to each land parcel manually, by visual interpretation of aerial images. Based on classification results for such imagery, we contribute to an objective, automatic determination of the coefficient of imperviousness. In this context we compare two classification techniques: Random Forests (RF) and Conditional Random Fields (CRF). Experiments on an urban test area confirm that the automated derivation of the coefficient of imperviousness, apart from being more objective and thus reproducible, delivers more accurate results than the interactive estimation. We achieve an overall accuracy of about 85 % for both classifiers. The root mean square error of the differences of the coefficient of imperviousness compared to the reference is 4.4 % for the CRF-based classification and 3.8 % for the RF-based classification.
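
    The quantity being automated, and the error measure used to report it, can both be written down directly: the coefficient of imperviousness is the percentage of a parcel's pixels classified as sealed surface, and the comparison against reference values uses RMSE. The label names and parcel below are invented.

```python
# Sketch of the paper's target quantity: percent of classified pixels that are
# impervious (roof, road), plus the RMSE used against reference coefficients.
# Labels, the parcel, and the estimate/reference values are invented.
import math

IMPERVIOUS = {"roof", "road"}

def imperviousness(labels):
    """labels: classified pixel labels of one parcel -> percent impervious."""
    sealed = sum(1 for lab in labels if lab in IMPERVIOUS)
    return 100.0 * sealed / len(labels)

def rmse(estimates, reference):
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(estimates, reference)) /
                     len(reference))

parcel = ["roof"] * 30 + ["road"] * 10 + ["grass"] * 50 + ["tree"] * 10
coeff = imperviousness(parcel)                 # 40 of 100 pixels sealed
err = rmse([40.0, 62.0, 18.0], [38.0, 60.0, 21.0])
```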

  15. Influence of nuclei segmentation on breast cancer malignancy classification

    NASA Astrophysics Data System (ADS)

    Jelen, Lukasz; Fevens, Thomas; Krzyzak, Adam

    2009-02-01

    Breast cancer is one of the most deadly cancers affecting middle-aged women. Accurate diagnosis and prognosis are crucial to reduce the high death rate. Nowadays there are numerous diagnostic tools for breast cancer diagnosis. In this paper we discuss the role of nuclear segmentation from fine needle aspiration biopsy (FNA) slides and its influence on malignancy classification. Classification of malignancy plays a very important role in the diagnosis of breast cancer. Of all cancer diagnostic tools, FNA slides provide the most valuable information about cancer malignancy grade, which helps in choosing an appropriate treatment. This process involves assessing numerous nuclear features, so precise segmentation of nuclei is very important. In this work we compare three powerful segmentation approaches and test their impact on the classification of breast cancer malignancy: level set segmentation, fuzzy c-means segmentation, and textural segmentation based on the co-occurrence matrix. Segmented nuclei were used to extract nuclear features for malignancy classification. For classification, four different classifiers were trained and tested with the extracted features: Multilayer Perceptron (MLP), Self-Organizing Maps (SOM), Principal Component-based Neural Network (PCA) and Support Vector Machines (SVM). The results show that level set segmentation yields the best results of the three approaches, leading to good feature extraction with the lowest average error rate of 6.51% across the four classifiers. The best single performance was recorded for the multilayer perceptron, with an error rate of 3.07% using fuzzy c-means segmentation.

  16. 28 CFR 522.15 - No good time credits for inmates serving only civil contempt commitments.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 28 Judicial Administration 2 2012-07-01 2012-07-01 false No good time credits for inmates serving..., DEPARTMENT OF JUSTICE INMATE ADMISSION, CLASSIFICATION, AND TRANSFER ADMISSION TO INSTITUTION Civil Contempt of Court Commitments § 522.15 No good time credits for inmates serving only civil contempt...

  17. 28 CFR 522.15 - No good time credits for inmates serving only civil contempt commitments.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 28 Judicial Administration 2 2013-07-01 2013-07-01 false No good time credits for inmates serving..., DEPARTMENT OF JUSTICE INMATE ADMISSION, CLASSIFICATION, AND TRANSFER ADMISSION TO INSTITUTION Civil Contempt of Court Commitments § 522.15 No good time credits for inmates serving only civil contempt...

  18. Assessment of repeatability of composition of perfumed waters by high-performance liquid chromatography combined with numerical data analysis based on cluster analysis (HPLC UV/VIS - CA).

    PubMed

    Ruzik, L; Obarski, N; Papierz, A; Mojski, M

    2015-06-01

    High-performance liquid chromatography (HPLC) with UV/VIS spectrophotometric detection, combined with the chemometric method of cluster analysis (CA), was used to assess the repeatability of composition of nine types of perfumed waters. In addition, the chromatographic method for separating the components of the perfumed waters under analysis was subjected to an optimization procedure. The chromatograms thus obtained served as source data for cluster analysis. The result was a classification of a set comprising 39 perfumed water samples of similar composition at a specified level of probability (level of agglomeration). A comparison of the classification with the manufacturer's declarations reveals a good degree of consistency and demonstrates similarity between samples in different classes. Combining the chromatographic method with cluster analysis (HPLC UV/VIS - CA) makes it possible to quickly assess the repeatability of composition of perfumed waters at selected levels of probability. © 2014 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
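
    The "level of agglomeration" idea above can be sketched with a compact single-linkage agglomerative clustering: merge the two closest clusters until the next merge would exceed a chosen distance threshold. The one-dimensional "chromatographic similarity" values are invented; real CA on chromatograms works on multivariate distances.

```python
# Compact single-linkage agglomerative clustering with a distance threshold
# (the "agglomeration level"). Sample values are invented 1-D stand-ins for
# multivariate chromatogram distances.
def single_linkage(points, threshold):
    clusters = [[p] for p in points]
    while len(clusters) > 1:
        # find the closest pair of clusters (minimum inter-point distance)
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        if d > threshold:
            break                      # next merge exceeds the chosen level
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return sorted(sorted(c) for c in clusters)

samples = [1.0, 1.1, 1.3, 5.0, 5.2, 9.0]
groups = single_linkage(samples, threshold=1.0)
```

    Raising the threshold merges more samples into fewer classes, which is how the chosen agglomeration level controls the final grouping.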

  19. Validation of AN Hplc-Dad Method for the Classification of Green Teas

    NASA Astrophysics Data System (ADS)

    Yu, Jingbo; Ye, Nengsheng; Gu, Xuexin; Liu, Ni

    A reversed-phase high performance liquid chromatography (RP-HPLC) separation coupled with diode array detection (DAD) and electrospray ionization mass spectrometry (ESI/MS) was developed and optimized for the classification of green teas. Five catechins [epigallocatechin (EGC), epigallocatechin gallate (EGCG), epicatechin (EC), gallocatechin gallate (GCG), epicatechin gallate (ECG)] were identified and quantified by the HPLC-DAD-ESI/MS/MS method. The limit of detection (LOD) of the five catechins was within the range of 1.25-15 ng, and all analytes exhibited good linearity up to 2500 ng. These compounds were used as chemical descriptors to define groups of green teas. Chemometric methods, including principal component analysis (PCA) and hierarchical cluster analysis (HCA), were applied for this purpose. Twelve green tea samples originating from different regions were analyzed to reveal the natural groupings. The results showed that the analyzed green teas were differentiated mainly by provenance; HCA afforded excellent performance in terms of recognition and prediction abilities. The method is accurate and reproducible, providing a potential approach for the authentication of green teas.

  20. Quantitative determination and classification of energy drinks using near-infrared spectroscopy.

    PubMed

    Rácz, Anita; Héberger, Károly; Fodor, Marietta

    2016-09-01

    Almost a hundred commercially available energy drink samples from Hungary, Slovakia, and Greece were collected for the quantitative determination of their caffeine and sugar content with FT-NIR spectroscopy and high-performance liquid chromatography (HPLC). Calibration models were built with partial least-squares regression (PLSR). An HPLC-UV method was used to measure the reference values for caffeine content, while sugar contents were measured with the Schoorl method. Both the nominal sugar content (as indicated on the cans) and the measured sugar concentration were used as references. Although the Schoorl method has a larger error and bias, appropriate models could be developed with both references. The validation of the models was based on sevenfold cross-validation and external validation. FT-NIR analysis is a good candidate to replace the HPLC-UV method, because it is much cheaper than any chromatographic method and is also more time-efficient. The combination of FT-NIR with multidimensional chemometric techniques such as PLSR can be a good option for the detection of low caffeine concentrations in energy drinks. Moreover, three types of energy drinks, containing (i) taurine, (ii) arginine, or (iii) neither of these two components, were classified correctly using principal component analysis and linear discriminant analysis. Such classifications are important for the detection of adulterated samples and for quality control as well. In this case, more than a hundred samples were used for the evaluation. The classification was validated with cross-validation and several randomization tests (X-scrambling). Graphical Abstract: The way of energy drinks from cans to appropriate chemometric models.
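The PLSR calibration idea above can be sketched as follows; the spectra, sample counts, and concentration ranges are all invented stand-ins, with sevenfold cross-validation as in the abstract.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Hypothetical data: 90 "spectra" of 200 wavelength points whose absorbance
# depends linearly on a latent caffeine concentration, plus noise.
n_samples, n_wavelengths = 90, 200
caffeine = rng.uniform(50, 350, n_samples)          # mg/L, reference values (e.g. HPLC-UV)
loadings = rng.normal(0, 1, n_wavelengths)
spectra = np.outer(caffeine, loadings) + rng.normal(0, 5, (n_samples, n_wavelengths))

# Partial least-squares regression, validated by sevenfold cross-validation (R^2)
pls = PLSRegression(n_components=3)
r2_cv = cross_val_score(pls, spectra, caffeine, cv=7)
print(r2_cv.mean())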

  1. Inter-observer reliability of radiographic classifications and measurements in the assessment of Perthes' disease.

    PubMed

    Wiig, Ola; Terjesen, Terje; Svenningsen, Svein

    2002-10-01

    We evaluated the inter-observer agreement of radiographic methods in patients with Perthes' disease. The radiographs were assessed at the time of diagnosis and at the 1-year follow-up by local orthopaedic surgeons (O) and 2 experienced paediatric orthopaedic surgeons (TT and SS). The Catterall, Salter-Thompson, and Herring lateral pillar classifications were compared, and the femoral head coverage (FHC), center-edge angle (CE-angle), and articulo-trochanteric distance (ATD) were measured in the affected and normal hips. On the primary evaluation, the lateral pillar and Salter-Thompson classifications had a higher level of agreement among the observers than the Catterall classification, but none of the classifications showed good agreement (weighted kappa values between O and SS of 0.56, 0.54, and 0.49, respectively). Combining Catterall groups 1 and 2 into one group, and groups 3 and 4 into another, resulted in better agreement (kappa 0.55) than the original 4-group system. Agreement was also better (kappa 0.62-0.70) between experienced examiners than between less experienced ones, for all classifications. The femoral head coverage was a more reliable and accurate measure than the CE-angle for quantifying the acetabular covering of the femoral head, as indicated by higher intraclass correlation coefficients (ICC) and smaller inter-observer differences. The ATD showed good agreement in all comparisons and had low inter-observer differences. We conclude that all classifications of femoral head involvement are adequate in clinical work if the radiographic assessment is done by experienced examiners. When examiners are less experienced, a 2-group classification or the lateral pillar classification is more reliable. For evaluation of containment of the femoral head, FHC is more appropriate than the CE-angle.

  2. A system for automatic artifact removal in ictal scalp EEG based on independent component analysis and Bayesian classification.

    PubMed

    LeVan, P; Urrestarazu, E; Gotman, J

    2006-04-01

    To devise an automated system to remove artifacts from ictal scalp EEG, using independent component analysis (ICA). A Bayesian classifier was used to determine the probability that 2-s epochs of seizure segments decomposed by ICA represented EEG activity, as opposed to artifact. The classifier was trained using numerous statistical, spectral, and spatial features. The system's performance was then assessed using separate validation data. The classifier identified epochs representing EEG activity in the validation dataset with a sensitivity of 82.4% and a specificity of 83.3%. An ICA component was considered to represent EEG activity if the sum of the probabilities that its epochs represented EEG exceeded a threshold predetermined using the training data; otherwise, the component was taken to represent artifact. Using this threshold on the validation set, EEG components were identified with a sensitivity of 87.6% and a specificity of 70.2%. Most misclassified components were a mixture of EEG and artifactual activity. The automated system successfully rejected a good proportion of the artifactual components extracted by ICA, while preserving almost all EEG components. The misclassification rate was comparable to the variability observed in human classification. Current ICA methods of artifact removal require a tedious visual classification of the components; the proposed system automates this process and simultaneously removes multiple types of artifacts.
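The decompose-then-classify pipeline above can be sketched on toy signals. Everything here is invented: two simulated sources stand in for EEG and artifact, and a simple kurtosis threshold replaces the paper's Bayesian classifier over statistical, spectral, and spatial features.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
t = np.linspace(0, 2, 1000)

# Two sources: an oscillatory "EEG-like" signal and a spiky "artifact-like" signal
eeg = np.sin(2 * np.pi * 10 * t)
artifact = np.zeros_like(t)
artifact[rng.choice(t.size, 10, replace=False)] = 8.0

# Linear mixing to two "channels", then ICA unmixing
S = np.c_[eeg, artifact]
A = np.array([[1.0, 0.5], [0.4, 1.0]])
X = S @ A.T
components = FastICA(n_components=2, random_state=0).fit_transform(X)

# Stand-in for the Bayesian classifier: flag a component as artifact when its
# kurtosis is high (a spiky, heavy-tailed distribution); kurtosis is scale-invariant.
is_artifact = [kurtosis(c) > 5 for c in components.T]
print(is_artifact)
```

Exactly one unmixed component is flagged as artifact; the sinusoidal component (negative excess kurtosis) is kept.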

  3. Caracterisation des occupations du sol en milieu urbain par imagerie radar [Characterization of Urban Land Use with Radar Imagery]

    NASA Astrophysics Data System (ADS)

    Codjia, Claude

    This study aims to test the relevance of medium- and high-resolution SAR images for characterizing land-use types in urban areas. To this end, we relied on textural approaches based on second-order statistics; specifically, we looked for the texture parameters most relevant for discriminating urban objects. We used Radarsat-1 in fine mode with HH polarization and Radarsat-2 in fine mode with dual and quad polarization and in ultrafine mode with HH polarization. The land-use classes sought were dense building, medium-density building, low-density building, industrial and institutional buildings, low-density vegetation, dense vegetation, and water. We selected nine texture parameters for analysis and first grouped them into families according to their mathematical definitions. The similarity/dissimilarity parameters comprise Homogeneity, Contrast, the Inverse Difference Moment, and Dissimilarity; the disorder parameters are Entropy and the Angular Second Moment; the Standard Deviation and Correlation are the dispersion parameters; and the Mean forms a family of its own. Experience shows that certain combinations of texture parameters from different families yield good classification results, while others produce kappa values of very little interest. Furthermore, although using several texture parameters improves the classifications, performance plateaus beyond three parameters. The correlations computed between the textures and their principal axes confirm these results. Despite the good performance of this approach based on the complementarity of texture parameters, systematic errors due to cardinal effects remain in the classifications. To overcome this problem, a radiometric compensation model was developed based on the radar cross-section (RCS). A radar simulation from the digital surface model of the environment allowed us to extract the building backscatter zones and to analyze the related backscatter.
    Thus, we were able to devise a strategy for compensating cardinal effects based solely on the responses of objects according to their orientation relative to the plane of illumination of the radar beam. A compensation algorithm based on the radar cross-section proved appropriate. Some examples of the application of this algorithm to HH-polarized RADARSAT-2 images are presented as well. Applying this algorithm enables considerable gains in certain forms of automation (classification and segmentation) of radar imagery, thus producing a higher level of quality for visual interpretation. Applied to RADARSAT-1 and RADARSAT-2 images with HH, HV, VH, and VV polarizations, the algorithm helped achieve considerable gains and eliminated most of the classification errors due to cardinal effects.
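The second-order texture parameters discussed above derive from a gray-level co-occurrence matrix (GLCM). A minimal numpy-only sketch, for a single pixel offset and with invented toy patches (real SAR texture analysis would average over several offsets and moving windows):

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Compute a normalized gray-level co-occurrence matrix for one offset
    and derive five of the texture parameters named in the abstract."""
    q = (img * levels / (img.max() + 1e-9)).astype(int).clip(0, levels - 1)
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            glcm[q[i, j], q[i + dy, j + dx]] += 1
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    return {
        "contrast": np.sum(p * (i - j) ** 2),
        "dissimilarity": np.sum(p * np.abs(i - j)),
        "homogeneity": np.sum(p / (1 + (i - j) ** 2)),    # inverse difference moment
        "asm": np.sum(p ** 2),                            # angular second moment
        "entropy": -np.sum(p[p > 0] * np.log2(p[p > 0])),
    }

smooth = np.ones((32, 32))                         # uniform patch: no contrast, maximal ASM
noisy = np.random.default_rng(3).random((32, 32))  # speckle-like patch
print(glcm_features(smooth)["contrast"], glcm_features(noisy)["contrast"])
```

A uniform patch yields zero contrast and entropy with ASM of 1, while the speckle-like patch spreads co-occurrence mass across the matrix.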

  4. Diagnosis of streamflow prediction skills in Oregon using Hydrologic Landscape Classification

    EPA Science Inventory

    A complete understanding of why rainfall-runoff models provide good streamflow predictions at catchments in some regions, but fail to do so in other regions, has still not been achieved. Here, we argue that a hydrologic classification system is a robust conceptual tool that is w...

  5. Where and why do models fail? Perspectives from Oregon Hydrologic Landscape classification

    EPA Science Inventory

    A complete understanding of why rainfall-runoff models provide good streamflow predictions at catchments in some regions, but fail to do so in other regions, has still not been achieved. Here, we argue that a hydrologic classification system is a robust conceptual tool that is w...

  6. Analysis of vehicle classification and truck weight data of the New England States : is data sharing a good idea?

    DOT National Transportation Integrated Search

    1998-01-01

    This paper presents a statistical research analysis of 1995-96 classification and weigh-in-motion (WIM) data from seventeen continuous traffic-monitoring sites in New England. Data screening is discussed briefly, and a cusum data quality control ...

  7. 26 CFR 1.471-2 - Valuation of inventories.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... because of damage, imperfections, shop wear, changes of style, odd or broken lots, or other similar causes... classifications indicated above, and he shall maintain such records of the disposition of the goods as will enable... taxpayer. (6) Segregating indirect production costs into fixed and variable production cost classifications...

  8. 26 CFR 1.471-2 - Valuation of inventories.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... because of damage, imperfections, shop wear, changes of style, odd or broken lots, or other similar causes... classifications indicated above, and he shall maintain such records of the disposition of the goods as will enable... taxpayer. (6) Segregating indirect production costs into fixed and variable production cost classifications...

  9. 26 CFR 1.471-2 - Valuation of inventories.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... because of damage, imperfections, shop wear, changes of style, odd or broken lots, or other similar causes... classifications indicated above, and he shall maintain such records of the disposition of the goods as will enable... taxpayer. (6) Segregating indirect production costs into fixed and variable production cost classifications...

  10. 26 CFR 1.471-2 - Valuation of inventories.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... because of damage, imperfections, shop wear, changes of style, odd or broken lots, or other similar causes... classifications indicated above, and he shall maintain such records of the disposition of the goods as will enable... taxpayer. (6) Segregating indirect production costs into fixed and variable production cost classifications...

  11. Fuzzy ontologies for semantic interpretation of remotely sensed images

    NASA Astrophysics Data System (ADS)

    Djerriri, Khelifa; Malki, Mimoun

    2015-10-01

    Object-based image classification consists in assigning objects that share similar attributes to object categories. To perform such a task, the remote sensing expert uses his or her personal knowledge, which is rarely formalized. Ontologies have been proposed as a solution for representing domain knowledge agreed upon by domain experts in a formal, machine-readable language. Classical ontology languages are not appropriate for dealing with imprecision or vagueness in knowledge. Fortunately, Description Logics for the Semantic Web have been enhanced by various approaches to handle such knowledge. This paper presents the extension of traditional ontology-based interpretation with a fuzzy ontology of the main land-cover classes in Landsat-8 OLI scenes (vegetation, built-up areas, water bodies, shadow, clouds, and forests). A good classification of image objects was obtained, and the results highlight the potential of the method to be replicated over time and space with a view to the transferability of the procedure.

  12. Foveation: an alternative method to simultaneously preserve privacy and information in face images

    NASA Astrophysics Data System (ADS)

    Alonso, Víctor E.; Enríquez-Caldera, Rogerio; Sucar, Luis Enrique

    2017-03-01

    This paper presents a real-time foveation technique proposed as an alternative method for image obfuscation that simultaneously preserves privacy in face de-identification. The relevance of the proposed technique is discussed through a comparative study of the most common distortion methods for face images and an assessment of the performance and effectiveness of privacy protection. All the techniques presented here are evaluated by passing them through face recognition software. Data utility preservation was evaluated under gender and facial expression classification. Results quantifying the tradeoff between privacy protection and image information preservation at different obfuscation levels are presented. Comparative results using the facial expression subset of the FERET database show that the technique achieves a good tradeoff between privacy and awareness, with a 30% recognition rate and a classification accuracy as high as 88% obtained from the common figures of merit using the privacy-awareness map.

  13. On the influence of high-pass filtering on ICA-based artifact reduction in EEG-ERP.

    PubMed

    Winkler, Irene; Debener, Stefan; Müller, Klaus-Robert; Tangermann, Michael

    2015-01-01

    Standard artifact removal methods for electroencephalographic (EEG) signals are either based on Independent Component Analysis (ICA) or regress out ocular activity measured at electrooculogram (EOG) channels. Successful ICA-based artifact reduction relies on suitable pre-processing. Here we systematically evaluate the effects of high-pass filtering at different frequencies. Offline analyses were based on event-related potential data from 21 participants performing a standard auditory oddball task, using an automatic classifier of artifactual components (MARA). As a pre-processing step for ICA, high-pass filtering between 1 and 2 Hz consistently produced good results in terms of signal-to-noise ratio (SNR), single-trial classification accuracy, and the percentage of 'near-dipolar' ICA components. Relative to no artifact reduction, ICA-based artifact removal significantly improved SNR and classification accuracy. This was not the case for a regression-based approach to removing EOG artifacts.
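The high-pass pre-processing step evaluated above can be sketched with a zero-phase Butterworth filter; the sampling rate, drift frequency, and signal amplitudes below are assumptions for illustration, with the cutoff in the 1-2 Hz range the study found effective.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0  # Hz, an assumed EEG sampling rate

def highpass(data, cutoff=1.0, order=4):
    """Zero-phase Butterworth high-pass filter (cutoff in Hz)."""
    b, a = butter(order, cutoff / (fs / 2), btype="highpass")
    return filtfilt(b, a, data)

t = np.arange(0, 10, 1 / fs)
drift = 50 * np.sin(2 * np.pi * 0.1 * t)   # large slow drift, e.g. electrode artifact
alpha = np.sin(2 * np.pi * 10 * t)         # 10 Hz "alpha-band" activity
filtered = highpass(drift + alpha)
print(np.std(filtered))
```

The 0.1 Hz drift is attenuated by orders of magnitude while the 10 Hz component passes essentially unchanged, which is what makes the subsequent ICA decomposition better behaved.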

  14. Heuristic pattern correction scheme using adaptively trained generalized regression neural networks.

    PubMed

    Hoya, T; Chambers, J A

    2001-01-01

    In many pattern classification problems, an intelligent neural system is required that can incrementally learn newly encountered but misclassified patterns, while maintaining good classification performance on the past patterns stored in the network. In this paper, a heuristic pattern correction scheme is proposed using adaptively trained generalized regression neural networks (GRNNs). The scheme is based upon both a network growing and a dual-stage shrinking mechanism. In the network growing phase, a subset of the misclassified patterns in each incoming data set is iteratively added into the network until all the patterns in the incoming data set are classified correctly. The redundancy introduced in the growing phase is then removed in the dual-stage network shrinking. Both long- and short-term memory models, motivated by biological studies of the brain, are considered in the network shrinking. The learning capability of the proposed scheme is investigated through extensive simulation studies.
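A minimal sketch of the GRNN at the core of the scheme above, in its Nadaraya-Watson form: every stored pattern becomes a hidden unit, and a prediction is the kernel-weighted average of stored targets. The data, bandwidth, and `grow` method are invented for illustration; the dual-stage shrinking and memory models are omitted.

```python
import numpy as np

class GRNN:
    """Minimal generalized regression neural network: stored patterns act as
    hidden units; 'growing' simply stores more (e.g. misclassified) patterns."""
    def __init__(self, sigma=0.1):
        self.sigma, self.X, self.y = sigma, None, None

    def grow(self, X_new, y_new):
        # incrementally add patterns to the network
        self.X = X_new if self.X is None else np.vstack([self.X, X_new])
        self.y = y_new if self.y is None else np.concatenate([self.y, y_new])

    def predict(self, X):
        d2 = ((X[:, None, :] - self.X[None, :, :]) ** 2).sum(axis=-1)
        w = np.exp(-d2 / (2 * self.sigma ** 2))   # Gaussian kernel weights
        return w @ self.y / w.sum(axis=1)

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, (40, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # a simple binary target

net = GRNN(sigma=0.1)
net.grow(X, y)
acc = np.mean((net.predict(X) > 0.5) == y)
print(acc)
```

Because each stored pattern weights itself most strongly, the network reproduces its stored patterns almost perfectly, which is the property the growing phase exploits.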

  15. Automated Detection of Diabetic Retinopathy using Deep Learning.

    PubMed

    Lam, Carson; Yi, Darvin; Guo, Margaret; Lindsey, Tony

    2018-01-01

    Diabetic retinopathy is a leading cause of blindness among working-age adults. Early detection of this condition is critical for a good prognosis. In this paper, we demonstrate the use of convolutional neural networks (CNNs) on color fundus images for the recognition task of diabetic retinopathy staging. Our network models achieved test metric performance comparable to baseline literature results, with a validation sensitivity of 95%. We additionally explored multinomial classification models, and demonstrate that errors primarily occur in the misclassification of mild disease as normal, due to the CNN's inability to detect subtle disease features. We discovered that preprocessing with contrast-limited adaptive histogram equalization and ensuring dataset fidelity by expert verification of class labels improve recognition of subtle features. Transfer learning on GoogLeNet and AlexNet models pretrained on ImageNet improved peak test set accuracies to 74.5%, 68.8%, and 57.2% on 2-ary, 3-ary, and 4-ary classification models, respectively.

  16. Classification With Truncated Distance Kernel.

    PubMed

    Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas

    2018-05-01

    This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but linear in each subregion. With this kernel, the subregion structure can be trained using all the training data, and local linear classifiers can be established simultaneously. The TL1 kernel adapts well to nonlinearity and is suitable for problems that require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be used directly in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pre-given parameter achieves similar or better performance than the radial basis function kernel with its parameter tuned by cross-validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
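The "replace the kernel evaluation" idea above can be sketched by plugging a truncated-L1-distance kernel, K(x, z) = max(ρ − ‖x − z‖₁, 0), into a standard SVM as a callable. The toy data and the ρ value are assumptions made for this sketch.

```python
import numpy as np
from sklearn.svm import SVC

def tl1_kernel(X, Z, rho=2.0):
    """Truncated L1-distance kernel: piecewise linear, compactly supported,
    and (as the brief notes) not positive semidefinite."""
    d1 = np.abs(X[:, None, :] - Z[None, :, :]).sum(axis=-1)  # pairwise L1 distances
    return np.maximum(rho - d1, 0.0)

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(-1, 0.3, (30, 2)), rng.normal(1, 0.3, (30, 2))])
y = np.r_[np.zeros(30), np.ones(30)]

# scikit-learn accepts a callable kernel returning the Gram matrix
clf = SVC(kernel=tl1_kernel).fit(X, y)
print(clf.score(X, y))
```

On these well-separated clusters the solver converges despite the indefiniteness, since within-cluster kernel values are positive and cross-cluster values are truncated to zero.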

  17. Clinically orientated classification incorporating shoulder balance for the surgical treatment of adolescent idiopathic scoliosis.

    PubMed

    Elsebaie, H B; Dannawi, Z; Altaf, F; Zaidan, A; Al Mukhtar, M; Shaw, M J; Gibson, A; Noordeen, H

    2016-02-01

    The achievement of shoulder balance is an important measure of successful scoliosis surgery, yet no previously described classification system has taken shoulder balance into account. We propose a simple classification system for AIS based on two components: the curve type and the shoulder level. Three curve types are defined according to the size and location of the curves; each curve pattern is subdivided into type A or B depending on the shoulder level. A retrospective analysis of the radiographs of 232 consecutive AIS patients treated surgically between 2005 and 2009 was also performed. Three major types and six subtypes were identified: type I accounted for 30 %, type II for 28 %, and type III for 42 %. The retrospective analysis showed that three patients developed a decompensation requiring extension of the fusion, and one case developed worsening shoulder balance requiring further surgery. The classification was tested for interobserver reproducibility and intraobserver reliability: the mean kappa coefficients for interobserver reproducibility ranged from 0.89 to 0.952, while the mean kappa value for intraobserver reliability was 0.964, indicating good-to-excellent reliability. The treatment algorithm guides the spinal surgeon to achieve optimal curve correction and postoperative shoulder balance while fusing the smallest number of spinal segments. The high interobserver reproducibility and intraobserver reliability make it an invaluable tool for describing scoliosis curves in everyday clinical practice.

  18. An ensemble predictive modeling framework for breast cancer classification.

    PubMed

    Nagarajan, Radhakrishnan; Upreti, Meenakshi

    2017-12-01

    Molecular changes often precede the clinical presentation of diseases and can be useful surrogates with the potential to assist in informed clinical decision making. Recent studies have demonstrated the usefulness of modeling approaches such as classification that can predict clinical outcomes from molecular expression profiles. While useful, the majority of these approaches implicitly use all molecular markers as features in the classification process, often resulting in a sparse, high-dimensional projection of the samples with dimensionality comparable to the sample size. In this study, a variant of the recently proposed ensemble classification approach is used for predicting good- and poor-prognosis breast cancer samples from their molecular expression profiles. In contrast to traditional single and ensemble classifiers, the proposed approach uses multiple base classifiers with varying feature sets obtained from a two-dimensional projection of the samples, in conjunction with a majority-voting strategy for predicting the class labels. In contrast to our earlier implementation, base classifiers in the ensembles are chosen for maximal sensitivity and minimal redundancy by retaining only those with low average cosine distance. The resulting ensemble sets are subsequently modeled as undirected graphs. The performance of four different classification algorithms is shown to be better within the proposed ensemble framework than when they are used as traditional single-classifier systems. The significance of a subset of genes with high degree centrality in the network abstractions across the poor-prognosis samples is also discussed. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Comparative Performance Analysis of Support Vector Machine, Random Forest, Logistic Regression and k-Nearest Neighbours in Rainbow Trout (Oncorhynchus Mykiss) Classification Using Image-Based Features

    PubMed Central

    Saberioon, Mohammadmehdi; Císař, Petr; Labbé, Laurent; Souček, Pavel; Pelissier, Pablo; Kerneis, Thierry

    2018-01-01

    The main aim of this study was to develop a new objective method for evaluating the impacts of different diets on live fish skin using image-based features. In total, one hundred and sixty rainbow trout (Oncorhynchus mykiss) were fed either a fish-meal-based diet (80 fish) or a 100% plant-based diet (80 fish) and photographed using a consumer-grade digital camera. Twenty-three colour features and four texture features were extracted. Four different classification methods were used to evaluate fish diets: Random forest (RF), Support vector machine (SVM), Logistic regression (LR), and k-Nearest neighbours (k-NN). The SVM with a radial basis kernel provided the best classifier, with a correct classification rate (CCR) of 82% and a Kappa coefficient of 0.65. Although both the LR and RF methods were less accurate than SVM, they achieved good classification, with CCRs of 75% and 70%, respectively. The k-NN was the least accurate classification model (40%). Overall, it can be concluded that consumer-grade digital cameras can be employed as a fast, accurate, and non-invasive sensor for classifying rainbow trout based on their diets. Furthermore, there was a close association between the image-based features and the diet the fish received during cultivation. These procedures can be used as non-invasive, accurate, and precise approaches for monitoring fish status during cultivation by evaluating the diet's effects on fish skin. PMID:29596375

  20. Comparative Performance Analysis of Support Vector Machine, Random Forest, Logistic Regression and k-Nearest Neighbours in Rainbow Trout (Oncorhynchus Mykiss) Classification Using Image-Based Features.

    PubMed

    Saberioon, Mohammadmehdi; Císař, Petr; Labbé, Laurent; Souček, Pavel; Pelissier, Pablo; Kerneis, Thierry

    2018-03-29

    The main aim of this study was to develop a new objective method for evaluating the impacts of different diets on live fish skin using image-based features. In total, one hundred and sixty rainbow trout (Oncorhynchus mykiss) were fed either a fish-meal-based diet (80 fish) or a 100% plant-based diet (80 fish) and photographed using a consumer-grade digital camera. Twenty-three colour features and four texture features were extracted. Four different classification methods were used to evaluate fish diets: Random forest (RF), Support vector machine (SVM), Logistic regression (LR), and k-Nearest neighbours (k-NN). The SVM with a radial basis kernel provided the best classifier, with a correct classification rate (CCR) of 82% and a Kappa coefficient of 0.65. Although both the LR and RF methods were less accurate than SVM, they achieved good classification, with CCRs of 75% and 70%, respectively. The k-NN was the least accurate classification model (40%). Overall, it can be concluded that consumer-grade digital cameras can be employed as a fast, accurate, and non-invasive sensor for classifying rainbow trout based on their diets. Furthermore, there was a close association between the image-based features and the diet the fish received during cultivation. These procedures can be used as non-invasive, accurate, and precise approaches for monitoring fish status during cultivation by evaluating the diet's effects on fish skin.
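The four-classifier comparison above can be sketched on synthetic features. The "27 image features for 160 fish in two diet groups" below are invented stand-ins for the study's colour and texture features; cross-validated accuracy plays the role of the CCR.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)

# Hypothetical features: 23 colour + 4 texture features for 80 fish per diet;
# the diet shifts a handful of colour features.
n_per_group, n_features = 80, 27
fish_meal = rng.normal(0.0, 1.0, (n_per_group, n_features))
plant_based = rng.normal(0.0, 1.0, (n_per_group, n_features))
plant_based[:, :5] += 1.5
X = np.vstack([fish_meal, plant_based])
y = np.r_[np.zeros(n_per_group), np.ones(n_per_group)]

models = {
    "SVM (RBF)": SVC(kernel="rbf"),
    "Random forest": RandomForestClassifier(random_state=0),
    "Logistic regression": LogisticRegression(max_iter=1000),
    "k-NN": KNeighborsClassifier(),
}
results = {}
for name, model in models.items():
    results[name] = cross_val_score(model, X, y, cv=5).mean()  # mean CCR
    print(f"{name}: CCR = {results[name]:.2f}")
```

On real image features the ranking can differ, as the study's SVM-first, k-NN-last ordering shows; the point of the sketch is the shared evaluation protocol.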

  1. A framework for evaluating complex networks measurements

    NASA Astrophysics Data System (ADS)

    Comin, Cesar H.; Silva, Filipi N.; Costa, Luciano da F.

    2015-06-01

    A good deal of current research in complex networks involves the characterization and/or classification of the topological properties of given structures, which has motivated the development of numerous measurements. This letter proposes a framework for evaluating the quality of complex-network measurements in terms of their effective resolution, degree of degeneracy, and discriminability. The potential of the suggested approach is illustrated by comparing the characterization of several model and real-world networks using concentric and symmetry measurements. The results indicate a markedly superior performance for the latter type of mapping.

  2. Classification of EEG abnormalities in partial epilepsy with simultaneous EEG-fMRI recordings.

    PubMed

    Pedreira, C; Vaudano, A E; Thornton, R C; Chaudhary, U J; Vulliemoz, S; Laufs, H; Rodionov, R; Carmichael, D W; Lhatoo, S D; Guye, M; Quian Quiroga, R; Lemieux, L

    2014-10-01

    Scalp EEG recordings and the classification of interictal epileptiform discharges (IED) in patients with epilepsy provide valuable information about the epileptogenic network, particularly by defining the boundaries of the "irritative zone" (IZ), and hence are helpful during the pre-surgical evaluation of patients with severe refractory epilepsies. Current detection and classification of epileptiform signals essentially rely on expert observers. This is a very time-consuming procedure, which also leads to inter-observer variability. Here, we propose a novel approach to automatically classify epileptic activity and show how this method provides critical and reliable information on IZ localization beyond that provided by previous approaches. We applied Wave_clus, an automatic spike-sorting algorithm, to the classification of IED visually identified from pre-surgical simultaneous electroencephalogram-functional magnetic resonance imaging (EEG-fMRI) recordings in 8 patients with refractory partial epilepsy who were candidates for surgery. For each patient, two fMRI analyses were performed: one based on the visual classification and one based on the algorithmic sorting. This novel approach successfully identified a total of 29 IED classes (compared to 26 for visual identification). The general concordance between the methods was good: a full match of EEG patterns in 2 cases, additional EEG information in 2 other cases and, in general, coverage of EEG patterns from the same areas as the expert classification in 7 of the 8 cases. Most notably, evaluation of the method with EEG-fMRI data analysis showed hemodynamic maps related to the majority of IED classes, an improved performance over the visual IED classification-based analysis (72% versus 50%). 
    Furthermore, the IED-related BOLD changes revealed by the algorithm were localized within the presumed IZ for a larger number of IED classes (9) and in a greater number of patients than with the expert classification (7 and 5, respectively). In only one case did the new algorithm result in fewer classes and activation areas. We propose that the use of automated spike-sorting algorithms to classify IED provides an efficient tool for mapping IED-related fMRI changes and increases the clinical value of EEG-fMRI for the pre-surgical assessment of patients with severe epilepsy. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. An Improved Ensemble of Random Vector Functional Link Networks Based on Particle Swarm Optimization with Double Optimization Strategy

    PubMed Central

    Ling, Qing-Hua; Song, Yu-Qing; Han, Fei; Yang, Dan; Huang, De-Shuang

    2016-01-01

    For ensemble learning, how to select and how to combine the candidate classifiers are two key issues that dramatically influence the performance of the ensemble system. The random vector functional link network (RVFL) without direct input-to-output links is a suitable base classifier for ensemble systems because of its fast learning speed, simple structure, and good generalization performance. In this paper, to obtain a more compact ensemble system with improved convergence performance, an improved ensemble of RVFLs based on attractive and repulsive particle swarm optimization (ARPSO) with a double optimization strategy is proposed. In the proposed method, ARPSO is applied to select and combine the candidate RVFLs. When using ARPSO to select the optimal base RVFLs, both the convergence accuracy on the validation data and the diversity of the candidate ensemble system are considered in building the RVFL ensembles. In the process of combining the RVFLs, the ensemble weights corresponding to the base RVFLs are initialized by the minimum-norm least-squares method and then further optimized by ARPSO. Finally, a few redundant RVFLs are pruned, yielding a more compact ensemble. Moreover, a theoretical analysis and justification of how to prune the base classifiers for classification problems is presented, and a simple, practically feasible strategy for pruning redundant base classifiers for both classification and regression problems is proposed. Since the double optimization is performed on the basis of the single optimization, the ensemble of RVFLs built by the proposed method outperforms those built by some single-optimization methods. Experimental results on function approximation and classification problems verify that the proposed method can improve convergence accuracy as well as reduce the complexity of the ensemble system. PMID:27835638

  5. Effects of gross motor function and manual function levels on performance-based ADL motor skills of children with spastic cerebral palsy.

    PubMed

    Park, Myoung-Ok

    2017-02-01

    [Purpose] The purpose of this study was to determine the effects of Gross Motor Function Classification System and Manual Ability Classification System levels on the performance-based motor skills of children with spastic cerebral palsy. [Subjects and Methods] Twenty-three children with cerebral palsy were included. The Assessment of Motor and Process Skills was used to evaluate performance-based motor skills in daily life. Gross motor function was assessed using the Gross Motor Function Classification System, and manual function was measured using the Manual Ability Classification System. [Results] Motor skills in daily activities differed significantly by Gross Motor Function Classification System level and Manual Ability Classification System level. According to the results of multiple regression analysis, children categorized as Gross Motor Function Classification System level III scored lower on performance-based motor skills than level I children. Likewise, when analyzed by Manual Ability Classification System level, level II scored lower than level I, and level III lower than level II. [Conclusion] These results indicate that performance-based motor skills differ among children with cerebral palsy categorized by Gross Motor Function Classification System and Manual Ability Classification System levels.

  6. Microaneurysm detection with radon transform-based classification on retina images.

    PubMed

    Giancardo, L; Meriaudeau, F; Karnowski, T P; Li, Y; Tobin, K W; Chaum, E

    2011-01-01

    The creation of automatic diabetic retinopathy screening systems using retina cameras is currently receiving considerable interest in the medical imaging community, and the detection of microaneurysms is a key element in this effort. In this work, we propose a new microaneurysm segmentation technique based on a novel application of the Radon transform, which is able to identify these lesions without any prior knowledge of the retina's morphological features and with minimal image preprocessing. The algorithm has been evaluated on the Retinopathy Online Challenge public dataset, and its performance is comparable to that of the best current techniques. The performance is particularly good at low false positive ratios, which makes it an ideal candidate for diabetic retinopathy screening systems.

  7. Classification of skin cancer images using local binary pattern and SVM classifier

    NASA Astrophysics Data System (ADS)

    Adjed, Faouzi; Faye, Ibrahima; Ababsa, Fakhreddine; Gardezi, Syed Jamal; Dass, Sarat Chandra

    2016-11-01

    In this paper, a classification method for melanoma and non-melanoma skin cancer images is presented using local binary patterns (LBP). The LBP operator computes local texture information from the skin cancer images, which is then used to compute statistical features capable of discriminating melanoma from non-melanoma skin tissue. A support vector machine (SVM) is applied to the feature matrix for classification into two skin image classes (malignant and benign). The method achieves a good classification accuracy of 76.1%, with a sensitivity of 75.6% and a specificity of 76.7%.
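    As a rough illustration of the texture descriptor involved, the sketch below computes basic 8-neighbor LBP codes and their histogram in plain Python. It is a simplified stand-in, not the paper's implementation (which derives further statistics from such texture information before the SVM):

    ```python
    def lbp_8(img, r, c):
        """Basic 8-neighbor local binary pattern code for pixel (r, c):
        each neighbor >= center contributes one bit."""
        center = img[r][c]
        # clockwise from top-left
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        code = 0
        for bit, (dr, dc) in enumerate(offsets):
            if img[r + dr][c + dc] >= center:
                code |= 1 << bit
        return code

    def lbp_histogram(img):
        """256-bin LBP histogram over interior pixels; histograms like
        this are typical texture features fed to a classifier."""
        hist = [0] * 256
        for r in range(1, len(img) - 1):
            for c in range(1, len(img[0]) - 1):
                hist[lbp_8(img, r, c)] += 1
        return hist

    # Toy 4x4 grayscale patch: a bright 2x2 square on a dark background
    patch = [[10, 10, 10, 10],
             [10, 50, 50, 10],
             [10, 50, 50, 10],
             [10, 10, 10, 10]]
    hist = lbp_histogram(patch)
    ```

    Each interior pixel contributes exactly one of 256 possible codes, so the histogram summarizes the local texture of the whole patch.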

  8. Key-phrase based classification of public health web pages.

    PubMed

    Dolamic, Ljiljana; Boyer, Célia

    2013-01-01

    This paper describes and evaluates a public health web page classification model based on key-phrase extraction and matching. Easily extensible both to new classes and to new languages, this method proves to be a good solution for text classification in the face of a total lack of training data. To evaluate the proposed solution we used a small collection of public-health-related web pages created by double-blind manual classification. Our experiments have shown that by choosing an adequate threshold value, the desired value for either precision or recall can be achieved.
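    The key-phrase matching idea can be sketched as a simple per-class scorer. The class names and phrase lists below are hypothetical, and the threshold plays the precision/recall trade-off role described in the abstract (a higher threshold means fewer but more confident labels):

    ```python
    def classify(text, key_phrases, threshold=2):
        """Score each class by how many of its key phrases occur in the
        text; return the classes whose score meets the threshold."""
        text = text.lower()
        scores = {cls: sum(p in text for p in phrases)
                  for cls, phrases in key_phrases.items()}
        return [cls for cls, s in scores.items() if s >= threshold]

    # Hypothetical key-phrase lists for two public-health classes
    key_phrases = {
        "nutrition": ["diet", "vitamin", "calorie"],
        "vaccination": ["vaccine", "immunization", "booster"],
    }
    page = "A balanced diet rich in vitamin C supports the immune system."
    labels = classify(page, key_phrases)
    ```

    Because no training data is needed, extending the model to a new class or language only requires supplying a new phrase list.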

  9. Scattering features for lung cancer detection in fibered confocal fluorescence microscopy images.

    PubMed

    Rakotomamonjy, Alain; Petitjean, Caroline; Salaün, Mathieu; Thiberville, Luc

    2014-06-01

    To assess the feasibility of lung cancer diagnosis using the fibered confocal fluorescence microscopy (FCFM) imaging technique and scattering features for pattern recognition. FCFM is a new medical imaging technique whose value for diagnosis has yet to be established. This paper addresses the problem of lung cancer detection using FCFM images and, as a first contribution, assesses the feasibility of computer-aided diagnosis from these images. Toward this aim, we have built a pattern recognition scheme that involves a feature extraction stage and a classification stage. The second contribution relies on the features used for discrimination. Indeed, we have employed the so-called scattering transform for extracting discriminative features, which are robust to small deformations in the images. We have also compared and combined these features with classical yet powerful features such as local binary patterns (LBP) and their variants, local quinary patterns (LQP). We show that scattering features yielded better recognition performance than classical features such as LBP and their LQP variants for the FCFM image classification problems. Another finding is that LBP-based and scattering-based features provide complementary discriminative information and, in some situations, we empirically establish that performance can be improved when jointly using LBP, LQP and scattering features. In this work we analyze the joint capability of FCFM images and scattering features for lung cancer diagnosis. The proposed method achieves a good recognition rate for this diagnosis problem. It also performs well when used in conjunction with other features on other classical medical imaging classification problems.

  10. Structural parameterization and functional prediction of antigenic polypeptome sequences with biological activity through quantitative sequence-activity models (QSAM) by molecular electronegativity edge-distance vector (VMED).

    PubMed

    Li, ZhiLiang; Wu, ShiRong; Chen, ZeCong; Ye, Nancy; Yang, ShengXi; Liao, ChunYang; Zhang, MengJun; Yang, Li; Mei, Hu; Yang, Yan; Zhao, Na; Zhou, Yuan; Zhou, Ping; Xiong, Qing; Xu, Hong; Liu, ShuShen; Ling, ZiHua; Chen, Gang; Li, GenRong

    2007-10-01

    Based only on the primary structures of peptides, a new set of descriptors called the molecular electronegativity edge-distance vector (VMED) was proposed and applied to describing and characterizing the molecular structures of oligopeptides and polypeptides, drawing on the electronegativity of each atom or the electronic charge index (ECI) of atomic clusters and the bonding distance between atom pairs. Here, the molecular structures of antigenic polypeptides were expressed in this way in order to propose an automated technique for the computerized identification of helper T lymphocyte (Th) epitopes. Furthermore, a modified MED vector was proposed from the primary structures of polypeptides, based on the ECI and the relative bonding distance of the fundamental skeleton groups; the side chain of each amino acid was treated as a pseudo-atom. The developed VMED was easy to calculate and effective in practice. A quantitative model was established for 28 immunogenic or antigenic polypeptides (AGPP), with 14 (1-14) A(d) and 14 other restricted activities assigned as "1" (+) and "0" (-), respectively; the latter comprised 6 A(b) (15-20), 3 A(k) (21-23), 2 E(k) (24-26), and 2 H-2(k) (27 and 28) restricted sequences. Good results were obtained, with 90% correct classification (only 2 wrong among 20 training samples) and 100% correct prediction (none wrong among 8 testing samples); a contrasting model gave 100% correct classification (none wrong among 20 training samples) and 88% correct prediction (1 wrong among 8 testing samples). Both stochastic samplings and cross validations were performed and demonstrated good performance. The described method may also be suitable for estimating and predicting class I and class II human major histocompatibility complex (MHC) epitopes. It will be useful in the immune identification and recognition of proteins and genes and in the design and development of subunit vaccines.
    Several quantitative structure-activity relationship (QSAR) models were also developed for various oligopeptides and polypeptides, including 58 dipeptides and 31 pentapeptides with angiotensin-converting enzyme (ACE) inhibitory activity, by the multiple linear regression (MLR) method. To demonstrate its ability to characterize the molecular structure of polypeptides, a molecular modeling investigation was performed for the functional prediction of polypeptide sequences with antigenic activity and heptapeptide sequences with tachykinin activity through quantitative sequence-activity models (QSAMs) based on VMED. The results showed that VMED exhibited both excellent structural selectivity and good activity prediction. Moreover, VMED behaved quite well for both QSAR and QSAM of poly- and oligopeptides, exhibiting estimation ability and prediction power equal to or better than those reported previously. Finally, a preliminary conclusion was drawn: both the classical and the modified MED vectors are very useful structural descriptors. Some suggestions were proposed for further studies on QSAR/QSAM of proteins in various fields.

  11. The classification of anxiety and hysterical states. Part I. Historical review and empirical delineation.

    PubMed

    Sheehan, D V; Sheehan, K H

    1982-08-01

    The history of the classification of anxiety, hysterical, and hypochondriacal disorders is reviewed. Problems in the ability of current classification schemes to predict, control, and describe the relationship between the symptoms and other phenomena are outlined. Existing classification schemes failed the first test of a good classification model--that of providing categories that are mutually exclusive. The independence of these diagnostic categories from each other does not appear to hold up on empirical testing. In the absence of inherently mutually exclusive categories, further empirical investigation of these classes is obstructed since statistically valid analysis of the nominal data and any useful multivariate analysis would be difficult if not impossible. It is concluded that the existing classifications are unsatisfactory and require some fundamental reconceptualization.

  12. Application of different classification methods for litho-fluid facies prediction: a case study from the offshore Nile Delta

    NASA Astrophysics Data System (ADS)

    Aleardi, Mattia; Ciabarri, Fabio

    2017-10-01

    In this work we test four classification methods for litho-fluid facies identification in a clastic reservoir located in the offshore Nile Delta. The ultimate goal of this study is to find an optimal classification method for the area under examination. The geologic context of the investigated area allows us to consider three facies in the classification: shales, brine sands and gas sands. The depth at which the reservoir zone is located (2300-2700 m) produces a significant overlap of the P- and S-wave impedances of brine sands and gas sands that makes discrimination between these two litho-fluid classes particularly problematic. The classification is performed on the feature space defined by the elastic properties that are derived from recorded reflection seismic data by means of amplitude-versus-angle Bayesian inversion. As classification methods we test both deterministic and probabilistic approaches: quadratic discriminant analysis and neural network methods belong to the first group, whereas the standard Bayesian approach and a Bayesian approach that includes a 1D Markov-chain prior model to constrain the vertical continuity of litho-fluid facies belong to the second. The ability of each method to discriminate the different facies is evaluated both on synthetic seismic data (computed on the basis of available borehole information) and on field seismic data. The outcomes of each classification method are compared with the known facies profile derived from well log data, and the quality of the results is quantitatively evaluated using a confusion matrix. The results show that all methods return vertical facies profiles in which the main reservoir zone is correctly identified. However, considering as much prior information as possible in the classification process is the winning choice for deriving a reliable and physically plausible predicted facies profile.
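    The confusion matrix used for the quantitative evaluation is straightforward to reproduce. Below is a minimal sketch with made-up facies labels (the three classes match those named in the abstract; the label sequences are illustrative only):

    ```python
    def confusion_matrix(true, pred, classes):
        """Rows: true facies; columns: predicted facies."""
        index = {c: i for i, c in enumerate(classes)}
        m = [[0] * len(classes) for _ in classes]
        for t, p in zip(true, pred):
            m[index[t]][index[p]] += 1
        return m

    classes = ["shale", "brine sand", "gas sand"]
    true = ["shale", "shale", "brine sand", "gas sand", "gas sand", "brine sand"]
    pred = ["shale", "shale", "gas sand",   "gas sand", "gas sand", "brine sand"]

    m = confusion_matrix(true, pred, classes)
    # Overall accuracy is the trace divided by the number of samples
    accuracy = sum(m[i][i] for i in range(len(classes))) / len(true)
    ```

    Off-diagonal cells show exactly which facies are confused, e.g. the brine sand misclassified as gas sand above, mirroring the impedance overlap discussed in the abstract.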

  13. A comprehensive simulation study on classification of RNA-Seq data.

    PubMed

    Zararsız, Gökmen; Goksuluk, Dincer; Korkmaz, Selcuk; Eldem, Vahap; Zararsiz, Gozde Erturk; Duru, Izzet Parug; Ozturk, Ahmet

    2017-01-01

    RNA sequencing (RNA-Seq) is a powerful technique for the gene-expression profiling of organisms that uses the capabilities of next-generation sequencing technologies. Developing gene-expression-based classification algorithms is an emerging and powerful approach for diagnosis, disease classification and monitoring at the molecular level, as well as a source of potential disease markers. Most of the statistical methods proposed for the classification of gene-expression data are either based on a continuous scale (e.g., microarray data) or require a normal distribution assumption. Hence, these methods cannot be directly applied to RNA-Seq data, which violate both their data-structure and distributional assumptions. However, it is possible to apply these algorithms to RNA-Seq data with appropriate modifications. One way is to develop count-based classifiers, such as Poisson linear discriminant analysis (PLDA) and negative binomial linear discriminant analysis (NBLDA). Another way is to bring the data closer to microarrays and apply microarray-based classifiers. In this study, we compared several classifiers, including PLDA with and without power transformation, NBLDA, single SVM, bagging SVM (bagSVM), classification and regression trees (CART), and random forests (RF). We also examined the effect of several parameters, such as overdispersion, sample size, number of genes, number of classes, differential-expression rate, and the transformation method, on model performance. A comprehensive simulation study was conducted and the results were compared with the results of two miRNA and two mRNA experimental datasets. The results revealed that increasing the sample size and differential-expression rate and decreasing the dispersion parameter and number of groups lead to an increase in classification accuracy. As with differential-expression studies, the classification of RNA-Seq data requires careful attention when handling data overdispersion.
    We conclude that, as a count-based classifier, power-transformed PLDA and, as microarray-based classifiers, vst- or rlog-transformed RF and SVM may be good choices for classification. An R/BIOCONDUCTOR package, MLSeq, is freely available at https://www.bioconductor.org/packages/release/bioc/html/MLSeq.html.
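    As a toy illustration of "bringing counts closer to microarrays", the sketch below applies a simplified log-CPM (counts-per-million) transform to one sample's read counts. It is a stand-in for, not an implementation of, the vst/rlog transforms named in the abstract:

    ```python
    import math

    def log_cpm(counts):
        """Simplified microarray-style transform for one sample:
        normalize counts to counts-per-million, then take log2(x + 1)
        so that zero counts map to zero and variance is compressed."""
        total = sum(counts)
        return [math.log2(c / total * 1e6 + 1) for c in counts]

    # Hypothetical read counts for four genes in one library
    sample = [0, 10, 100, 1000]
    transformed = log_cpm(sample)
    ```

    The transform preserves the ordering of expression levels while shrinking the huge dynamic range of raw counts, which is what lets continuous-scale classifiers operate on the data.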

  14. Equivalent Diagnostic Classification Models

    ERIC Educational Resources Information Center

    Maris, Gunter; Bechger, Timo

    2009-01-01

    Rupp and Templin (2008) do a good job of describing the ever-expanding landscape of Diagnostic Classification Models (DCM). In many ways, their review article clearly points to some of the questions that need to be answered before DCMs can become part of the psychometric practitioner's toolkit. Apart from the issues mentioned in this article that…

  15. Forest site classification for cultural plant harvest by tribal weavers can inform management

    Treesearch

    S. Hummel; F.K. Lake

    2015-01-01

    Do qualitative classifications of ecological conditions for harvesting culturally important forest plants correspond to quantitative differences among sites? To address this question, we blended scientific ecological knowledge (SEK) and traditional ecological knowledge (TEK) to identify conditions on sites considered good, marginal, or poor for harvesting the leaves of a plant (...

  16. EEG-based driver fatigue detection using hybrid deep generic model.

    PubMed

    Phyo Phyo San; Sai Ho Ling; Rifai Chai; Tran, Yvonne; Craig, Ashley; Hung Nguyen

    2016-08-01

    Classification of electroencephalography (EEG)-based applications is an important process in biomedical engineering. Driver fatigue is a major cause of traffic accidents worldwide and has been considered a significant problem in recent decades. In this paper, a hybrid deep generic model (DGM)-based support vector machine is proposed for accurate detection of driver fatigue. Traditionally, a probabilistic DGM with a deep architecture is quite good at learning invariant features, but it is not always optimal for classification because its trainable parameters lie in the middle layers. Conversely, a support vector machine (SVM) cannot itself learn complicated invariances, but it produces good decision surfaces when applied to well-behaved features. Consolidating unsupervised high-level feature extraction (DGM) with SVM classification makes the integrated framework stronger, with the two components enhancing each other in feature extraction and classification. The experimental results showed that the proposed DGM-based driver fatigue monitoring system achieves a testing accuracy of 73.29%, with 91.10% sensitivity and 55.48% specificity. In short, the proposed hybrid DGM-based SVM is an effective method for the detection of driver fatigue from EEG.

  17. Machine learning methods can replace 3D profile method in classification of amyloidogenic hexapeptides.

    PubMed

    Stanislawski, Jerzy; Kotulska, Malgorzata; Unold, Olgierd

    2013-01-17

    Amyloids are proteins capable of forming fibrils. Many of them underlie serious diseases, such as Alzheimer disease, and the number of amyloid-associated diseases is constantly increasing. Recent studies indicate that amyloidogenic properties can be associated with short segments of amino acids, which transform the structure when exposed. A few hundred such peptides have been found experimentally. Experimental testing of all possible amino acid combinations is currently not feasible; instead, they can be predicted by computational methods. The 3D profile is a physicochemistry-based method that has generated the largest such dataset, ZipperDB. However, it is computationally very demanding. Here, we show that dataset generation can be accelerated. Two methods to increase the classification efficiency of amyloidogenic candidates are presented and tested: simplified 3D profile generation and machine learning methods. We generated a new dataset of hexapeptides using a more economical 3D profile algorithm, which showed very good classification overlap with ZipperDB (93.5%). The new part of our dataset contains 1779 segments, with 204 classified as amyloidogenic. The dataset of 6-residue sequences with their binary classification, based on the energy of the segment, was used to train machine learning methods, with a separate set of sequences from ZipperDB as the test set. The most effective methods were the Alternating Decision Tree and the Multilayer Perceptron. Both obtained an area under the ROC curve of 0.96, accuracy of 91%, a true positive rate of ca. 78%, and a true negative rate of 95%. A few other machine learning methods also achieved good performance. The computational time was reduced from 18-20 CPU-hours (full 3D profile) to 0.5 CPU-hours (simplified 3D profile) to seconds (machine learning). We showed that the simplified profile generation method does not introduce error relative to the original method, while increasing computational efficiency.
    Our new dataset proved representative enough to use simple statistical methods to test amyloidogenicity based only on the six-letter sequence. Statistical machine learning methods such as the Alternating Decision Tree and the Multilayer Perceptron can replace the energy-based classifier, with the advantage of very significantly reduced computational time and simpler analysis. Additionally, a decision tree provides a set of easily interpretable rules.

  18. Variation in the shape of the tibial insertion site of the anterior cruciate ligament: classification is required.

    PubMed

    Guenther, Daniel; Irarrázaval, Sebastian; Nishizawa, Yuichiro; Vernacchia, Cara; Thorhauer, Eric; Musahl, Volker; Irrgang, James J; Fu, Freddie H

    2017-08-01

    To propose a classification system for the shape of the tibial insertion site (TIS) of the anterior cruciate ligament (ACL) and to demonstrate the intra- and inter-rater agreement of this system. Owing to variation in shape and size, different surgical approaches may be feasible to improve reconstruction of the TIS. One hundred patients with a mean age of 26 ± 11 years were included. The ACL was cut arthroscopically at the base of the tibial insertion site. Arthroscopic images were taken from the lateral and medial portals. Images were de-identified and duplicated. Two blinded observers classified the tibial insertion site according to the classification system. The tibial insertion site was classified as type I (elliptical) in 51 knees (51%), type II (triangular) in 33 knees (33%), and type III (C-shaped) in 16 knees (16%). There was good agreement between raters when viewing the insertion site from the lateral portal (κ = 0.65) as well as from the medial portal (κ = 0.66). Intra-rater reliability was good to excellent. Agreement in the description of the insertion site between the medial and lateral portals was good for both raters (κ = 0.74 and 0.77, respectively). There is variation in the shape of the ACL TIS. The classification system is a repeatable and reliable tool for summarizing the shape of the TIS using three common patterns. Clinically, different shapes may require different types of reconstruction to ensure proper footprint restoration, and consideration of the individual TIS shape is required to prevent iatrogenic damage to adjacent structures such as the menisci. Level of evidence: III.
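    Inter-rater agreement of the kind reported here (κ = 0.65-0.66) is Cohen's kappa, which corrects raw agreement for agreement expected by chance. The sketch below uses hypothetical rater labels for the three TIS types:

    ```python
    def cohens_kappa(a, b):
        """Chance-corrected agreement between two raters' label lists:
        kappa = (observed agreement - expected agreement) / (1 - expected)."""
        n = len(a)
        labels = set(a) | set(b)
        p_obs = sum(x == y for x, y in zip(a, b)) / n
        p_exp = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
        return (p_obs - p_exp) / (1 - p_exp)

    # Hypothetical classifications of eight knees by two raters
    rater1 = ["I", "I", "II", "III", "II", "I", "III", "II"]
    rater2 = ["I", "I", "II", "III", "I",  "I", "III", "II"]
    kappa = cohens_kappa(rater1, rater2)
    ```

    Here the raters agree on 7 of 8 knees (raw agreement 0.875), but kappa is lower (about 0.81) because some of that agreement would occur by chance.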

  19. FINAL ECOSYSTEM GOODS AND SERVICES CLASSIFICATION SYSTEM (FEGS-CS)

    EPA Science Inventory

    This document defines and classifies 338 Final Ecosystem Goods and Services (FEGS), each defined and uniquely numbered by a combination of environmental class or sub-class and a beneficiary category or sub-category. The introductory section provides the rationale and conceptual ...

  20. Performance of a Machine Learning Classifier of Knee MRI Reports in Two Large Academic Radiology Practices: A Tool to Estimate Diagnostic Yield.

    PubMed

    Hassanpour, Saeed; Langlotz, Curtis P; Amrhein, Timothy J; Befera, Nicholas T; Lungren, Matthew P

    2017-04-01

    The purpose of this study is to evaluate the performance of a natural language processing (NLP) system in classifying a database of free-text knee MRI reports at two separate academic radiology practices. An NLP system that uses terms and patterns in manually classified narrative knee MRI reports was constructed. The NLP system was trained and tested on expert-classified knee MRI reports from two major health care organizations. Radiology reports in the training set were modeled as vectors, and a support vector machine framework was used to train the classifier. A separate test set from each organization was used to evaluate the performance of the system, both within and across organizations. Standard evaluation metrics, such as accuracy, precision, recall, and F1 score (the harmonic mean of precision and recall), and their respective 95% CIs were used to measure the efficacy of our classification system. The accuracy for radiology reports belonging to the model's clinically significant concept classes, after training on data from the same institution, was good, yielding an F1 score greater than 90% (95% CI, 84.6-97.3%). Performance of the classifier on cross-institutional application without institution-specific training data yielded F1 scores of 77.6% (95% CI, 69.5-85.7%) and 90.2% (95% CI, 84.5-95.9%) at the two organizations studied. The results show excellent accuracy by the NLP machine learning classifier in classifying free-text knee MRI reports, supporting the institution-independent reproducibility of knee MRI report classification. Furthermore, the machine learning classifier performed well on free-text knee MRI reports from another institution. These data support the feasibility of multi-institutional classification of radiologic imaging text reports with a single machine learning classifier without requiring institution-specific training data.
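    The evaluation metrics quoted (precision, recall, and their harmonic-mean F1) can be sketched directly. The report labels below are hypothetical, chosen only to exercise each cell of the binary contingency table:

    ```python
    def prf1(true, pred, positive):
        """Precision, recall, and F1 for one class of reports."""
        tp = sum(t == positive and p == positive for t, p in zip(true, pred))
        fp = sum(t != positive and p == positive for t, p in zip(true, pred))
        fn = sum(t == positive and p != positive for t, p in zip(true, pred))
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        f1 = 2 * precision * recall / (precision + recall)
        return precision, recall, f1

    # Hypothetical labels for six knee MRI reports
    true = ["acl_tear", "acl_tear", "normal", "normal", "acl_tear", "normal"]
    pred = ["acl_tear", "normal",   "normal", "acl_tear", "acl_tear", "normal"]
    p, r, f1 = prf1(true, pred, "acl_tear")
    ```

    Because F1 is a harmonic mean, it is dragged toward whichever of precision or recall is worse, which is why it is preferred over a simple average for imbalanced report classes.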

  1. Performance evaluation of MLP and RBF feed forward neural network for the recognition of off-line handwritten characters

    NASA Astrophysics Data System (ADS)

    Rishi, Rahul; Choudhary, Amit; Singh, Ravinder; Dhaka, Vijaypal Singh; Ahlawat, Savita; Rao, Mukta

    2010-02-01

    In this paper we propose a system for the classification problem of handwritten text. At a broad level, the system is composed of a preprocessing module, a supervised learning module, and a recognition module. The preprocessing module digitizes the documents and extracts features (tangent values) for each character. A radial basis function (RBF) network is used in the learning and recognition modules. The objective is to analyze and improve the performance of the Multi Layer Perceptron (MLP) using RBF transfer functions in place of the logarithmic sigmoid function. The results of 35 experiments indicate that the feed-forward MLP performs accurately and consistently with RBF transfer functions. With the changed weight-update mechanism and the feature-driven preprocessing module, the proposed system achieves good recognition performance.

  2. Automatic T1 bladder tumor detection by using wavelet analysis in cystoscopy images

    NASA Astrophysics Data System (ADS)

    Freitas, Nuno R.; Vieira, Pedro M.; Lima, Estevão; Lima, Carlos S.

    2018-02-01

    Correct classification of cystoscopy images depends on the interpreter's experience. Bladder cancer is a common lesion that can only be confirmed by biopsying the tissue; therefore, automatic identification of tumors plays a significant role in early-stage diagnosis and its accuracy. To the best of our knowledge, the use of white light cystoscopy images for bladder tumor diagnosis has not been reported so far. In this paper, a texture analysis based approach is proposed for bladder tumor diagnosis, presuming that tumors alter tissue texture. As is well accepted by the scientific community, texture information is mostly present in the medium-to-high-frequency range, which can be selected using a discrete wavelet transform (DWT). Tumor enhancement can be improved by using automatic segmentation, since mixing with normal tissue is avoided under ideal conditions. The segmentation module proposed in this paper takes advantage of the wavelet decomposition tree to discard poor texture information, in such a way that both steps of the proposed algorithm, segmentation and classification, share the same focus on texture. A multilayer perceptron and a support vector machine, with stratified ten-fold cross-validation, were used for classification in the hue-saturation-value (HSV), red-green-blue (RGB), and CIELab color spaces. Performances of 91% in sensitivity and 92.9% in specificity were obtained in the HSV color space using both preprocessing and classification steps based on the DWT. The proposed method can achieve good performance in identifying bladder tumor frames. These promising results open the path towards a deeper study of the applicability of this algorithm in computer-aided diagnosis.

  3. Classifying Lower Extremity Muscle Fatigue during Walking using Machine Learning and Inertial Sensors

    PubMed Central

    Zhang, Jian; Lockhart, Thurmon E.; Soangra, Rahul

    2013-01-01

    Fatigue in lower extremity musculature is associated with declines in postural stability and motor performance and alters normal walking patterns in human subjects. Automated recognition of lower extremity muscle fatigue may be advantageous for early detection of fall and injury risks. Supervised machine learning methods such as Support Vector Machines (SVM) have previously been used for classifying healthy and pathological gait patterns and for separating old and young gait patterns. In this study we explore the potential of SVM classification for recognizing gait patterns, recorded with an inertial measurement unit, that are associated with lower extremity muscular fatigue. Both kinematic and kinetic gait patterns of 17 participants (29±11 years) were recorded and analyzed in normal and fatigued states of walking. Lower extremities were fatigued by performance of a squatting exercise until the participants reached 60% of their baseline maximal voluntary exertion level. Feature selection methods were used to classify fatigue and no-fatigue conditions based on temporal and frequency information of the signals. Additionally, the influence of three different kernel schemes (linear, polynomial, and radial basis function) on SVM classification was investigated. The results indicated that lower extremity muscle fatigue influenced gait and loading responses. In terms of the SVM classification results, an accuracy of 96% was reached in distinguishing the two gait patterns (fatigue and no-fatigue) within the same subject using the kinematic, time- and frequency-domain features. It was also found that the linear and RBF kernels were equally good at identifying intra-individual fatigue characteristics. These results suggest that intra-subject fatigue classification using gait patterns from an inertial sensor holds considerable potential for identifying “at-risk” gait due to muscle fatigue. PMID:24081829

  4. Vector quantizer designs for joint compression and terrain categorization of multispectral imagery

    NASA Technical Reports Server (NTRS)

    Gorman, John D.; Lyons, Daniel F.

    1994-01-01

    Two vector quantizer designs for compression of multispectral imagery and their impact on terrain categorization performance are evaluated. The mean-squared error (MSE) and classification performance of the two quantizers are compared, and it is shown that a simple two-stage design minimizing MSE subject to a constraint on classification performance has a significantly better classification performance than a standard MSE-based tree-structured vector quantizer followed by maximum likelihood classification. This improvement in classification performance is obtained with minimal loss in MSE performance. The results show that it is advantageous to tailor compression algorithm designs to the required data exploitation tasks. Applications of joint compression/classification include compression for the archival or transmission of Landsat imagery that is later used for land utility surveys and/or radiometric analysis.
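
    The baseline quantizer above is an MSE-trained codebook. A minimal sketch of MSE codebook training via Lloyd (k-means) iterations, on hypothetical 4-band "spectral" vectors; this illustrates only the MSE stage, not the authors' two-stage constrained design:

```python
import numpy as np

def train_codebook(vectors, k, iters=20):
    """MSE vector quantizer: Lloyd iterations (nearest-codeword
    assignment followed by centroid update)."""
    # deterministic init: evenly spaced training vectors
    codebook = vectors[np.linspace(0, len(vectors) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # assign each vector to its nearest codeword (squared error)
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(1)
        # update each codeword to the centroid of its cell
        for j in range(k):
            members = vectors[idx == j]
            if len(members):
                codebook[j] = members.mean(0)
    return codebook, idx

# two well-separated synthetic "spectral signature" clusters
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.1, (50, 4)), rng.normal(5, 0.1, (50, 4))])
codebook, idx = train_codebook(data, k=2)
```

    The paper's point is that constraining this MSE training with a classification-performance term costs little distortion while improving terrain categorization.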

  5. Automatic photointerpretation for land use management in Minnesota

    NASA Technical Reports Server (NTRS)

    Swanlund, G. D. (Principal Investigator); Pile, D. R.

    1973-01-01

    The author has identified the following significant results. The Minnesota Iron Range area was selected as one of the land use areas to be evaluated. Six classes were selected: (1) hardwood; (2) conifer; (3) water (including in mines); (4) mines, tailings, and wet areas; (5) open area; and (6) urban. Initial results show correct classification rates of 70.1 to 95.4% for the six classes. This is extremely good, and can be further improved, since some of the ground truth itself contained incorrect classifications.

  6. Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) processing speed scores as measures of noncredible responding: The third generation of embedded performance validity indicators.

    PubMed

    Erdodi, Laszlo A; Abeare, Christopher A; Lichtenstein, Jonathan D; Tyson, Bradley T; Kucharski, Brittany; Zuccato, Brandon G; Roth, Robert M

    2017-02-01

    Research suggests that select processing speed measures can also serve as embedded validity indicators (EVIs). The present study examined the diagnostic utility of Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) subtests as EVIs in a mixed clinical sample of 205 patients medically referred for neuropsychological assessment (53.3% female, mean age = 45.1). Classification accuracy was calculated against 3 composite measures of performance validity as criterion variables. A PSI ≤79 produced a good combination of sensitivity (.23-.56) and specificity (.92-.98). A Coding scaled score ≤5 resulted in good specificity (.94-1.00), but low and variable sensitivity (.04-.28). A Symbol Search scaled score ≤6 achieved a good balance between sensitivity (.38-.64) and specificity (.88-.93). A Coding-Symbol Search scaled score difference ≥5 produced adequate specificity (.89-.91) but consistently low sensitivity (.08-.12). A 2-tailed cutoff on the Coding/Symbol Search raw score ratio (≤1.41 or ≥3.57) produced acceptable specificity (.87-.93), but low sensitivity (.15-.24). Failing ≥2 of these EVIs produced variable specificity (.81-.93) and sensitivity (.31-.59). Failing ≥3 of these EVIs stabilized specificity (.89-.94) at a small cost to sensitivity (.23-.53). Results suggest that processing speed based EVIs have the potential to provide a cost-effective and expedient method for evaluating the validity of cognitive data. Given their generally low and variable sensitivity, however, they should not be used in isolation to determine the credibility of a given response set. They also produced unacceptably high rates of false positive errors in patients with moderate-to-severe head injury. Combining evidence from multiple EVIs has the potential to improve overall classification accuracy. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  7. Validation of the Lung Subtyping Panel in Multiple Fresh-Frozen and Formalin-Fixed, Paraffin-Embedded Lung Tumor Gene Expression Data Sets.

    PubMed

    Faruki, Hawazin; Mayhew, Gregory M; Fan, Cheng; Wilkerson, Matthew D; Parker, Scott; Kam-Morgan, Lauren; Eisenberg, Marcia; Horten, Bruce; Hayes, D Neil; Perou, Charles M; Lai-Goldman, Myla

    2016-06-01

    Context: A histologic classification of lung cancer subtypes is essential in guiding therapeutic management. Objective: To complement morphology-based classification of lung tumors, a previously developed lung subtyping panel (LSP) of 57 genes was tested using multiple public fresh-frozen gene-expression data sets and a prospectively collected set of formalin-fixed, paraffin-embedded lung tumor samples. Design: The LSP gene-expression signature was evaluated in multiple lung cancer gene-expression data sets totaling 2177 patients collected from 4 platforms: Illumina RNAseq (San Diego, California), Agilent (Santa Clara, California) and Affymetrix (Santa Clara) microarrays, and quantitative reverse transcription-polymerase chain reaction. Gene centroids were calculated for each of 3 genomically defined subtypes: adenocarcinoma, squamous cell carcinoma, and neuroendocrine, the latter of which encompassed both small cell carcinoma and carcinoid. Classification by LSP into the 3 subtypes was evaluated in both fresh-frozen and formalin-fixed, paraffin-embedded tumor samples, and agreement with the original morphology-based diagnosis was determined. Results: The LSP-based classifications demonstrated overall agreement with the original clinical diagnosis ranging from 78% (251 of 322) to 91% (492 of 538 and 869 of 951) in the fresh-frozen public data sets and 84% (65 of 77) in the formalin-fixed, paraffin-embedded data set. The LSP performance was independent of tissue-preservation method and gene-expression platform. A secondary, blinded pathology review of formalin-fixed, paraffin-embedded samples demonstrated 82% concordance (63 of 77) with the original morphology diagnosis. Conclusions: The LSP gene-expression signature is a reproducible and objective method for classifying lung tumors and demonstrates good concordance with morphology-based classification across multiple data sets. The LSP panel can supplement morphologic assessment of lung cancers, particularly when classification by standard methods is challenging.

  8. Mining protein function from text using term-based support vector machines

    PubMed Central

    Rice, Simon B; Nenadic, Goran; Stapley, Benjamin J

    2005-01-01

    Background Text mining has spurred huge interest in the domain of biology. The goal of the BioCreAtIvE exercise was to evaluate the performance of current text mining systems. We participated in Task 2, which addressed assigning Gene Ontology terms to human proteins and selecting relevant evidence from full-text documents. We approached it as a modified form of the document classification task. We used a supervised machine-learning approach (based on support vector machines) to assign protein function and select passages that support the assignments. As classification features, we used a protein's co-occurring terms that were automatically extracted from documents. Results The results evaluated by curators were modest and quite variable across problems: in many cases we achieved relatively good assignment of GO terms to proteins, but the selected supporting text was typically non-relevant (precision spanning from 3% to 50%). The method appears to work best when a substantial set of relevant documents is obtained, while it works poorly on single documents and/or short passages. The initial results suggest that our approach can also mine annotations from text even when an explicit statement relating a protein to a GO term is absent. Conclusion A machine learning approach to mining protein function predictions from text can yield good performance only if sufficient training data are available and a significant amount of supporting data is used for prediction. The most promising results are for combined document retrieval and GO term assignment, which calls for the integration of methods developed in BioCreAtIvE Task 1 and Task 2. PMID:15960835

  9. Classification methods to detect sleep apnea in adults based on respiratory and oximetry signals: a systematic review.

    PubMed

    Uddin, M B; Chow, C M; Su, S W

    2018-03-26

    Sleep apnea (SA), a common sleep disorder, can significantly decrease the quality of life, and is closely associated with major health risks such as cardiovascular disease, sudden death, depression, and hypertension. The normal diagnostic process of SA using polysomnography is costly and time consuming. In addition, the accuracy of different classification methods to detect SA varies with the use of different physiological signals. If an effective, reliable, and accurate classification method is developed, then the diagnosis of SA and its associated treatment will be time-efficient and economical. This study aims to systematically review the literature and present an overview of classification methods to detect SA using respiratory and oximetry signals and address the automated detection approach. Sixty-two included studies revealed the application of single and multiple signals (respiratory and oximetry) for the diagnosis of SA. Both airflow and oxygen saturation signals alone were effective in detecting SA in the case of binary decision-making, whereas multiple signals were good for multi-class detection. In addition, some machine learning methods were superior to the other classification methods for SA detection using respiratory and oximetry signals. To deal with the respiratory and oximetry signals, a good choice of classification method as well as the consideration of associated factors would result in high accuracy in the detection of SA. An accurate classification method should provide a high detection rate with an automated (independent of human action) analysis of respiratory and oximetry signals. Future high-quality automated studies using large samples of data from multiple patient groups or record batches are recommended.

  10. Evaluation of linear discriminant analysis for automated Raman histological mapping of esophageal high-grade dysplasia

    NASA Astrophysics Data System (ADS)

    Hutchings, Joanne; Kendall, Catherine; Shepherd, Neil; Barr, Hugh; Stone, Nicholas

    2010-11-01

    Rapid Raman mapping has the potential to be used for automated histopathology diagnosis, providing an adjunct technique to histology diagnosis. The aim of this work is to evaluate the feasibility of automated and objective pathology classification of Raman maps using linear discriminant analysis. Raman maps of esophageal tissue sections are acquired. Principal component (PC)-fed linear discriminant analysis (LDA) is carried out using subsets of the Raman map data (6483 spectra). An overall (validated) training classification model performance of 97.7% (sensitivity 95.0 to 100% and specificity 98.6 to 100%) is obtained. The remainder of the map spectra (131,672 spectra) are projected onto the classification model, resulting in Raman images that demonstrate good correlation with contiguous hematoxylin and eosin (HE) sections. Initial results suggest that LDA has the potential to automate pathology diagnosis of esophageal Raman images, but since the classification of test spectra is forced into existing training groups, further work is required to optimize the training model. A small pixel size is advantageous when developing training datasets from mapping data, despite lengthy mapping times, because of the additional morphological information gained; this could facilitate differentiation of further tissue groups, such as basal cells/lamina propria, in the future. Larger pixel sizes (and faster mapping) may, however, be more feasible for clinical application.
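
    The PC-fed LDA pipeline above can be sketched for the two-class case with plain numpy: PCA for dimensionality reduction, then a Fisher discriminant direction on the scores. The toy "spectra" below are synthetic stand-ins, not Raman data:

```python
import numpy as np

def pca(X, n_components):
    """Project samples onto the top principal components."""
    Xc = X - X.mean(0)
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    W = vecs[:, ::-1][:, :n_components]   # eigh is ascending; reverse
    return Xc @ W, W

def fisher_lda_direction(X, y):
    """Two-class Fisher discriminant direction w ∝ Sw^-1 (m1 - m0)."""
    m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
    Sw = np.cov(X[y == 0], rowvar=False) + np.cov(X[y == 1], rowvar=False)
    # small ridge term for numerical stability
    return np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), m1 - m0)

# synthetic two-class "spectra", well separated along a latent direction
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 10)), rng.normal(2, 1, (40, 10))])
y = np.repeat([0, 1], 40)

Z, _ = pca(X, 3)                  # PC scores fed into LDA
w = fisher_lda_direction(Z, y)
scores = Z @ w
# classify by thresholding at the midpoint of the projected class means
thr = (scores[y == 0].mean() + scores[y == 1].mean()) / 2
accuracy = ((scores > thr).astype(int) == y).mean()
```

    As the abstract notes, such a model always forces test spectra into one of the trained groups, which is why further optimization of the training classes is needed.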

  11. Categorization abilities for emotional and nonemotional stimuli in patients with alcohol-related Korsakoff syndrome.

    PubMed

    Labudda, Kirsten; von Rothkirch, Nadine; Pawlikowski, Mirko; Laier, Christian; Brand, Matthias

    2010-06-01

    To investigate whether patients with alcohol-related Korsakoff syndrome (KS) have emotion-specific or general deficits in multicategoric classification performance. Earlier studies have shown reduced performance in classifying stimuli according to their emotional valence in patients with KS. However, it is unclear whether such classification deficits are emotion-specific or whether they also occur when nonemotional classifications are demanded. In this study, we examined 35 patients with alcoholic KS and 35 healthy participants with the Emotional Picture Task (EPT) to assess valence classification performance, the Semantic Classification Task (SCT) to assess nonemotional categorizations, and an extensive neuropsychologic test battery. KS patients exhibited lower classification performance in both tasks compared with the healthy participants. EPT and SCT performance were related to each other. Both correlated with general knowledge, and EPT performance additionally correlated with executive functions. Our results indicate a common underlying mechanism for the patients' reductions in emotional and nonemotional classification performance. These deficits are most probably based on problems in retrieving object and category knowledge and, partially, on executive functioning.

  12. Deriving exposure limits

    NASA Astrophysics Data System (ADS)

    Sliney, David H.

    1990-07-01

    Historically, many different agencies and standards organizations have proposed laser occupational exposure limits (ELs) or maximum permissible exposure (MPE) levels. Although some safety standards have been limited in scope to manufacturer system safety performance standards or to codes of practice, most have included occupational ELs. Initially, in the 1960s, attention was drawn to setting ELs; however, as greater experience accumulated in the use of lasers and some accident experience had been gained, safety procedures were developed. It became clear by 1971, after the first decade of laser use, that detailed hazard evaluation of each laser environment was too complex for most users, and a scheme of hazard classification evolved. Today most countries follow a scheme of four major hazard classifications as defined in Document WS 825 of the International Electrotechnical Commission (IEC). The classifications and the associated accessible emission limits (AELs) were based upon the ELs. The EL and AEL values today are in surprisingly good agreement worldwide. There exists a greater range of safety requirements for the user for each class of laser. The current MPEs (i.e., ELs) and their basis are highlighted in this presentation.

  13. Modeling Habitat Suitability of Migratory Birds from Remote Sensing Images Using Convolutional Neural Networks.

    PubMed

    Su, Jin-He; Piao, Ying-Chao; Luo, Ze; Yan, Bao-Ping

    2018-04-26

    With the application of various data acquisition devices, large volumes of animal movement data can be used to label presence data in remote sensing images and predict species distribution. In this paper, a two-stage classification approach combining movement data and moderate-resolution remote sensing images was proposed. First, we introduced a new density-based clustering method to identify stopovers from migratory birds’ movement data and generated classification samples based on the clustering result. We split the remote sensing images into 16 × 16 patches and labeled them as positive samples if they overlapped with stopovers. Second, a multi-convolution neural network model was proposed for extracting features from temperature data and remote sensing images, respectively. A Support Vector Machine (SVM) model was then used to combine the features and produce the final classification results. The experimental analysis was carried out on public Landsat 5 TM images and a GPS dataset collected from 29 birds over three years. The results indicated that our proposed method outperforms the existing baseline methods and achieves good performance in habitat suitability prediction.

  14. An iterated Laplacian based semi-supervised dimensionality reduction for classification of breast cancer on ultrasound images.

    PubMed

    Liu, Xiao; Shi, Jun; Zhou, Shichong; Lu, Minhua

    2014-01-01

    Dimensionality reduction is an important step in ultrasound image based computer-aided diagnosis (CAD) for breast cancer. A recently proposed l2,1-regularized correntropy algorithm for robust feature selection (CRFS) has achieved good performance on noise-corrupted data and therefore has the potential to reduce the dimensionality of ultrasound image features. However, in clinical practice, collecting labeled instances is usually expensive and time-consuming, while it is relatively easy to acquire unlabeled or undetermined instances; semi-supervised learning is therefore well suited to clinical CAD. Iterated Laplacian regularization (Iter-LR) is a new regularization method that has been shown to outperform traditional graph Laplacian regularization in semi-supervised classification and ranking. In this study, to improve the classification accuracy of texture-feature-based breast ultrasound CAD, we propose an Iter-LR-based semi-supervised CRFS (Iter-LR-CRFS) algorithm and apply it to reduce the feature dimensions of ultrasound images for breast CAD. We compared Iter-LR-CRFS with LR-CRFS, the original supervised CRFS, and principal component analysis. The experimental results indicate that the proposed Iter-LR-CRFS significantly outperforms all the other algorithms.
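
    The iterated Laplacian regularizer underlying Iter-LR penalizes f^T L^m f rather than the ordinary f^T L f. A small numpy sketch of the idea on a toy affinity graph (this illustrates the regularizer only, not the full Iter-LR-CRFS algorithm):

```python
import numpy as np

def rbf_affinity(X, gamma=1.0):
    """Symmetric RBF affinity matrix with no self-loops."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-gamma * sq)
    np.fill_diagonal(W, 0.0)
    return W

def graph_laplacian(W):
    """Unnormalized graph Laplacian L = D - W."""
    return np.diag(W.sum(1)) - W

# four 1-D samples forming two tight clusters
X = np.array([[0.0], [0.1], [5.0], [5.1]])
L = graph_laplacian(rbf_affinity(X))

# iterated Laplacian of order m penalizes f^T L^m f
f = np.array([1.0, 1.0, -1.0, -1.0])   # constant within clusters (smooth)
g = np.array([1.0, -1.0, 1.0, -1.0])   # oscillates within clusters (rough)
m = 2
smooth_pen = f @ np.linalg.matrix_power(L, m) @ f
rough_pen = g @ np.linalg.matrix_power(L, m) @ g
```

    A label function that respects the cluster structure incurs a far smaller penalty than one that oscillates inside clusters, which is what the regularizer exploits on unlabeled data.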

  15. Biometric sample extraction using Mahalanobis distance in Cardioid based graph using electrocardiogram signals.

    PubMed

    Sidek, Khairul; Khali, Ibrahim

    2012-01-01

    In this paper, a person identification mechanism implemented with a Cardioid based graph using the electrocardiogram (ECG) is presented. The Cardioid based graph has given reasonably good classification accuracy in differentiating between individuals. However, the current feature extraction method using Euclidean distance can be further improved by using the Mahalanobis distance measurement, producing extracted coefficients that take into account the correlations of the data set. Identification is then done by applying these extracted features to a Radial Basis Function Network. A total of 30 ECG records from the MIT-BIH Normal Sinus Rhythm database (NSRDB) and the MIT-BIH Arrhythmia database (MITDB) were used for development and evaluation purposes. Our experimental results suggest that the proposed feature extraction method significantly increased the classification performance for subjects in both databases, with accuracy improving from 97.50% to 99.80% in NSRDB and from 96.50% to 99.40% in MITDB. High sensitivity, specificity, and positive predictive values of 99.17%, 99.91%, and 99.23% for NSRDB and 99.30%, 99.90%, and 99.40% for MITDB also validate the proposed method. These results indicate that the right feature extraction technique plays a vital role in determining the persistency of the classification accuracy for a Cardioid based person identification mechanism.
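
    The Mahalanobis distance used above differs from the Euclidean distance by whitening with the data covariance, so correlated directions are not over-counted. A minimal numpy sketch on hypothetical correlated 2-D coefficients:

```python
import numpy as np

def mahalanobis(x, data):
    """Distance of x from the distribution of `data`,
    accounting for correlations via the inverse covariance."""
    mu = data.mean(0)
    cov = np.cov(data, rowvar=False)
    diff = x - mu
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))

# toy 2-D coefficients with strong correlation between the axes
rng = np.random.default_rng(0)
base = rng.normal(0, 1, (200, 1))
data = np.hstack([base, base * 0.9 + rng.normal(0, 0.1, (200, 1))])

d_mean = mahalanobis(data.mean(0), data)              # distance of the mean is 0
d_along = mahalanobis(np.array([3.0, 2.7]), data)     # follows the correlation
d_against = mahalanobis(np.array([3.0, -3.0]), data)  # violates the correlation
```

    A point that violates the correlation structure is far more distant than an equally Euclidean-far point that follows it, which is exactly the extra information the paper credits for the accuracy gain.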

  16. Chandra stacking analysis of CANDELS galaxies at z>1.5

    NASA Astrophysics Data System (ADS)

    Civano, Francesca

    2016-09-01

    The goal of this proposal is to study the X-ray emission of non-X-ray-detected galaxies at z>1.5, beyond the peak of stellar and nuclear activity, in combination with global galaxy properties, such as stellar mass and star formation activity, and their morphological classification. To achieve this goal, we will select galaxies in CANDELS. Making use of the 5 X-ray surveys with different depths (160 ks for COSMOS, 800 ks for AEGIS-XD and X-UDS, 2 Ms for GOODS-N and 4 (8) Ms for GOODS-S) available in these well-studied fields, we will be able to reach X-ray luminosities where stellar emission dominates over the nuclear emission. This analysis will extend to z>1.5 the results obtained at lower redshift by performing stacking analysis solely on the Chandra COSMOS Legacy Survey.

  17. Applications of color machine vision in the agricultural and food industries

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Ludas, Laszlo I.; Morgan, Mark T.; Krutz, Gary W.; Precetti, Cyrille J.

    1999-01-01

    Color is an important factor in the agricultural and food industries. Agricultural or prepared food products are often graded by producers and consumers using color parameters. Color is used to estimate maturity and sort produce for defects, but also to perform genetic screening or make aesthetic judgements. Sorting produce against a color scale is a complex task that requires special illumination and training, and it cannot be performed for long durations without fatigue and loss of accuracy. This paper describes a machine vision system designed to perform color classification in real time. Applications for sorting a variety of agricultural products are included, e.g., seeds, meat, baked goods, plants, and wood. First, the theory of color classification of agricultural and biological materials is introduced. Then, some tools for classifier development are presented. Finally, the implementation of the algorithm on real-time image processing hardware and example applications for industry are described. The paper also presents an image analysis algorithm and a prototype machine vision system developed for industry. This system automatically locates the surface of certain plants using a digital camera and predicts information such as size, potential value, and plant type. The algorithm is feasible for real-time identification in an industrial environment.

  18. Acoustic surface perception from naturally occurring step sounds of a dexterous hexapod robot

    NASA Astrophysics Data System (ADS)

    Cuneyitoglu Ozkul, Mine; Saranli, Afsar; Yazicioglu, Yigit

    2013-10-01

    Legged robots that exhibit dynamic dexterity naturally interact with the surface to generate complex acoustic signals carrying rich information on the surface as well as on the robot platform itself. However, the nature of a legged robot, which is a complex, hybrid dynamic system, renders the more common approach of model-based system identification impractical. The present paper focuses on acoustic surface identification and proposes a non-model-based analysis and classification approach adopted from the speech processing literature. A novel feature set composed of spectral band energies, augmented by their vector time derivatives and the time-domain averaged zero-crossing rate, is proposed. Using a multi-dimensional vector classifier, these features carry enough information to accurately classify a range of commonly occurring indoor and outdoor surfaces without using any mechanical system model. A comparative experimental study is carried out, and classification performance and computational complexity are characterized. Different feature combinations, classifiers, and changes in critical design parameters are investigated. A realistic and representative acoustic data set is collected with the robot moving at different speeds on a number of surfaces. The study demonstrates promising performance of this non-model-based approach, even in an acoustically uncontrolled environment. The approach also has a good chance of performing in real time.
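
    Two of the proposed features, spectral band energies and the zero-crossing rate, can be sketched in a few lines of numpy (the sampling rate and test tones below are hypothetical, not from the paper's data set):

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs whose signs differ."""
    signs = np.sign(frame)
    return float(np.mean(signs[:-1] != signs[1:]))

def band_energies(frame, n_bands=4):
    """Energy of the magnitude spectrum split into equal-width bands."""
    mag = np.abs(np.fft.rfft(frame)) ** 2
    return np.array([band.sum() for band in np.array_split(mag, n_bands)])

# hypothetical step sounds: a low-frequency thud vs a high-frequency scrape
fs = 8000
t = np.arange(1024) / fs
low = np.sin(2 * np.pi * 100 * t)
high = np.sin(2 * np.pi * 3500 * t)

zcr_low, zcr_high = zero_crossing_rate(low), zero_crossing_rate(high)
e_low, e_high = band_energies(low), band_energies(high)
```

    Per-frame vectors like these (plus their time derivatives) are what the multi-dimensional classifier consumes.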

  19. 10 CFR 1045.39 - Challenging classification and declassification determinations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... holder of an RD or FRD document who, in good faith, believes that the RD or FRD document has an improper... classified the document. (b) Agencies shall establish procedures under which authorized holders of RD and FRD... involving RD or FRD may be appealed to the Director of Classification. In the case of FRD and RD related...

  20. 10 CFR 1045.39 - Challenging classification and declassification determinations.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... holder of an RD or FRD document who, in good faith, believes that the RD or FRD document has an improper... classified the document. (b) Agencies shall establish procedures under which authorized holders of RD and FRD... involving RD or FRD may be appealed to the Director of Classification. In the case of FRD and RD related...

  1. 10 CFR 1045.39 - Challenging classification and declassification determinations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... holder of an RD or FRD document who, in good faith, believes that the RD or FRD document has an improper... classified the document. (b) Agencies shall establish procedures under which authorized holders of RD and FRD... involving RD or FRD may be appealed to the Director of Classification. In the case of FRD and RD related...

  2. Normal tissue complication probability (NTCP) modelling using spatial dose metrics and machine learning methods for severe acute oral mucositis resulting from head and neck radiotherapy.

    PubMed

    Dean, Jamie A; Wong, Kee H; Welsh, Liam C; Jones, Ann-Britt; Schick, Ulrike; Newbold, Kate L; Bhide, Shreerang A; Harrington, Kevin J; Nutting, Christopher M; Gulliford, Sarah L

    2016-07-01

    Severe acute mucositis commonly results from head and neck (chemo)radiotherapy. A predictive model of mucositis could guide clinical decision-making and inform treatment planning. We aimed to generate such a model using spatial dose metrics and machine learning. Predictive models of severe acute mucositis were generated using radiotherapy dose (dose-volume and spatial dose metrics) and clinical data. Penalised logistic regression, support vector classification and random forest classification (RFC) models were generated and compared. Internal validation was performed (with 100-iteration cross-validation), using multiple metrics, including area under the receiver operating characteristic curve (AUC) and calibration slope, to assess performance. Associations between covariates and severe mucositis were explored using the models. The dose-volume-based (standard) models performed as well as those incorporating spatial information. Discrimination was similar between models, but the standard RFC model had the best calibration. The mean AUC and calibration slope for this model were 0.71 (s.d.=0.09) and 3.9 (s.d.=2.2), respectively. The volumes of oral cavity receiving intermediate and high doses were associated with severe mucositis. The performance of the standard RFC model is modest-to-good, but should be improved, and requires external validation. Reducing the volumes of oral cavity receiving intermediate and high doses may reduce mucositis incidence. Copyright © 2016 The Author(s). Published by Elsevier Ireland Ltd. All rights reserved.
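
    The AUC used for internal validation above equals the Mann-Whitney probability that a randomly chosen positive case is scored above a randomly chosen negative one. A minimal stdlib sketch (toy labels and scores, not the study's data):

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic:
    the fraction of positive/negative pairs ranked correctly, ties count 0.5."""
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

labels = [0, 0, 1, 1]
perfect = auc(labels, [0.1, 0.2, 0.8, 0.9])   # all pairs ranked correctly
chance = auc(labels, [0.5, 0.5, 0.5, 0.5])    # all ties
```

    On this scale the reported mean AUC of 0.71 sits between chance (0.5) and perfect discrimination (1.0), which is why the abstract calls the model modest-to-good.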

  3. McTwo: a two-step feature selection algorithm based on maximal information coefficient.

    PubMed

    Ge, Ruiquan; Zhou, Manli; Luo, Youxi; Meng, Qinghan; Mai, Guoqin; Ma, Dongli; Wang, Guoqing; Zhou, Fengfeng

    2016-03-23

    High-throughput bio-OMIC technologies are producing high-dimension data from bio-samples at an ever increasing rate, whereas the training sample number in a traditional experiment remains small due to various difficulties. This "large p, small n" paradigm in the area of biomedical "big data" may be at least partly solved by feature selection algorithms, which select only features significantly associated with phenotypes. Feature selection is an NP-hard problem. Due to the exponentially increased time requirement for finding the globally optimal solution, all the existing feature selection algorithms employ heuristic rules to find locally optimal solutions, and their solutions achieve different performances on different datasets. This work describes a feature selection algorithm based on a recently published correlation measurement, Maximal Information Coefficient (MIC). The proposed algorithm, McTwo, aims to select features associated with phenotypes, independently of each other, and achieving high classification performance of the nearest neighbor algorithm. Based on the comparative study of 17 datasets, McTwo performs about as well as or better than existing algorithms, with significantly reduced numbers of selected features. The features selected by McTwo also appear to have particular biomedical relevance to the phenotypes from the literature. McTwo selects a feature subset with very good classification performance, as well as a small feature number. So McTwo may represent a complementary feature selection algorithm for the high-dimensional biomedical datasets.
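
    MIC itself maximizes a normalized mutual information over many grid resolutions; as a simple hedged stand-in, the association-versus-independence contrast it exploits can be illustrated with a plain histogram estimate of mutual information (numpy sketch with synthetic data, not the McTwo algorithm):

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram estimate of MI(x; y) in nats. This is a simple stand-in:
    MIC additionally normalizes and maximizes over many grid resolutions."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(1), pxy.sum(0)
    mask = pxy > 0
    return float((pxy[mask] * np.log(pxy[mask] / np.outer(px, py)[mask])).sum())

rng = np.random.default_rng(0)
feature = rng.normal(size=2000)
phenotype = np.sin(feature)        # strongly associated, and nonlinear
noise = rng.normal(size=2000)      # independent of the feature

mi_assoc = mutual_information(feature, phenotype)
mi_noise = mutual_information(feature, noise)
```

    A feature-selection step of this flavor keeps features whose association score with the phenotype stands well above the independent-noise baseline.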

  4. Computer-aided diagnosis with textural features for breast lesions in sonograms.

    PubMed

    Chen, Dar-Ren; Huang, Yu-Len; Lin, Sheng-Hsiung

    2011-04-01

    Computer-aided diagnosis (CAD) systems provided second beneficial support reference and enhance the diagnostic accuracy. This paper was aimed to develop and evaluate a CAD with texture analysis in the classification of breast tumors for ultrasound images. The ultrasound (US) dataset evaluated in this study composed of 1020 sonograms of region of interest (ROI) subimages from 255 patients. Two-view sonogram (longitudinal and transverse views) and four different rectangular regions were utilized to analyze each tumor. Six practical textural features from the US images were performed to classify breast tumors as benign or malignant. However, the textural features always perform as a high dimensional vector; high dimensional vector is unfavorable to differentiate breast tumors in practice. The principal component analysis (PCA) was used to reduce the dimension of textural feature vector and then the image retrieval technique was performed to differentiate between benign and malignant tumors. In the experiments, all the cases were sampled with k-fold cross-validation (k=10) to evaluate the performance with receiver operating characteristic (ROC) curve. The area (A(Z)) under the ROC curve for the proposed CAD system with the specific textural features was 0.925±0.019. The classification ability for breast tumor with textural information is satisfactory. This system differentiates benign from malignant breast tumors with a good result and is therefore clinically useful to provide a second opinion. Copyright © 2010 Elsevier Ltd. All rights reserved.

  5. "When 'Bad' is 'Good'": Identifying Personal Communication and Sentiment in Drug-Related Tweets.

    PubMed

    Daniulaityte, Raminta; Chen, Lu; Lamy, Francois R; Carlson, Robert G; Thirunarayan, Krishnaprasad; Sheth, Amit

    2016-10-24

    To harness the full potential of social media for epidemiological surveillance of drug abuse trends, the field needs a greater level of automation in processing and analyzing social media content. The objective of the study is to describe the development of supervised machine-learning techniques for the eDrugTrends platform to automatically classify tweets by type/source of communication (personal, official/media, retail) and sentiment (positive, negative, neutral) expressed in cannabis- and synthetic cannabinoid-related tweets. Tweets were collected using Twitter streaming Application Programming Interface and filtered through the eDrugTrends platform using keywords related to cannabis, marijuana edibles, marijuana concentrates, and synthetic cannabinoids. After creating coding rules and assessing intercoder reliability, a manually labeled data set (N=4000) was developed by coding several batches of randomly selected subsets of tweets extracted from the pool of 15,623,869 collected by eDrugTrends (May-November 2015). Out of 4000 tweets, 25% (1000/4000) were used to build source classifiers and 75% (3000/4000) were used for sentiment classifiers. Logistic Regression (LR), Naive Bayes (NB), and Support Vector Machines (SVM) were used to train the classifiers. Source classification (n=1000) tested Approach 1 that used short URLs, and Approach 2 where URLs were expanded and included into the bag-of-words analysis. For sentiment classification, Approach 1 used all tweets, regardless of their source/type (n=3000), while Approach 2 applied sentiment classification to personal communication tweets only (2633/3000, 88%). Multiclass and binary classification tasks were examined, and machine-learning sentiment classifier performance was compared with Valence Aware Dictionary for sEntiment Reasoning (VADER), a lexicon and rule-based method. The performance of each classifier was assessed using 5-fold cross validation that calculated average F-scores. 
A one-tailed t test was used to determine whether differences in F-scores were statistically significant. In multiclass source classification, the use of expanded URLs did not contribute to a significant improvement in classifier performance (0.7972 vs 0.8102 for SVM, P=.19). In binary classification, the identification of all source categories improved significantly when unshortened URLs were used, with personal communication tweets benefiting the most (0.8736 vs 0.8200, P<.001). In multiclass sentiment classification with Approach 1, SVM (0.6723) performed similarly to NB (0.6683) and LR (0.6703). In Approach 2, SVM (0.7062) did not differ from NB (0.6980, P=.13) or LR (0.6931, P=.05), but it was over 40% more accurate than VADER (0.5030, P<.001). In the multiclass task, improvements in sentiment classification (Approach 2 vs Approach 1) did not reach statistical significance (eg, SVM: 0.7062 vs 0.6723, P=.052). In binary sentiment classification (positive vs negative), Approach 2 (focusing on personal communication tweets only) improved classification results compared with Approach 1 for LR (0.8752 vs 0.8516, P=.04) and SVM (0.8800 vs 0.8557, P=.045). The study provides an example of the use of supervised machine-learning methods to categorize cannabis- and synthetic cannabinoid-related tweets with fairly high accuracy. Use of these content analysis tools, along with the geographic identification capabilities developed by the eDrugTrends platform, will provide powerful methods for tracking regional changes in user opinions related to cannabis and synthetic cannabinoid use over time and across different regions.
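    The study above trains LR, NB, and SVM classifiers on bag-of-words features. As an illustration of the general approach only (not the eDrugTrends implementation), the sketch below trains a multinomial Naive Bayes source classifier on a handful of invented, pre-tokenized tweets; all tweets, labels, and tokens are hypothetical.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Train a multinomial Naive Bayes model on (tokens, label) pairs."""
    class_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        class_counts[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return class_counts, word_counts, vocab

def classify_nb(model, tokens):
    """Return the label with the highest log-posterior (Laplace smoothing)."""
    class_counts, word_counts, vocab = model
    n_docs = sum(class_counts.values())
    best_label, best_score = None, float("-inf")
    for label in class_counts:
        score = math.log(class_counts[label] / n_docs)     # log prior
        total = sum(word_counts[label].values())
        for tok in tokens:                                  # log likelihoods
            score += math.log((word_counts[label][tok] + 1) /
                              (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy training set: tokenized tweets with source labels (illustrative only).
train = [
    (["loved", "my", "edible", "last", "night"], "personal"),
    (["cant", "wait", "to", "try", "this", "strain"], "personal"),
    (["new", "study", "links", "cannabis", "use", "to", "memory"], "media"),
    (["state", "report", "on", "marijuana", "legalization"], "media"),
    (["buy", "one", "get", "one", "free", "edibles", "today"], "retail"),
    (["discount", "on", "all", "concentrates", "this", "weekend"], "retail"),
]
model = train_nb(train)
print(classify_nb(model, ["free", "edibles", "discount", "today"]))  # retail
```

    A real pipeline would add tokenization, URL expansion for the bag-of-words step, and cross-validated F-score evaluation, as described in the abstract.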

  6. Bulk Magnetization Effects in EMI-Based Classification and Discrimination

    DTIC Science & Technology

    2012-04-01

    …response adds to classification performance and (2) develop a comprehensive understanding of the engineering challenges of primary field cancellation that can support a…

  7. A machine learning approach for classification of anatomical coverage in CT

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoyong; Lo, Pechin; Ramakrishna, Bharath; Goldin, Johnathan; Brown, Matthew

    2016-03-01

    Automatic classification of the anatomical coverage of medical images is critical for big data mining and as a pre-processing step to automatically trigger specific computer-aided diagnosis systems. The traditional way to identify scans through DICOM headers has various limitations due to manual entry of series descriptions and non-standardized naming conventions. In this study, we present a machine learning approach in which multiple binary classifiers were used to classify different anatomical coverages of CT scans. A one-vs-rest strategy was applied. For a given training set, a template scan was selected from the positive samples and all other scans were registered to it. Each registered scan was then evenly split into k × k × k non-overlapping blocks and for each block the mean intensity was computed. This resulted in a 1 × k³ feature vector for each scan. The feature vectors were then used to train an SVM-based classifier. In this feasibility study, four classifiers were built to identify anatomic coverages of brain, chest, abdomen-pelvis, and chest-abdomen-pelvis CT scans. Each classifier was trained and tested using a set of 300 scans from different subjects, composed of 150 positive samples and 150 negative samples. The area under the ROC curve (AUC) on the testing set was measured to evaluate performance in a two-fold cross validation setting. Our results showed good classification performance with an average AUC of 0.96.
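    The feature extraction described above (splitting a registered scan into k × k × k blocks and taking block-mean intensities) can be sketched in a few lines. This toy version assumes a cubic volume stored as nested Python lists with edge length divisible by k; it illustrates the idea rather than reproducing the authors' implementation.

```python
def block_mean_features(volume, k):
    """Split a cubic volume (nested z/y/x lists) into k*k*k equal blocks
    and return the k**3 block-mean intensities as a flat feature vector."""
    n = len(volume)               # assume an n x n x n volume, n divisible by k
    b = n // k                    # edge length of each block
    feats = []
    for bz in range(k):
        for by in range(k):
            for bx in range(k):
                total = 0
                for z in range(bz * b, (bz + 1) * b):
                    for y in range(by * b, (by + 1) * b):
                        for x in range(bx * b, (bx + 1) * b):
                            total += volume[z][y][x]
                feats.append(total / b ** 3)
    return feats

# Toy 4x4x4 "scan" whose intensity equals the z index; k=2 gives 8 features.
vol = [[[z for x in range(4)] for y in range(4)] for z in range(4)]
print(block_mean_features(vol, 2))  # [0.5, 0.5, 0.5, 0.5, 2.5, 2.5, 2.5, 2.5]
```

    The resulting vectors would then feed an SVM, one binary classifier per anatomical region in the one-vs-rest scheme.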

  8. Inter-observer agreement for Crohn's disease sub-phenotypes using the Montreal Classification: How good are we? A multi-centre Australasian study.

    PubMed

    Krishnaprasad, Krupa; Andrews, Jane M; Lawrance, Ian C; Florin, Timothy; Gearry, Richard B; Leong, Rupert W L; Mahy, Gillian; Bampton, Peter; Prosser, Ruth; Leach, Peta; Chitti, Laurie; Cock, Charles; Grafton, Rachel; Croft, Anthony R; Cooke, Sharon; Doecke, James D; Radford-Smith, Graham L

    2012-04-01

    Crohn's disease (CD) exhibits significant clinical heterogeneity. Classification systems attempt to describe this; however, their utility and reliability depend on inter-observer agreement (IOA). We therefore sought to evaluate IOA using the Montreal Classification (MC). De-identified clinical records of 35 CD patients from 6 Australian IBD centres were presented to 13 expert practitioners from 8 Australia and New Zealand Inflammatory Bowel Disease Consortium (ANZIBDC) centres. Practitioners classified the cases using the MC and forwarded data for central blinded analysis. IOA on smoking and medications was also tested. Kappa statistics were used, with pre-specified outcomes of κ>0.8, excellent; 0.61-0.8, good; 0.41-0.6, moderate; and κ≤0.4, poor. 97% of study cases had colonoscopy reports; however, only 31% had undergone a complete set of diagnostic investigations (colonoscopy, histology, SB imaging). At diagnosis, IOA was excellent for age, κ=0.84; good for disease location, κ=0.73; only moderate for upper GI disease (κ=0.57) and disease behaviour, κ=0.54; and good for the presence of perianal disease, κ=0.6. At last follow-up, IOA was good for location, κ=0.68; only moderate for upper GI disease (κ=0.43) and disease behaviour, κ=0.46; but excellent for the presence/absence of perianal disease, κ=0.88. IOA for immunosuppressant use ever and the presence of stricture were both good (κ=0.79 and 0.64, respectively). IOA using the MC is generally good; however, some areas are less consistent than others. Omissions and inaccuracies reduce the value of clinical data when comparing cohorts across different centres, and may impair the ability to translate genetic discoveries into clinical practice. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
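    The kappa statistic used throughout the study above can be computed directly from two raters' labels. A minimal sketch follows, with the study's pre-specified interpretation bands; the rater labels are invented for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n       # observed agreement
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

def grade(kappa):
    """Interpretation bands pre-specified in the study."""
    if kappa > 0.8:
        return "excellent"
    if kappa > 0.6:
        return "good"
    if kappa > 0.4:
        return "moderate"
    return "poor"

# Hypothetical Montreal location codes assigned by two observers.
a = ["L1", "L1", "L2", "L3", "L2", "L1", "L3", "L2"]
b = ["L1", "L1", "L2", "L3", "L1", "L1", "L3", "L3"]
k = cohens_kappa(a, b)
print(round(k, 2), grade(k))  # 0.63 good
```

    Note that multi-rater studies like this one typically use Fleiss' kappa or averaged pairwise kappas; the two-rater version above shows the core calculation.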

  9. True Color Image Analysis For Determination Of Bone Growth In Fluorochromic Biopsies

    NASA Astrophysics Data System (ADS)

    Madachy, Raymond J.; Chotivichit, Lee; Huang, H. K.; Johnson, Eric E.

    1989-05-01

    A true color imaging technique has been developed for analysis of microscopic fluorochromic bone biopsy images to quantify new bone growth. The technique searches for specified colors in a medical image for quantification of areas of interest. Based on a user supplied training set, a multispectral classification of pixel values is performed and used for segmenting the image. Good results were obtained when compared to manual tracings of new bone growth performed by an orthopedic surgeon. At a 95% confidence level, the hypothesis that there is no difference between the two methods can be accepted. Work is in progress to test bone biopsies with different colored stains and further optimize the analysis process using three-dimensional spectral ordering techniques.

  10. Multiscale Rotation-Invariant Convolutional Neural Networks for Lung Texture Classification.

    PubMed

    Wang, Qiangchang; Zheng, Yuanjie; Yang, Gongping; Jin, Weidong; Chen, Xinjian; Yin, Yilong

    2018-01-01

    We propose a new multiscale rotation-invariant convolutional neural network (MRCNN) model for classifying various lung tissue types on high-resolution computed tomography. MRCNN employs Gabor-local binary pattern features, which introduce a property valuable in image analysis: invariance to image scale and rotation. In addition, we offer an approach to the class-imbalance problem found in most existing work, accomplished by changing the overlap between adjacent patches. Experimental results on a public interstitial lung disease database show the superior performance of the proposed method over the state of the art.

  11. Hydrophilic interaction ultra-performance liquid chromatography coupled with triple-quadrupole tandem mass spectrometry for highly rapid and sensitive analysis of underivatized amino acids in functional foods.

    PubMed

    Zhou, Guisheng; Pang, Hanqing; Tang, Yuping; Yao, Xin; Mo, Xuan; Zhu, Shaoqing; Guo, Sheng; Qian, Dawei; Qian, Yefei; Su, Shulan; Zhang, Li; Jin, Chun; Qin, Yong; Duan, Jin-ao

    2013-05-01

    This work presented a new analytical methodology based on hydrophilic interaction ultra-performance liquid chromatography coupled with triple-quadrupole tandem mass spectrometry in multiple-reaction monitoring mode for the analysis of 24 underivatized free amino acids (FAAs) in functional foods. The proposed method, reported here for the first time, was validated by assessing the matrix effects, linearity, limits of detection and quantification, precision, repeatability, stability and recovery of all target compounds, and it was used to determine the nutritional FAA content of ginkgo seeds and further elucidate the nutritional value of this functional food. The results showed that ginkgo seed is a good source of FAAs, with high levels of several essential FAAs, and has good nutritional value. Furthermore, principal component analysis was performed to classify the ginkgo seed samples on the basis of the 24 FAAs. The samples clustered mainly into three groups, consistent with their areas of origin. Overall, the presented method should be useful for the investigation of amino acids in edible plants and agricultural products.

  12. An automatic classifier of emotions built from entropy of noise.

    PubMed

    Ferreira, Jacqueline; Brás, Susana; Silva, Carlos F; Soares, Sandra C

    2017-04-01

    The electrocardiogram (ECG) signal has been widely used to study the physiological substrates of emotion. However, searching for better filtering techniques in order to obtain a signal with better quality and with the maximum relevant information remains an important issue for researchers in this field. Signal processing is largely performed for ECG analysis and interpretation, but this process can be susceptible to error in the delineation phase. In addition, it can lead to the loss of important information that is usually considered noise and, consequently, discarded from the analysis. The goal of this study was to evaluate whether ECG noise allows for the classification of emotions, using its entropy as an input to a decision tree classifier. We collected the ECG signal from 25 healthy participants while they were presented with videos eliciting negative (fear and disgust) and neutral emotions. The results indicated that the neutral condition was identified perfectly (100%), whereas the classification of negative emotions showed good identification performance (60% sensitivity and 80% specificity). These results suggest that the entropy of noise contains relevant information that can be useful for improving the analysis of the physiological correlates of emotion. © 2016 Society for Psychophysiological Research.
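    A histogram-based Shannon entropy, of the kind that might serve as the classifier input above, can be sketched as follows. The bin count and example signals are illustrative assumptions, not the study's parameters.

```python
import math
import random
from collections import Counter

def shannon_entropy(samples, n_bins=16):
    """Histogram-based Shannon entropy (bits) of a 1-D signal."""
    lo, hi = min(samples), max(samples)
    if hi == lo:
        return 0.0                      # a constant signal carries no information
    width = (hi - lo) / n_bins
    bins = Counter(min(int((s - lo) / width), n_bins - 1) for s in samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in bins.values())

# A noisy residual should carry more entropy than a clean, repetitive one.
random.seed(0)
noisy = [random.gauss(0, 1) for _ in range(1000)]   # stand-in for ECG noise
clean = [0.0, 0.1] * 500                            # stand-in for a filtered trace
print(shannon_entropy(noisy) > shannon_entropy(clean))
```

    In the study's setup, such entropy values, computed per recording, would feed a decision tree that separates neutral from negative conditions.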

  13. A New Dusts Sensor for Cultural Heritage Applications Based on Image Processing

    PubMed Central

    Proietti, Andrea; Leccese, Fabio; Caciotta, Maurizio; Morresi, Fabio; Santamaria, Ulderico; Malomo, Carmela

    2014-01-01

    In this paper, we propose a new sensor for the detection and analysis of dusts (seen as powders and fibers) in indoor environments, especially designed for applications in the field of Cultural Heritage or in other contexts where the presence of dust requires special care (surgery, clean rooms, etc.). The presented system relies on image processing techniques (enhancement, noise reduction, segmentation, metrics analysis) and it allows obtaining both qualitative and quantitative information on the accumulation of dust. This information aims to identify the geometric and topological features of the elements of the deposit. The curators can use this information in order to design suitable prevention and maintenance actions for objects and environments. The sensor consists of simple and relatively cheap tools, based on a high-resolution image acquisition system, a preprocessing software to improve the captured image and an analysis algorithm for the feature extraction and the classification of the elements of the dust deposit. We carried out some tests in order to validate the system operation. These tests were performed within the Sistine Chapel in the Vatican Museums, showing the good performance of the proposed sensor in terms of execution time and classification accuracy. PMID:24901977

  14. Natural stimuli improve auditory BCIs with respect to ergonomics and performance

    NASA Astrophysics Data System (ADS)

    Höhne, Johannes; Krenzlin, Konrad; Dähne, Sven; Tangermann, Michael

    2012-08-01

    Moving from well-controlled, brisk artificial stimuli to natural and less-controlled stimuli seems counter-intuitive for event-related potential (ERP) studies. As natural stimuli typically contain a richer internal structure, they might introduce higher levels of variance and jitter in the ERP responses. Both characteristics are unfavorable for good single-trial classification of ERPs in the context of a multi-class brain-computer interface (BCI) system, where the class-discriminant information between target stimuli and non-target stimuli must be maximized. For the application in an auditory BCI system, however, the transition from simple artificial tones to natural syllables can be useful despite the variance introduced. In the presented study, healthy users (N = 9) participated in an offline auditory nine-class BCI experiment with artificial and natural stimuli. It is shown that the use of syllables as natural stimuli not only improves the users’ ergonomic ratings but also increases classification performance. Moreover, natural stimuli obtain a better balance in multi-class decisions, such that the number of systematic confusions between the nine classes is reduced. Hopefully, our findings may contribute to making auditory BCI paradigms more user-friendly and applicable for patients.

  15. 28 CFR 522.15 - No good time credits for inmates serving only civil contempt commitments.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... only civil contempt commitments. 522.15 Section 522.15 Judicial Administration BUREAU OF PRISONS, DEPARTMENT OF JUSTICE INMATE ADMISSION, CLASSIFICATION, AND TRANSFER ADMISSION TO INSTITUTION Civil Contempt of Court Commitments § 522.15 No good time credits for inmates serving only civil contempt...

  16. Conditional High-Order Boltzmann Machines for Supervised Relation Learning.

    PubMed

    Huang, Yan; Wang, Wei; Wang, Liang; Tan, Tieniu

    2017-09-01

    Relation learning is a fundamental problem in many vision tasks. Recently, high-order Boltzmann machine and its variants have shown their great potentials in learning various types of data relation in a range of tasks. But most of these models are learned in an unsupervised way, i.e., without using relation class labels, which are not very discriminative for some challenging tasks, e.g., face verification. In this paper, with the goal to perform supervised relation learning, we introduce relation class labels into conventional high-order multiplicative interactions with pairwise input samples, and propose a conditional high-order Boltzmann Machine (CHBM), which can learn to classify the data relation in a binary classification way. To be able to deal with more complex data relation, we develop two improved variants of CHBM: 1) latent CHBM, which jointly performs relation feature learning and classification, by using a set of latent variables to block the pathway from pairwise input samples to output relation labels and 2) gated CHBM, which untangles factors of variation in data relation, by exploiting a set of latent variables to multiplicatively gate the classification of CHBM. To reduce the large number of model parameters generated by the multiplicative interactions, we approximately factorize high-order parameter tensors into multiple matrices. Then, we develop efficient supervised learning algorithms, by first pretraining the models using joint likelihood to provide good parameter initialization, and then finetuning them using conditional likelihood to enhance the discriminant ability. We apply the proposed models to a series of tasks including invariant recognition, face verification, and action similarity labeling. Experimental results demonstrate that by exploiting supervised relation labels, our models can greatly improve the performance.

  17. Short text sentiment classification based on feature extension and ensemble classifier

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Zhu, Xie

    2018-05-01

    With the rapid development of Internet social media, mining the emotional tendencies of short texts from the Internet to acquire useful information has attracted the attention of researchers. Current approaches can broadly be attributed to rule-based classification and statistical machine-learning methods. Although micro-blog sentiment analysis has made good progress, shortcomings remain, such as limited accuracy and the strong dependence of the classification effect on the available features. Considering the characteristics of Chinese short texts, such as scant information, sparse features, and diverse expressions, this paper expands the original text by mining related semantic information from reviews, forwarding, and other associated information. First, Word2vec is used to compute word similarity in order to extend the feature words. An ensemble classifier composed of SVM, KNN and HMM is then used to analyze the sentiment of micro-blog short texts. The experimental results show that the proposed method makes good use of comment and forwarding information to extend the original features. Compared with the traditional method, the accuracy, recall and F1 value obtained by this method are all improved.
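    The ensemble step described above (combining SVM, KNN and HMM outputs) is commonly implemented as majority voting. A minimal, hypothetical sketch follows; the `Stub` classes merely stand in for trained base models, since the real ones are not specified here.

```python
from collections import Counter

class MajorityVoteEnsemble:
    """Combine independently trained base classifiers by majority vote.
    Each base classifier is any object exposing a .predict(sample) method."""
    def __init__(self, classifiers):
        self.classifiers = classifiers

    def predict(self, sample):
        votes = Counter(clf.predict(sample) for clf in self.classifiers)
        return votes.most_common(1)[0][0]   # most frequent label wins

# Stand-ins for trained SVM / KNN / HMM models (illustrative only).
class Stub:
    def __init__(self, label):
        self.label = label
    def predict(self, sample):
        return self.label

ensemble = MajorityVoteEnsemble([Stub("positive"), Stub("positive"), Stub("negative")])
print(ensemble.predict("great movie"))  # two of three base models vote "positive"
```

    Weighted voting (e.g. weighting each base model by its validation accuracy) is a common refinement of this scheme.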

  18. Ventricular beat classifier using fractal number clustering.

    PubMed

    Bakardjian, H

    1992-09-01

    A two-stage ventricular beat 'associative' classification procedure is described. The first stage separates typical beats from extrasystoles on the basis of area and polarity rules. At the second stage, the extrasystoles are classified in self-organised cluster formations of adjacent shape parameter values. This approach avoids the use of threshold values for discrimination between ectopic beats of different shapes, which could be critical in borderline cases. A pattern shape feature conventionally called a 'fractal number', in combination with a polarity attribute, was found to be a good criterion for waveform evaluation. An additional advantage of this pattern classification method is its good computational efficiency, which affords the opportunity to implement it in real-time systems.

  19. A Classification Scheme for Smart Manufacturing Systems’ Performance Metrics

    PubMed Central

    Lee, Y. Tina; Kumaraguru, Senthilkumaran; Jain, Sanjay; Robinson, Stefanie; Helu, Moneer; Hatim, Qais Y.; Rachuri, Sudarsan; Dornfeld, David; Saldana, Christopher J.; Kumara, Soundar

    2017-01-01

    This paper proposes a classification scheme for performance metrics for smart manufacturing systems. The discussion focuses on three such metrics: agility, asset utilization, and sustainability. For each of these metrics, we discuss classification themes, which we then use to develop a generalized classification scheme. In addition to the themes, we discuss a conceptual model that may form the basis for the information necessary for performance evaluations. Finally, we present future challenges in developing robust, performance-measurement systems for real-time, data-intensive enterprises. PMID:28785744

  20. I-CAN: the classification and prediction of support needs.

    PubMed

    Arnold, Samuel R C; Riches, Vivienne C; Stancliffe, Roger J

    2014-03-01

    Since 1992, the diagnosis and classification of intellectual disability has been dependent upon three constructs: intelligence, adaptive behaviour and support needs (Luckasson et al. 1992. Mental Retardation: Definition, Classification and Systems of Support. American Association on Intellectual and Developmental Disability, Washington, DC). While the methods and instruments to measure intelligence and adaptive behaviour are well established and generally accepted, the measurement and classification of support needs is still in its infancy. This article explores the measurement and classification of support needs. A study is presented comparing scores on the ICF (WHO, 2001) based I-CAN v4.2 support needs assessment and planning tool with expert clinical judgment using a proposed classification of support needs. A logical classification algorithm was developed and validated on a separate sample. Good internal consistency (range 0.73-0.91, N = 186) and criterion validity (κ = 0.94, n = 49) were found. Further advances in our understanding and measurement of support needs could change the way we assess, describe and classify disability. © 2013 John Wiley & Sons Ltd.

  1. Combining Review Text Content and Reviewer-Item Rating Matrix to Predict Review Rating

    PubMed Central

    Wang, Bingkun; Huang, Yongfeng; Li, Xing

    2016-01-01

    E-commerce is developing rapidly, and learning to take good advantage of the myriad reviews from online customers has become crucial to success, which calls for ever greater accuracy in the sentiment classification of these reviews. Finer-grained review rating prediction is therefore preferred over rough binary sentiment classification. Current review rating prediction methods fall into two main types. One comprises methods based on review text content, which focus almost exclusively on the text and seldom exploit the reviewers and items referenced in other relevant reviews. The other comprises methods based on collaborative filtering, which extract information from previous records in the reviewer-item rating matrix but ignore review textual content. Here we propose a framework for review rating prediction that effectively combines the two, and we further propose three specific methods under this framework. Experiments on two movie review datasets demonstrate that our review rating prediction framework performs better than previous methods. PMID:26880879

  2. Application of Hyperspectral Imaging to Detect Sclerotinia sclerotiorum on Oilseed Rape Stems

    PubMed Central

    Kong, Wenwen; Zhang, Chu; Huang, Weihao

    2018-01-01

    Hyperspectral imaging covering the spectral range of 384–1034 nm combined with chemometric methods was used to detect Sclerotinia sclerotiorum (SS) on oilseed rape stems by two sample sets (60 healthy and 60 infected stems for each set). Second derivative spectra and PCA loadings were used to select the optimal wavelengths. Discriminant models were built and compared to detect SS on oilseed rape stems, including partial least squares-discriminant analysis, radial basis function neural network, support vector machine and extreme learning machine. The discriminant models using full spectra and optimal wavelengths showed good performance with classification accuracies of over 80% for the calibration and prediction set. Comparing all developed models, the optimal classification accuracies of the calibration and prediction set were over 90%. The similarity of selected optimal wavelengths also indicated the feasibility of using hyperspectral imaging to detect SS on oilseed rape stems. The results indicated that hyperspectral imaging could be used as a fast, non-destructive and reliable technique to detect plant diseases on stems. PMID:29300315

  3. Identification of vegetable oil botanical speciation in refined vegetable oil blends using an innovative combination of chromatographic and spectroscopic techniques.

    PubMed

    Osorio, Maria Teresa; Haughey, Simon A; Elliott, Christopher T; Koidis, Anastasios

    2015-12-15

    European Regulation 1169/2011 requires producers of foods that contain refined vegetable oils to label the oil types. A novel rapid and staged methodology has been developed for the first time to identify common oil species in oil blends. The qualitative method consists of a combination of a Fourier Transform Infrared (FTIR) spectroscopy to profile the oils and fatty acid chromatographic analysis to confirm the composition of the oils when required. Calibration models and specific classification criteria were developed and all data were fused into a simple decision-making system. The single lab validation of the method demonstrated the very good performance (96% correct classification, 100% specificity, 4% false positive rate). Only a small fraction of the samples needed to be confirmed with the majority of oils identified rapidly using only the spectroscopic procedure. The results demonstrate the huge potential of the methodology for a wide range of oil authenticity work. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Forecasting the Emergency Department Patients Flow.

    PubMed

    Afilal, Mohamed; Yalaoui, Farouk; Dugardin, Frédéric; Amodeo, Lionel; Laplanche, David; Blua, Philippe

    2016-07-01

    Emergency departments (ED) have become the patient's main point of entrance in modern hospitals, causing frequent overcrowding; hospital managers are therefore increasingly paying attention to the ED in order to provide better-quality service for patients. One of the key elements of a good management strategy is demand forecasting, in this case forecasting patient flow, which will help decision makers to optimize human (doctors, nurses, …) and material (beds, boxes, …) resource allocation. The main interest of this research is forecasting daily attendance at an emergency department. The study was conducted on the Emergency Department of the Troyes city hospital center, France, for which we propose a new practical ED patient classification that consolidates the CCMU and GEMSA categories into one, together with innovative time-series-based models to forecast long- and short-term daily attendance. The models we developed for this case study show very good performance (up to 91.24% for the annual total flow forecast) and robustness to epidemic periods.
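    Daily ED attendance has a strong weekly seasonality, and a seasonal moving average is one of the simplest time-series baselines for such data. The sketch below is an illustrative baseline under that assumption, not the authors' models; the weekly pattern and window length are invented.

```python
def forecast_daily(history, horizon, season=7, window=4):
    """Forecast daily attendance with a seasonal moving average:
    each future day is predicted as the mean of the last `window`
    observations falling on the same weekday (season = 7 days)."""
    series = list(history)
    preds = []
    for _ in range(horizon):
        same_day = series[-season::-season][:window]  # walk back one week at a time
        pred = sum(same_day) / len(same_day)
        preds.append(pred)
        series.append(pred)                           # roll the forecast forward
    return preds

# Four weeks of synthetic attendance with a weekly pattern (illustrative).
week = [120, 115, 110, 112, 118, 140, 135]
history = week * 4
print(forecast_daily(history, 7))
```

    On this perfectly periodic toy history the forecaster simply reproduces the weekly profile; real models would add trend, holiday, and epidemic-period effects.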

  6. Prediction of pelvic organ prolapse using an artificial neural network.

    PubMed

    Robinson, Christopher J; Swift, Steven; Johnson, Donna D; Almeida, Jonas S

    2008-08-01

    The objective of this investigation was to test the ability of a feedforward artificial neural network (ANN) to differentiate patients who have pelvic organ prolapse (POP) from those who retain good pelvic organ support. Following institutional review board approval, patients with POP (n = 87) and controls with good pelvic organ support (n = 368) were identified from the urogynecology research database. Historical and clinical information was extracted from the database. Data analysis included the training of a feedforward ANN, variable selection, and external validation of the model with an independent data set. Twenty variables were used. The median-performing ANN model used a median of 3 (quartile 1: 3 to quartile 3: 5) variables and achieved an area under the receiver operating characteristic curve of 0.90 on the external, independent validation set. Ninety percent sensitivity and 83% specificity were obtained in the external validation by ANN classification. Feedforward ANN modeling is applicable to the identification and prediction of POP.

  7. The Hearing Outcomes of Cochlear Implantation in Waardenburg Syndrome.

    PubMed

    Koyama, Hajime; Kashio, Akinori; Sakata, Aki; Tsutsumiuchi, Katsuhiro; Matsumoto, Yu; Karino, Shotaro; Kakigi, Akinobu; Iwasaki, Shinichi; Yamasoba, Tatsuya

    2016-01-01

    Objectives. This study aimed to determine the feasibility of cochlear implantation for sensorineural hearing loss in patients with Waardenburg syndrome. Method. A retrospective chart review was performed on patients who underwent cochlear implantation at the University of Tokyo Hospital. Clinical classification, genetic mutation, clinical course, preoperative hearing threshold, high-resolution computed tomography of the temporal bone, and postoperative hearing outcome were assessed. Result. Five children with Waardenburg syndrome underwent cochlear implantation. The average age at implantation was 2 years 11 months (ranging from 1 year 9 months to 6 years 3 months). Four patients had congenital profound hearing loss and one patient had progressive hearing loss. Two patients had an inner ear malformation of cochlear incomplete partition type 2. No surgical complication or difficulty was seen in any patient. All patients showed good hearing outcome postoperatively. Conclusion. Cochlear implantation could be a good treatment option for Waardenburg syndrome.

  9. Determination of optimum threshold values for EMG time domain features; a multi-dataset investigation

    NASA Astrophysics Data System (ADS)

    Nlandu Kamavuako, Ernest; Scheme, Erik Justin; Englehart, Kevin Brian

    2016-08-01

    Objective. For over two decades, Hudgins’ set of time domain features has been extensively applied to the classification of hand motions. The calculation of slope sign change and zero crossing features uses a threshold to attenuate the effect of background noise. However, there is no consensus on the optimum threshold value. In this study, we investigate for the first time the effect of threshold selection on the feature space and classification accuracy using multiple datasets. Approach. In the first part, four datasets were used, and classification error (CE), separability index, scatter matrix separability criterion, and cardinality of the features were used as performance measures. In the second part, data from eight classes were collected during two separate days, with two days in between, from eight able-bodied subjects. The threshold for each feature was computed as a factor (R = 0:0.01:4) times the average root mean square of data during rest. For each day, we quantified CE for R = 0 (CEr0) and the minimum error (CEbest). Moreover, a cross-day threshold validation was applied where, for example, CE of day two (CEodt) is computed based on the optimum threshold from day one and vice versa. Finally, we quantified the effect of the threshold when using training data from one day and test data from the other. Main results. All performance metrics generally degraded with increasing threshold values. On average, CEbest (5.26 ± 2.42%) was significantly better than CEr0 (7.51 ± 2.41%, P = 0.018) and CEodt (7.50 ± 2.50%, P = 0.021). During the two-fold validation between days, CEbest performed similarly to CEr0. Interestingly, when the threshold values optimized per subject on one day were applied to the other day's data, classification performance decreased. Significance. We have demonstrated that the threshold value has a strong impact on the feature space and that an optimum threshold can be quantified.
However, this optimum threshold is highly data and subject driven and thus does not generalize well. There is strong evidence that R = 0 provides a good trade-off between system performance and generalization. These findings are important for the practical use of pattern recognition based myoelectric control.
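
    The threshold-dependent zero crossing (ZC) and slope sign change (SSC) features discussed above can be sketched as follows. This is a minimal illustration assuming the common textbook definitions of the two Hudgins features; the function names and synthetic signals are ours, not the authors'.

```python
import numpy as np

def zero_crossings(x, thresh):
    """Count sign changes whose amplitude step exceeds the threshold."""
    sign_change = np.diff(np.signbit(x).astype(int)) != 0   # sign flip between samples
    big_enough = np.abs(np.diff(x)) >= thresh               # suppress noise-level flips
    return int(np.sum(sign_change & big_enough))

def slope_sign_changes(x, thresh):
    """Count slope reversals where both adjacent steps exceed the threshold."""
    d = np.diff(x)
    reversal = d[:-1] * d[1:] < 0
    big_enough = (np.abs(d[:-1]) >= thresh) & (np.abs(d[1:]) >= thresh)
    return int(np.sum(reversal & big_enough))

# Threshold as in the study: R times the RMS of a rest recording.
rest = np.random.default_rng(0).normal(0.0, 0.01, 2000)     # synthetic baseline noise
rms_rest = np.sqrt(np.mean(rest ** 2))
R = 1.5                                                     # one value on the 0:0.01:4 grid
thresh = R * rms_rest

signal = np.sin(np.linspace(0, 20 * np.pi, 1000)) \
    + 0.005 * np.random.default_rng(1).normal(size=1000)
zc = zero_crossings(signal, thresh)
ssc = slope_sign_changes(signal, thresh)
```

    Setting R = 0 disables the amplitude test entirely, which matches the configuration the study found to generalize best across days and subjects.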

  10. Determination of optimum threshold values for EMG time domain features; a multi-dataset investigation.

    PubMed

    Kamavuako, Ernest Nlandu; Scheme, Erik Justin; Englehart, Kevin Brian

    2016-08-01

    For over two decades, Hudgins' set of time domain features has been extensively applied to the classification of hand motions. The calculation of slope sign change and zero crossing features uses a threshold to attenuate the effect of background noise. However, there is no consensus on the optimum threshold value. In this study, we investigate for the first time the effect of threshold selection on the feature space and classification accuracy using multiple datasets. In the first part, four datasets were used, and classification error (CE), separability index, scatter matrix separability criterion, and cardinality of the features were used as performance measures. In the second part, data from eight classes were collected during two separate days, with two days in between, from eight able-bodied subjects. The threshold for each feature was computed as a factor (R = 0:0.01:4) times the average root mean square of data during rest. For each day, we quantified CE for R = 0 (CEr0) and the minimum error (CEbest). Moreover, a cross-day threshold validation was applied where, for example, CE of day two (CEodt) is computed based on the optimum threshold from day one and vice versa. Finally, we quantified the effect of the threshold when using training data from one day and test data from the other. All performance metrics generally degraded with increasing threshold values. On average, CEbest (5.26 ± 2.42%) was significantly better than CEr0 (7.51 ± 2.41%, P = 0.018) and CEodt (7.50 ± 2.50%, P = 0.021). During the two-fold validation between days, CEbest performed similarly to CEr0. Interestingly, when the threshold values optimized per subject on one day were applied to the other day's data, classification performance decreased. We have demonstrated that the threshold value has a strong impact on the feature space and that an optimum threshold can be quantified. However, this optimum threshold is highly data and subject driven and thus does not generalize well.
There is strong evidence that R = 0 provides a good trade-off between system performance and generalization. These findings are important for the practical use of pattern recognition based myoelectric control.

  11. A new hierarchical method for inter-patient heartbeat classification using random projections and RR intervals

    PubMed Central

    2014-01-01

    Background The inter-patient classification schema and the Association for the Advancement of Medical Instrumentation (AAMI) standards are important to the construction and evaluation of automated heartbeat classification systems. The majority of previously proposed methods that take the above two aspects into consideration use the same features and classification method to classify different classes of heartbeats. The performance of the classification system is often unsatisfactory with respect to the ventricular ectopic beat (VEB) and supraventricular ectopic beat (SVEB). Methods Based on the different characteristics of VEB and SVEB, a novel hierarchical heartbeat classification system was constructed. This was done in order to improve the classification performance of these two classes of heartbeats by using different features and classification methods. First, random projection and support vector machine (SVM) ensemble were used to detect VEB. Then, the ratio of the RR interval was compared to a predetermined threshold to detect SVEB. The optimal parameters for the classification models were selected on the training set and used in the independent testing set to assess the final performance of the classification system. Meanwhile, the effect of different lead configurations on the classification results was evaluated. Results Results showed that the performance of this classification system was notably superior to that of other methods. The VEB detection sensitivity was 93.9% with a positive predictive value of 90.9%, and the SVEB detection sensitivity was 91.1% with a positive predictive value of 42.2%. In addition, this classification process was relatively fast. Conclusions A hierarchical heartbeat classification system was proposed based on the inter-patient data division to detect VEB and SVEB. It demonstrated better classification performance than existing methods. 
It can be regarded as a promising system for detecting VEB and SVEB of unknown patients in clinical practice. PMID:24981916
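
    The two-stage scheme described above, a random projection plus SVM ensemble for VEB detection followed by an RR-interval ratio threshold for SVEB, can be sketched roughly as below. Everything concrete here (the synthetic beats, the 0.85 ratio cutoff, the three-member bootstrap ensemble) is an assumption for illustration; the paper selects its models and threshold on its training set.

```python
import numpy as np
from sklearn.random_projection import GaussianRandomProjection
from sklearn.svm import SVC

# Stage 1 training: project beat waveforms to a low dimension, then
# fit an SVM ensemble on bootstrap resamples of the projected beats.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 250))                   # 200 synthetic beats, 250 samples each
y_veb = (X[:, :50].mean(axis=1) > 0).astype(int)  # toy stand-in for VEB / non-VEB labels

proj = GaussianRandomProjection(n_components=30, random_state=0)
Xp = proj.fit_transform(X)

ensemble = []
for seed in range(3):
    idx = np.random.default_rng(seed).integers(0, len(Xp), len(Xp))
    ensemble.append(SVC().fit(Xp[idx], y_veb[idx]))

RR_RATIO_THRESHOLD = 0.85  # assumed value; in the paper it is tuned on training data

def classify_beat(beat, rr_ratio):
    """Stage 1: majority vote of the SVM ensemble flags VEB.
    Stage 2: a shortened RR interval flags the remaining beats as SVEB."""
    zp = proj.transform(beat.reshape(1, -1))
    votes = sum(int(m.predict(zp)[0]) for m in ensemble)
    if votes >= 2:                        # majority says ventricular ectopic
        return "VEB"
    if rr_ratio < RR_RATIO_THRESHOLD:     # premature beat shortens the RR interval
        return "SVEB"
    return "normal"
```

    The hierarchy is the point: VEB and SVEB get different features and different decision mechanisms, rather than one feature set shared across all classes.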

  12. Treatment outcomes of saddle nose correction.

    PubMed

    Hyun, Sang Min; Jang, Yong Ju

    2013-01-01

    Many valuable classification schemes for saddle nose have been suggested that integrate clinical deformity and treatment; however, there is no consensus regarding the most suitable classification and surgical method for saddle nose correction. To present clinical characteristics and treatment outcomes of saddle nose deformity and to propose a modified classification system to better characterize the variety of different saddle nose deformities. The retrospective study included 91 patients who underwent rhinoplasty for correction of saddle nose from April 1, 2003, through December 31, 2011, with a minimum follow-up of 8 months. Saddle nose was classified into 4 types according to a modified classification. Aesthetic outcomes were classified as excellent, good, fair, or poor. Patients underwent minor cosmetic concealment by dorsal augmentation (n = 8) or major septal reconstruction combined with dorsal augmentation (n = 83). Autologous costal cartilages were used in 40 patients (44%), and homologous costal cartilages were used in 5 patients (6%). According to postoperative assessment, 29 patients had excellent, 42 patients had good, 18 patients had fair, and 2 patients had poor aesthetic outcomes. No statistically significant difference in surgical outcome according to saddle nose classification was observed. Eight patients underwent revision rhinoplasty, owing to recurrence of the saddle deformity, wound infection, or warping of the costal cartilage used for dorsal augmentation. We introduce a modified saddle nose classification scheme that is simpler and better able to characterize different deformities. Among 91 patients with saddle nose, 20 (22%) had unsuccessful outcomes (fair or poor) and 8 (9%) underwent subsequent revision rhinoplasty. Thus, management of saddle nose deformities remains challenging. Level of evidence: 4.

  13. Contrast-Induced Nephropathy Is Less Common in Patients with Good Coronary Collateral Circulation.

    PubMed

    Avci, Eyup; Yildirim, Tarik; Kadi, Hasan

    2017-10-01

    Contrast-induced nephropathy (CIN) is a typically reversible type of acute renal failure that develops after exposure to contrast agents; underlying endothelial dysfunction is thought to be an important risk factor for CIN. Although the mechanism of coronary collateral circulation (CCC) is not fully understood, a pivotal role of the endothelium has been reported in many studies. The aim of this study was to investigate whether there is a relationship between CCC and CIN. Patients with at least one occluded major coronary artery and blood creatinine analyses performed before and on the second day after angiography were included in the study. CIN was defined as a 25% or greater elevation of creatinine on the second day after exposure to the contrast agent. Collateral grading was performed according to the Rentrop classification. Patients were grouped according to whether they developed CIN, i.e., the CIN(-) and CIN(+) groups. A total of 214 patients who met the inclusion criteria were included in the study. CIN was diagnosed in 43 patients (20.1%) of the study population. Good CCC was identified in 112 patients (65.5%) in the CIN(-) group, whereas it was identified in 13 patients (30.2%) in the CIN(+) group; good CCC was thus significantly more frequent in the CIN(-) group (p < 0.001). Furthermore, collateral circulation was an independent predictor of CIN. Good collateral circulation was associated with a lower frequency of CIN, and poor collateral circulation was an independent predictor of CIN.
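
    The study's operational definitions reduce to two small predicates. A minimal sketch, assuming the conventional Rentrop 2-3 cutoff for "good" collaterals (the abstract grades collaterals by Rentrop but does not state the cutoff):

```python
def has_cin(cr_before, cr_day2):
    """CIN per the study definition: a 25% or greater rise in serum
    creatinine on the second day after contrast exposure."""
    return (cr_day2 - cr_before) / cr_before >= 0.25

def good_collaterals(rentrop_grade):
    """Rentrop grades 2-3 taken as good collateral circulation
    (assumed cutoff; not stated in the abstract)."""
    return rentrop_grade >= 2
```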

  14. Logic Learning Machine creates explicit and stable rules stratifying neuroblastoma patients

    PubMed Central

    2013-01-01

    Background Neuroblastoma is the most common pediatric solid tumor. About fifty percent of high-risk patients die despite treatment, making the exploration of new and more effective strategies for improving stratification mandatory. Hypoxia is a condition of low oxygen tension occurring in poorly vascularized areas of the tumor and is associated with poor prognosis. We had previously defined a robust gene expression signature measuring the hypoxic component of neuroblastoma tumors (NB-hypo), which is a molecular risk factor. We wanted to develop a prognostic classifier of neuroblastoma patients' outcome blending existing knowledge on clinical and molecular risk factors with the prognostic NB-hypo signature. Furthermore, we were interested in classifiers outputting explicit rules that could be easily translated into the clinical setting. Results The Shadow Clustering (SC) technique, which leads to final models called Logic Learning Machine (LLM), exhibits good accuracy and promises to fulfill the aims of the work. We utilized this algorithm to classify NB patients on the basis of the following risk factors: age at diagnosis, INSS stage, MYCN amplification and NB-hypo. The algorithm generated explicit classification rules in good agreement with existing clinical knowledge. Through an iterative procedure we identified and removed from the dataset those examples which caused instability in the rules. This workflow generated a stable classifier that was very accurate in predicting good- and poor-outcome patients. The good performance of the classifier was validated in an independent dataset. NB-hypo was an important component of the rules, with a strength similar to that of tumor staging. Conclusions The novelty of our work is the identification of stability, explicit rules and the blending of molecular and clinical risk factors as the key features for generating classification rules for NB patients to be conveyed to the clinic and used to design new therapies.
We derived, through LLM, a set of four stable rules identifying a new class of poor outcome patients that could benefit from new therapies potentially targeting tumor hypoxia or its consequences. PMID:23815266
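
    The appeal of the LLM approach is that its output is a set of explicit if-then rules over the four risk factors, readable directly by clinicians. A sketch of what such a rule set looks like; the conditions below are invented for illustration and are not the paper's actual four rules, which the abstract does not reproduce.

```python
def stratify(age_months, inss_stage, mycn_amplified, nb_hypo_high):
    """Toy rule set in the style of a Logic Learning Machine classifier.
    Every condition here is hypothetical, not the published rules."""
    if mycn_amplified and inss_stage == 4:
        return "poor"                           # classic high-risk combination
    if nb_hypo_high and age_months > 18:
        return "poor"                           # hypoxic signature plus older age
    if inss_stage in (1, 2) and not mycn_amplified:
        return "good"                           # localized, non-amplified disease
    return "poor" if nb_hypo_high else "good"   # fall back on the hypoxia signature
```

    The stability requirement in the paper amounts to demanding that rules like these do not flip when a few borderline training examples are removed.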

  15. Logic Learning Machine creates explicit and stable rules stratifying neuroblastoma patients.

    PubMed

    Cangelosi, Davide; Blengio, Fabiola; Versteeg, Rogier; Eggert, Angelika; Garaventa, Alberto; Gambini, Claudio; Conte, Massimo; Eva, Alessandra; Muselli, Marco; Varesio, Luigi

    2013-01-01

    Neuroblastoma is the most common pediatric solid tumor. About fifty percent of high-risk patients die despite treatment, making the exploration of new and more effective strategies for improving stratification mandatory. Hypoxia is a condition of low oxygen tension occurring in poorly vascularized areas of the tumor and is associated with poor prognosis. We had previously defined a robust gene expression signature measuring the hypoxic component of neuroblastoma tumors (NB-hypo), which is a molecular risk factor. We wanted to develop a prognostic classifier of neuroblastoma patients' outcome blending existing knowledge on clinical and molecular risk factors with the prognostic NB-hypo signature. Furthermore, we were interested in classifiers outputting explicit rules that could be easily translated into the clinical setting. The Shadow Clustering (SC) technique, which leads to final models called Logic Learning Machine (LLM), exhibits good accuracy and promises to fulfill the aims of the work. We utilized this algorithm to classify NB patients on the basis of the following risk factors: age at diagnosis, INSS stage, MYCN amplification and NB-hypo. The algorithm generated explicit classification rules in good agreement with existing clinical knowledge. Through an iterative procedure we identified and removed from the dataset those examples which caused instability in the rules. This workflow generated a stable classifier that was very accurate in predicting good- and poor-outcome patients. The good performance of the classifier was validated in an independent dataset. NB-hypo was an important component of the rules, with a strength similar to that of tumor staging. The novelty of our work is the identification of stability, explicit rules and the blending of molecular and clinical risk factors as the key features for generating classification rules for NB patients to be conveyed to the clinic and used to design new therapies.
We derived, through LLM, a set of four stable rules identifying a new class of poor outcome patients that could benefit from new therapies potentially targeting tumor hypoxia or its consequences.

  16. The Oxfordshire Community Stroke Project classification: correlation with imaging, associated complications, and prediction of outcome in acute ischemic stroke.

    PubMed

    Pittock, Sean J; Meldrum, Dara; Hardiman, Orla; Thornton, John; Brennan, Paul; Moroney, Joan T

    2003-01-01

    This preliminary study investigates the risk factor profile, post-stroke complications, and outcome for the four OCSP (Oxfordshire Community Stroke Project) subtypes. One hundred seventeen consecutive ischemic stroke patients were clinically classified into 1 of 4 subtypes: total anterior (TACI), partial anterior (PACI), lacunar (LACI), and posterior (POCI) circulation infarcts. Study evaluations were performed at admission, 2 weeks, and 6 months. There was a good correlation between clinical classification and radiological diagnosis if a negative CT head was considered consistent with a lacunar infarction. No significant difference in risk factor profile was observed between subtypes. The TACI group had significantly higher mortality (P < .001), morbidity (P < .001, as per disability scales), length of hospital stay (P < .001), and complications (respiratory tract infection and seizures [P < .01]) as compared to the other three groups, which were all similar at the different time points. The only significant difference found among the latter three groups was a higher rate of stroke recurrence within the first 6 months in the POCI group (P < .001). The OCSP classification identifies two major groups (TACI and the other 3 groups combined) who behave differently with respect to post-stroke outcome. Further study with larger numbers of patients, and thus greater power, will be required to allow better discrimination of OCSP subtypes with respect to risk factors, complications, and outcomes if the OCSP is to be used to stratify patients in clinical trials.

  17. Sparse Multivariate Autoregressive Modeling for Mild Cognitive Impairment Classification

    PubMed Central

    Li, Yang; Wee, Chong-Yaw; Jie, Biao; Peng, Ziwen

    2014-01-01

    Brain connectivity networks derived from functional magnetic resonance imaging (fMRI) are becoming increasingly prevalent in research related to cognitive and perceptual processes. The capability to detect causal or effective connectivity is highly desirable for understanding the cooperative nature of brain networks, particularly when the ultimate goal is to obtain good performance in control-patient classification with biologically meaningful interpretations. Understanding directed functional interactions between brain regions via a brain connectivity network is a challenging task. Since many genetic and biomedical networks are intrinsically sparse, incorporating the sparsity property into connectivity modeling can make the derived models more biologically plausible. Accordingly, we propose an effective connectivity modeling of resting-state fMRI data based on the multivariate autoregressive (MAR) modeling technique, which is widely used to characterize temporal information of dynamic systems. This MAR modeling technique allows for the identification of effective connectivity using the Granger causality concept and reduces spurious causal connectivity in the assessment of directed functional interactions from fMRI data. A forward orthogonal least squares (OLS) regression algorithm is further used to construct a sparse MAR model. By applying the proposed modeling to mild cognitive impairment (MCI) classification, we identify several most discriminative regions, including the middle cingulate gyrus, posterior cingulate gyrus, lingual gyrus and caudate regions, in line with previous findings. A relatively high classification accuracy of 91.89% is also achieved, with an increment of 5.4% compared to the fully-connected, non-directional Pearson-correlation-based functional connectivity approach. PMID:24595922
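
    The core of the method, an MAR model made sparse by greedy forward selection of lagged regressors, can be sketched as below. This is a simplified first-order stand-in for the forward OLS algorithm, with invented function names and synthetic data; nonzero entries of the coefficient matrix are read as directed (Granger-style) influences between regions.

```python
import numpy as np

def sparse_mar(X, n_select=2):
    """Sparse first-order MAR fit: for each target region, greedily pick the
    n_select lagged regressors that most reduce residual error (a simplified
    stand-in for forward orthogonal least squares)."""
    Y, Z = X[1:], X[:-1]                      # targets and one-step-lagged predictors
    n = X.shape[1]
    A = np.zeros((n, n))
    for j in range(n):
        chosen, resid = [], Y[:, j].copy()
        coef = np.zeros(0)
        for _ in range(n_select):
            # score each remaining regressor by its squared projection on the residual
            scores = [(np.dot(Z[:, k], resid) ** 2 / np.dot(Z[:, k], Z[:, k]), k)
                      for k in range(n) if k not in chosen]
            chosen.append(max(scores)[1])
            coef, *_ = np.linalg.lstsq(Z[:, chosen], Y[:, j], rcond=None)
            resid = Y[:, j] - Z[:, chosen] @ coef
        A[j, chosen] = coef
    return A                                  # A[j, k] != 0 means region k -> region j

# Demo: region 0 drives region 1 with a one-step lag; region 2 is pure noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3)) * 0.1
for t in range(1, 400):
    X[t, 1] = 0.9 * X[t - 1, 0] + 0.02 * rng.normal()
A = sparse_mar(X, n_select=1)
```

    In the demo the recovered coefficient A[1, 0] is close to the true 0.9, while the noise region contributes nothing, which is the sparsity behavior the paper relies on.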

  18. 7 CFR 52.778 - Color.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Color. 52.778 Section 52.778 Agriculture Regulations... Cherries 1 Factors of Quality § 52.778 Color. (a) (A) classification. Canned red tart pitted cherries that have a good color may be given a score of 18 to 20 points. “Good color” means a practically uniform...

  19. 7 CFR 52.778 - Color.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Color. 52.778 Section 52.778 Agriculture Regulations... Cherries 1 Factors of Quality § 52.778 Color. (a) (A) classification. Canned red tart pitted cherries that have a good color may be given a score of 18 to 20 points. “Good color” means a practically uniform...

  20. 7 CFR 52.1006 - Color.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Color. 52.1006 Section 52.1006 Agriculture Regulations... United States Standards for Grades of Dates Factors of Quality § 52.1006 Color. (a) (A) classification. Whole or pitted dates that possess a good color may be given a score of 18 to 20 points. “Good color...

  1. Project DIPOLE WEST - Multiburst Environment (Non-Simultaneous Detonations)

    DTIC Science & Technology

    1976-09-01

    Purpose of the series was to obtain...HULL hydrodynamic air blast code show good correlation. ...supervision. Contributions were also made by Dr. John Dewey, University of Victoria; Mr. A. P. R. Lambert, Canadian General Electric; Mr. Charles Needham

  2. A Proposed Methodology to Classify Frontier Capital Markets

    DTIC Science & Technology

    2011-07-31

    but because it is the surest route to our common good.” -Inaugural Speech by President Barack Obama, Jan 2009 This project involves basic...machine learning. The algorithm consists of a unique binary classifier mechanism that combines three methods: k-Nearest Neighbors (kNN), ensemble...Through kNN Ensemble Classification Techniques E. Capital Market Classification Based on Capital Flows and Trading Architecture F. Horizontal

  3. A Proposed Methodology to Classify Frontier Capital Markets

    DTIC Science & Technology

    2011-07-31

    out of charity, but because it is the surest route to our common good.” -Inaugural Speech by President Barack Obama, Jan 2009 This project...identification, and machine learning. The algorithm consists of a unique binary classifier mechanism that combines three methods: k-Nearest Neighbors (kNN)...Support Through kNN Ensemble Classification Techniques E. Capital Market Classification Based on Capital Flows and Trading Architecture F

  4. CANDELS Visual Classifications: Scheme, Data Release, and First Results

    NASA Astrophysics Data System (ADS)

    Kartaltepe, Jeyhan S.; Mozena, Mark; Kocevski, Dale; McIntosh, Daniel H.; Lotz, Jennifer; Bell, Eric F.; Faber, Sandy; Ferguson, Harry; Koo, David; Bassett, Robert; Bernyk, Maksym; Blancato, Kirsten; Bournaud, Frederic; Cassata, Paolo; Castellano, Marco; Cheung, Edmond; Conselice, Christopher J.; Croton, Darren; Dahlen, Tomas; de Mello, Duilia F.; DeGroot, Laura; Donley, Jennifer; Guedes, Javiera; Grogin, Norman; Hathi, Nimish; Hilton, Matt; Hollon, Brett; Koekemoer, Anton; Liu, Nick; Lucas, Ray A.; Martig, Marie; McGrath, Elizabeth; McPartland, Conor; Mobasher, Bahram; Morlock, Alice; O'Leary, Erin; Peth, Mike; Pforr, Janine; Pillepich, Annalisa; Rosario, David; Soto, Emmaris; Straughn, Amber; Telford, Olivia; Sunnquist, Ben; Trump, Jonathan; Weiner, Benjamin; Wuyts, Stijn; Inami, Hanae; Kassin, Susan; Lani, Caterina; Poole, Gregory B.; Rizer, Zachary

    2015-11-01

    We have undertaken an ambitious program to visually classify all galaxies in the five CANDELS fields down to H < 24.5 involving the dedicated efforts of over 65 individual classifiers. Once completed, we expect to have detailed morphological classifications for over 50,000 galaxies spanning 0 < z < 4 over all the fields, with classifications from 3 to 5 independent classifiers for each galaxy. Here, we present our detailed visual classification scheme, which was designed to cover a wide range of CANDELS science goals. This scheme includes the basic Hubble sequence types, but also includes a detailed look at mergers and interactions, the clumpiness of galaxies, k-corrections, and a variety of other structural properties. In this paper, we focus on the first field to be completed—GOODS-S, which has been classified at various depths. The wide area coverage spanning the full field (wide+deep+ERS) includes 7634 galaxies that have been classified by at least three different people. In the deep area of the field, 2534 galaxies have been classified by at least five different people at three different depths. With this paper, we release to the public all of the visual classifications in GOODS-S along with the Perl/Tk GUI that we developed to classify galaxies. We present our initial results here, including an analysis of our internal consistency and comparisons among multiple classifiers as well as a comparison to the Sérsic index. We find that the level of agreement among classifiers is quite good (>70% across the full magnitude range) and depends on both the galaxy magnitude and the galaxy type, with disks showing the highest level of agreement (>50%) and irregulars the lowest (<10%). A comparison of our classifications with the Sérsic index and rest-frame colors shows a clear separation between disk and spheroid populations. 
Finally, we explore morphological k-corrections between the V-band and H-band observations and find that a small fraction (84 galaxies in total) are classified as being very different between these two bands. These galaxies typically have very clumpy and extended morphology or are very faint in the V-band.
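
    The inter-classifier agreement quoted above can be computed per galaxy as the fraction of classifiers matching the plurality type. A minimal sketch; the helper name and example labels are ours:

```python
from collections import Counter

def plurality_agreement(labels):
    """Fraction of classifiers that agree with the most common label for one galaxy."""
    top_count = Counter(labels).most_common(1)[0][1]
    return top_count / len(labels)

# e.g. three of five independent classifiers call a galaxy a disk
frac = plurality_agreement(["disk", "disk", "disk", "spheroid", "irregular"])
```

    Averaging this per-galaxy fraction within magnitude or type bins gives the kind of agreement-versus-magnitude and agreement-versus-type breakdowns reported for GOODS-S.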

  5. 10 CFR 1045.9 - RD classification performance evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Program Management of the Restricted Data and Formerly Restricted Data Classification System § 1045.9 RD classification performance evaluation. (a) Heads of agencies shall ensure that RD management officials and those...

  6. Does the Spine Surgeon’s Experience Affect Fracture Classification, Assessment of Stability, and Treatment Plan in Thoracolumbar Injuries?

    PubMed Central

    Kanna, Rishi Mugesh; Schroeder, Gregory D.; Oner, Frank Cumhur; Vialle, Luiz; Chapman, Jens; Dvorak, Marcel; Fehlings, Michael; Shetty, Ajoy Prasad; Schnake, Klaus; Kandziora, Frank; Vaccaro, Alexander R.

    2017-01-01

    Study Design: Prospective survey-based study. Objectives: The AO Spine thoracolumbar injury classification has been shown to have good reproducibility among clinicians. However, the influence of spine surgeons’ clinical experience on fracture classification, stability assessment, and decisions on management based on this classification has not been studied. Furthermore, the usefulness of varying imaging modalities, including radiographs, computed tomography (CT) and magnetic resonance imaging (MRI), in the decision process was also studied. Methods: Forty-one spine surgeons from different regions, acquainted with the AOSpine classification system, were provided with 30 thoracolumbar fractures in a 3-step assessment: first radiographs, followed by CT and MRI. Surgeons classified the fracture, evaluated stability, chose management, and identified reasons for any changes. The surgeons were divided into 2 groups based on years of clinical experience: <10 years (n = 12) and >10 years (n = 29). Results: There were no significant differences between the 2 groups in correctly classifying A1, B2, and C type fractures. Surgeons with less experience had more correct diagnoses in classifying A3 (47.2% vs 38.5% in step 1, 73.6% vs 60.3% in step 2 and 77.8% vs 65.5% in step 3), A4 (16.7% vs 24.1% in step 1, 72.9% vs 57.8% in step 2 and 70.8% vs 56.0% in step 3) and B1 injuries (31.9% vs 20.7% in step 1, 41.7% vs 36.8% in step 2 and 38.9% vs 33.9% in step 3). In the assessment of fracture stability and decisions on treatment, the less and more experienced surgeons performed equally. The selection of a particular treatment plan varied in all subtypes except A1 and C type injuries. Conclusion: Surgeons’ experience did not significantly affect overall fracture classification, assessment of stability, or treatment planning. Surgeons with less experience had a higher percentage of correct classifications in A3 and A4 injuries.
Despite variations between them in classification, the assessment of overall stability and management decisions were similar between the 2 groups. PMID:28815158

  7. Does the Spine Surgeon's Experience Affect Fracture Classification, Assessment of Stability, and Treatment Plan in Thoracolumbar Injuries?

    PubMed

    Rajasekaran, Shanmuganathan; Kanna, Rishi Mugesh; Schroeder, Gregory D; Oner, Frank Cumhur; Vialle, Luiz; Chapman, Jens; Dvorak, Marcel; Fehlings, Michael; Shetty, Ajoy Prasad; Schnake, Klaus; Kandziora, Frank; Vaccaro, Alexander R

    2017-06-01

    Prospective survey-based study. The AO Spine thoracolumbar injury classification has been shown to have good reproducibility among clinicians. However, the influence of spine surgeons' clinical experience on fracture classification, stability assessment, and decisions on management based on this classification has not been studied. Furthermore, the usefulness of varying imaging modalities, including radiographs, computed tomography (CT) and magnetic resonance imaging (MRI), in the decision process was also studied. Forty-one spine surgeons from different regions, acquainted with the AOSpine classification system, were provided with 30 thoracolumbar fractures in a 3-step assessment: first radiographs, followed by CT and MRI. Surgeons classified the fracture, evaluated stability, chose management, and identified reasons for any changes. The surgeons were divided into 2 groups based on years of clinical experience: <10 years (n = 12) and >10 years (n = 29). There were no significant differences between the 2 groups in correctly classifying A1, B2, and C type fractures. Surgeons with less experience had more correct diagnoses in classifying A3 (47.2% vs 38.5% in step 1, 73.6% vs 60.3% in step 2 and 77.8% vs 65.5% in step 3), A4 (16.7% vs 24.1% in step 1, 72.9% vs 57.8% in step 2 and 70.8% vs 56.0% in step 3) and B1 injuries (31.9% vs 20.7% in step 1, 41.7% vs 36.8% in step 2 and 38.9% vs 33.9% in step 3). In the assessment of fracture stability and decisions on treatment, the less and more experienced surgeons performed equally. The selection of a particular treatment plan varied in all subtypes except A1 and C type injuries. Surgeons' experience did not significantly affect overall fracture classification, assessment of stability, or treatment planning. Surgeons with less experience had a higher percentage of correct classifications in A3 and A4 injuries.
Despite variations between them in classification, the assessment of overall stability and management decisions were similar between the 2 groups.

  8. The Future of Classification in Wheelchair Sports; Can Data Science and Technological Advancement Offer an Alternative Point of View?

    PubMed

    van der Slikke, Rienk M A; Bregman, Daan J J; Berger, Monique A M; de Witte, Annemarie M H; Veeger, Dirk-Jan H E J

    2017-11-01

    Classification is a defining factor for competition in wheelchair sports, but it is a delicate and time-consuming process with often questionable validity. New inertial-sensor-based measurement methods, applied in match play and field tests, allow for more precise and objective estimates of the impairment effect on wheelchair mobility performance. We evaluated whether these measures could offer an alternative point of view for classification. Six standard wheelchair mobility performance outcomes of different classification groups were measured in match play (n=29), as well as best possible performance in a field test (n=47). Match results show a clear relationship between classification and performance level, with increased performance outcomes in each adjacent higher classification group. Three outcomes differed significantly between the low- and mid-class groups, and one between the mid- and high-class groups. In best performance (field test), a split appears between the low- and mid-class groups (5 out of 6 outcomes differed significantly), but there is hardly any difference between the mid- and high-class groups. This observed split was confirmed by cluster analysis, which revealed the existence of only two performance-based clusters. The use of inertial sensor technology to obtain objective measures of wheelchair mobility performance, combined with a standardized field test, brought alternative views to evidence-based classification. The results of this approach provided arguments for a reduced number of classes in wheelchair basketball. Future use of inertial sensors in match play and in field testing could enhance evaluation of classification guidelines as well as individual athlete performance.
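
    The two-cluster structure reported by the cluster analysis can be illustrated with k-means on synthetic outcome data. The group means, spreads, and the 20/27 split of the 47 field-test athletes below are all invented for illustration; only the six-outcome, two-cluster setup follows the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for six wheelchair mobility outcomes per athlete:
# a low-class group and a merged mid/high-class group, as the study found.
rng = np.random.default_rng(0)
low = rng.normal(loc=0.0, scale=0.3, size=(20, 6))
mid_high = rng.normal(loc=2.0, scale=0.3, size=(27, 6))
outcomes = np.vstack([low, mid_high])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(outcomes)
```

    When only two clusters emerge from data like this, the mid- and high-class athletes fall into a single cluster, which is the basis of the paper's argument for fewer classes.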

  9. Comparability of river quality assessment using macrophytes: a multi-step procedure to overcome biogeographical differences.

    PubMed

    Aguiar, F C; Segurado, P; Urbanič, G; Cambra, J; Chauvin, C; Ciadamidaro, S; Dörflinger, G; Ferreira, J; Germ, M; Manolaki, P; Minciardi, M R; Munné, A; Papastergiadou, E; Ferreira, M T

    2014-04-01

This paper presents a new methodological approach to the problem of intercalibrating national river quality assessment methods when a common metric is lacking and most countries share the same Water Framework Directive (WFD) assessment method. We provide recommendations for similar future work concerning the assessment of ecological accuracy and highlight the importance of good common ground for making the scientific work beyond the intercalibration feasible. The approach presented here was applied to the highly seasonal rivers of the Mediterranean Geographical Intercalibration Group for the Biological Quality Element Macrophytes. The Mediterranean group of river macrophytes involved seven countries and two assessment methods with similar data acquisition and assessment concepts: the Macrophyte Biological Index for Rivers (IBMR) for Cyprus, France, Greece, Italy, Portugal and Spain, and the River Macrophyte Index (RMI) for Slovenia. The database included 318 sites, of which 78 were considered benchmarks. Boundary harmonization was performed for the common WFD assessment method (all countries except Slovenia) using the median of the Good/Moderate and High/Good boundaries of all countries. Then, whenever possible, the Slovenian method (RMI) was computed for the entire database. The IBMR was also computed for the Slovenian sites and regressed against RMI in order to check the relatedness of the methods (R²=0.45; p<0.00001) and to convert RMI boundaries onto the IBMR scale. The boundary bias of RMI was computed by direct comparison of classifications and of the median boundary values following boundary harmonization. The average absolute class difference after harmonization is 26%, and the percentage of classifications differing by half a quality class is also small (16.4%). This multi-step approach to the intercalibration was endorsed by the WFD Regulatory Committee. © 2013 Elsevier B.V. All rights reserved.

  10. Influence of birthweight on childhood balance: Evidence from two British birth cohorts.

    PubMed

    Okuda, Paola Matiko Martins; Swardfager, Walter; Ploubidis, George B; Pangelinan, Melissa; Cogo-Moreira, Hugo

    2018-01-26

Birthweight is an important predictor of various fundamental aspects of childhood health and development. The aim was to examine the impact of birthweight on childhood balance performance classification and to verify whether the effect is replicable and consistent in different populations. Prospective birth cohort study. To describe heterogeneity in balance skills, latent class analyses were conducted separately with data from the 1958 National Child Development Study (NCDS; n = 12,778) and the 1970 British Cohort Study (BCS; n = 12,115), using four balance tasks for the NCDS and five balance tasks for the BCS. Birthweight was assessed as a predictor of balance skills. In both cohorts, two latent classes (good and poor balance skills) were identified, and higher birthweight was associated with a higher likelihood of having good balance skills. Boys were less likely than girls to have good balance. The results establish the reproducibility and consistency of the effect of birthweight on balance skills and point to early intervention for individuals with lower birthweight to mitigate the impact of motor impairment. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. Performance of International Classification of Diseases-based injury severity measures used to predict in-hospital mortality and intensive care admission among traumatic brain-injured patients.

    PubMed

    Gagné, Mathieu; Moore, Lynne; Sirois, Marie-Josée; Simard, Marc; Beaudoin, Claudia; Kuimi, Brice Lionel Batomen

    2017-02-01

    The International Classification of Diseases (ICD) is the main classification system used for population-based traumatic brain injury (TBI) surveillance activities but does not contain direct information on injury severity. International Classification of Diseases-based injury severity measures can be empirically derived or mapped to the Abbreviated Injury Scale, but no single approach has been formally recommended for TBI. The aim of this study was to compare the accuracy of different ICD-based injury severity measures for predicting in-hospital mortality and intensive care unit (ICU) admission in TBI patients. We conducted a population-based retrospective cohort study. We identified all patients 16 years or older with a TBI diagnosis who received acute care between April 1, 2006, and March 31, 2013, from the Quebec Hospital Discharge Database. The accuracy of five ICD-based injury severity measures for predicting mortality and ICU admission was compared using measures of discrimination (area under the receiver operating characteristic curve [AUC]) and calibration (calibration plot and the Hosmer-Lemeshow goodness-of-fit statistic). Of 31,087 traumatic brain-injured patients in the study population, 9.0% died in hospital, and 34.4% were admitted to the ICU. Among ICD-based severity measures that were assessed, the multiplied derivative of ICD-based Injury Severity Score (ICISS-Multiplicative) demonstrated the best discriminative ability for predicting in-hospital mortality (AUC, 0.858; 95% confidence interval, 0.852-0.864) and ICU admissions (AUC, 0.813; 95% confidence interval, 0.808-0.818). Calibration assessments showed good agreement between observed and predicted in-hospital mortality for ICISS measures. All severity measures presented high agreement between observed and expected probabilities of ICU admission for all deciles of risk. 
The ICD-based injury severity measures can be used to accurately predict in-hospital mortality and ICU admission in TBI patients. The ICISS-Multiplicative generally outperformed other ICD-based injury severity measures and should be preferred to control for differences in baseline characteristics between TBI patients in surveillance activities or injury research when only ICD codes are available. Prognostic study, level III.
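
The C-statistics reported above are areas under the ROC curve. As an illustrative sketch only (not the study's code), the AUC of a severity measure can be computed from predicted risks and observed outcomes via the rank-sum (Mann-Whitney) formulation:

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) formulation.

    scores: predicted risk per patient; labels: 1 = event (e.g. died), 0 = no event.
    Tied scores receive the average rank.
    """
    n = len(scores)
    order = sorted(range(n), key=lambda i: scores[i])
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    pos = [r for r, y in zip(ranks, labels) if y == 1]
    n_pos, n_neg = len(pos), n - len(pos)
    return (sum(pos) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# perfect separation of events from non-events gives AUC = 1.0
print(auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # → 1.0
```

An AUC of 0.5 corresponds to chance-level discrimination; values of 0.8 and above, as reported for ICISS-Multiplicative, indicate strong discrimination.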

  12. [Surgical treatment of gynecomastia: an algorithm].

    PubMed

    Wolter, A; Scholz, T; Diedrichson, J; Liebau, J

    2013-04-01

    Gynecomastia is a persistent benign uni- or bilateral enlargement of the male breast ranging from small to excessive findings with marked skin redundancy. In this paper we introduce an algorithm to facilitate the selection of the appropriate surgical technique according to the presented morphological aspects. The records of 118 patients (217 breasts) with gynecomastia from 01/2009 to 08/2012 were retrospectively reviewed. The authors conducted three different surgical techniques depending on four severity grades. The outcome parameters complication rate, patient satisfaction with the aesthetic result, nipple sensitivity and the need to re-operate were observed and related to the employed technique. In 167 (77%) breasts with moderate breast enlargement without skin redundancy (Grade I-IIa by Simon's classification) a subcutaneous semicircular periareolar mastectomy was performed in combination with water-jet assisted liposuction. In 40 (18%) breasts with skin redundancy (Grade IIb) a circumferential mastopexy was performed additionally. An inferior pedicled mammaplasty was used in 10 (5%) severe cases (Grade III). Complication rate was 4.1%. Surgical corrections were necessary in 17 breasts (7.8%). The patient survey revealed a high satisfaction level: 88% of the patients rated the aesthetic results as "very good" or "good", nipple sensitivity was rated as "very good" or "good" by 83%. Surgical treatment of gynecomastia should ensure minimal scarring while respecting the aesthetic unit. The selection of the appropriate surgical method depends on the severity grade, the presence of skin redundancy and the volume of the male breast glandular tissue. The presented algorithm rarely leads to complications, is simple to perform and shows a high satisfaction rate and a preservation of the nipple sensitivity. © Georg Thieme Verlag KG Stuttgart · New York.
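
The selection logic described above can be sketched as a small decision function. This is purely illustrative: the function name and grade encoding are invented here, and only the grade-to-technique mapping is taken from the abstract.

```python
def select_technique(grade, skin_redundancy):
    """Hypothetical sketch of the surgical algorithm described above.

    grade: Simon classification as a string ("I", "IIa", "IIb", "III");
    skin_redundancy: whether marked skin redundancy is present.
    """
    if grade == "III":
        # severe cases: inferior pedicled mammaplasty
        return "inferior pedicled mammaplasty"
    technique = ("subcutaneous semicircular periareolar mastectomy "
                 "with water-jet assisted liposuction")
    if grade == "IIb" or skin_redundancy:
        # skin redundancy: add a circumferential mastopexy
        technique += " plus circumferential mastopexy"
    return technique
```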

  13. Attribute Weighting Based K-Nearest Neighbor Using Gain Ratio

    NASA Astrophysics Data System (ADS)

    Nababan, A. A.; Sitompul, O. S.; Tulus

    2018-04-01

K-Nearest Neighbor (KNN) is a good classifier, but several studies have found its accuracy to be lower than that of other methods. One cause of the low accuracy is that every attribute has the same effect on the classification process, so less relevant attributes lead to misclassification of new data. In this research, we propose attribute-weighted K-Nearest Neighbor using the Gain Ratio, which measures the correlation between each attribute and the class and serves as the basis for weighting each attribute of the dataset. The resulting accuracy is compared with that of the original KNN method using 10-fold cross-validation on several datasets from the UCI Machine Learning Repository and the KEEL Dataset Repository: abalone, glass identification, haberman, hayes-roth and water quality status. Based on the test results, the proposed method increased the classification accuracy of KNN, with the largest improvement, 12.73%, obtained on the hayes-roth dataset and the smallest, 0.07%, on the abalone dataset. On average, accuracy across all datasets increased by 5.33%.
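
A minimal sketch of the idea, assuming categorical attributes for the gain-ratio step and numeric attributes for the weighted distance; this illustrates the general technique, not the authors' implementation:

```python
from collections import Counter
from math import log2, sqrt

def entropy(values):
    n = len(values)
    return -sum(c / n * log2(c / n) for c in Counter(values).values())

def gain_ratio(column, labels):
    """Information gain of `column` w.r.t. `labels`, normalised by the
    column's own split information (Quinlan's gain ratio)."""
    n = len(labels)
    split_info = entropy(column)
    if split_info == 0:
        return 0.0
    cond = 0.0
    for v in set(column):
        subset = [y for x, y in zip(column, labels) if x == v]
        cond += len(subset) / n * entropy(subset)
    return (entropy(labels) - cond) / split_info

def weighted_knn(train_X, train_y, x, weights, k=3):
    """Majority vote among the k nearest neighbours under a
    gain-ratio-weighted Euclidean distance."""
    dist = lambda a: sqrt(sum(w * (ai - xi) ** 2
                              for w, ai, xi in zip(weights, a, x)))
    nearest = sorted(zip(train_X, train_y), key=lambda t: dist(t[0]))[:k]
    return Counter(y for _, y in nearest).most_common(1)[0][0]
```

A perfectly class-aligned attribute gets gain ratio 1.0 and dominates the distance; an irrelevant attribute gets weight 0 and no longer causes misclassification.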

  14. Adaptive neuro-fuzzy inference systems for semi-automatic discrimination between seismic events: a study in Tehran region

    NASA Astrophysics Data System (ADS)

    Vasheghani Farahani, Jamileh; Zare, Mehdi; Lucas, Caro

    2012-04-01

This article presents an adaptive neuro-fuzzy inference system (ANFIS) for the classification of low-magnitude seismic events reported in Iran by the network of the Tehran Disaster Mitigation and Management Organization (TDMMO). ANFIS classifiers were used to detect seismic events using six inputs that defined the events. Neuro-fuzzy coding was applied using the six extracted features as ANFIS inputs. Two types of events were defined: weak earthquakes and mining blasts. The data comprised 748 events (6,289 signals) ranging from magnitude 1.1 to 4.6, recorded at 13 seismic stations between 2004 and 2009; the database includes about 223 earthquakes with M ≤ 2.2. Data sets from the south, east, and southeast of the city of Tehran were used to evaluate the best short-period seismic discriminants. Features such as origin time of the event, source-to-station distance, latitude and longitude of the epicenter, magnitude, and spectral content (fc of the Pg wave) were used as inputs, increasing the rate of correct classification and decreasing the confusion rate between weak earthquakes and quarry blasts. The performance of the ANFIS model was evaluated for training and classification accuracy. The results confirmed that the proposed ANFIS model has good potential for discriminating seismic events.

  15. Prediction of pathologic staging with magnetic resonance imaging after preoperative chemoradiotherapy in rectal cancer: pooled analysis of KROG 10-01 and 11-02.

    PubMed

    Lee, Jong Hoon; Jang, Hong Seok; Kim, Jun-Gi; Lee, Myung Ah; Kim, Dae Yong; Kim, Tae Hyun; Oh, Jae Hwan; Park, Sung Chan; Kim, Sun Young; Baek, Ji Yeon; Park, Hee Chul; Kim, Hee Cheol; Nam, Taek-Keun; Chie, Eui Kyu; Jung, Ji-Han; Oh, Seong Taek

    2014-10-01

The reported overall accuracy of MRI in predicting the pathologic stage of nonirradiated rectal cancer is high. However, the role of MRI in restaging rectal tumors after neoadjuvant chemoradiotherapy (CRT) is contentious. Thus, we evaluated the accuracy of restaging magnetic resonance imaging (MRI) for rectal cancer patients who receive preoperative CRT. We analyzed 150 patients with locally advanced rectal cancer (T3-4N0-2) who had received preoperative CRT. Pre-CRT MRI was performed for local tumor and nodal staging. All patients underwent restaging MRI followed by total mesorectal excision after the end of radiotherapy. The primary endpoint of the present study was to estimate the accuracy of post-CRT MRI as compared with pathologic staging. Pathologic T classification matched the post-CRT MRI findings in 97 (64.7%) of 150 patients; 36 (24.0%) were overstaged in T classification, and the degree of concordance was moderate (k=0.33, p<0.01). Pathologic N classification matched the post-CRT MRI findings in 85 (56.6%) of 150 patients; 54 (36.0%) were overstaged in N classification. Twenty-six patients achieved downstaging (ycT0-2N0) on restaging MRI after CRT; 23 (88.5%) of these were confirmed on pathologic staging, and the degree of concordance was good (k=0.72, p<0.01). Restaging MRI has low accuracy for prediction of the pathologic T and N classifications in rectal cancer patients who received preoperative CRT. The diagnostic accuracy of restaging MRI is relatively high in rectal cancer patients who achieve clinical downstaging after CRT. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  16. CANDELS Visual Classifications: Scheme, Data Release, and First Results

    NASA Technical Reports Server (NTRS)

    Kartaltepe, Jeyhan S.; Mozena, Mark; Kocevski, Dale; McIntosh, Daniel H.; Lotz, Jennifer; Bell, Eric F.; Faber, Sandy; Ferguson, Henry; Koo, David; Bassett, Robert

    2014-01-01

    We have undertaken an ambitious program to visually classify all galaxies in the five CANDELS fields down to H <24.5 involving the dedicated efforts of 65 individual classifiers. Once completed, we expect to have detailed morphological classifications for over 50,000 galaxies spanning 0 < z < 4 over all the fields. Here, we present our detailed visual classification scheme, which was designed to cover a wide range of CANDELS science goals. This scheme includes the basic Hubble sequence types, but also includes a detailed look at mergers and interactions, the clumpiness of galaxies, k-corrections, and a variety of other structural properties. In this paper, we focus on the first field to be completed - GOODS-S, which has been classified at various depths. The wide area coverage spanning the full field (wide+deep+ERS) includes 7634 galaxies that have been classified by at least three different people. In the deep area of the field, 2534 galaxies have been classified by at least five different people at three different depths. With this paper, we release to the public all of the visual classifications in GOODS-S along with the Perl/Tk GUI that we developed to classify galaxies. We present our initial results here, including an analysis of our internal consistency and comparisons among multiple classifiers as well as a comparison to the Sersic index. We find that the level of agreement among classifiers is quite good and depends on both the galaxy magnitude and the galaxy type, with disks showing the highest level of agreement and irregulars the lowest. A comparison of our classifications with the Sersic index and restframe colors shows a clear separation between disk and spheroid populations. Finally, we explore morphological k-corrections between the V-band and H-band observations and find that a small fraction (84 galaxies in total) are classified as being very different between these two bands. 
These galaxies typically have very clumpy and extended morphology or are very faint in the V-band.

  17. A comparative evaluation of piezoelectric sensors for acoustic emission-based impact location estimation and damage classification in composite structures

    NASA Astrophysics Data System (ADS)

    Uprety, Bibhisha; Kim, Sungwon; Mathews, V. John; Adams, Daniel O.

    2015-03-01

Acoustic Emission (AE) based Structural Health Monitoring (SHM) is of great interest for detecting impact damage in composite structures. Within the aerospace industry, the need to detect and locate these events, even when no visible damage is present, is important from both the maintenance and design perspectives. In this investigation, four commercially available piezoelectric sensors were evaluated for use in an AE-based SHM system. Of particular interest was comparing the acoustic response of the candidate piezoelectric sensors for impact location estimation as well as for classification of the damage resulting from impact in fiber-reinforced composite structures. Sensor assessment was based on response signal characterization and performance in active testing at 300 kHz and in steel-ball drop testing using both aluminum and carbon/epoxy composite plates. Wave mode velocities calculated from the measured arrival times were found to be in good agreement with predictions obtained using both the Disperse code and finite element analysis. Differences in the relative strength of the received wave modes, the overall signal strengths, and the signal-to-noise ratios were observed through both active testing and passive steel-ball drop testing. Further comparative work focuses on assessing AE sensor performance for use in impact location estimation algorithms as well as for detecting and classifying impact-induced damage in composite structures.

  18. Bad-good constraints on a polarity correspondence account for the spatial-numerical association of response codes (SNARC) and markedness association of response codes (MARC) effects.

    PubMed

    Leth-Steensen, Craig; Citta, Richie

    2016-01-01

    Performance in numerical classification tasks involving either parity or magnitude judgements is quicker when small numbers are mapped onto a left-sided response and large numbers onto a right-sided response than for the opposite mapping (i.e., the spatial-numerical association of response codes or SNARC effect). Recent research by Gevers et al. [Gevers, W., Santens, S., Dhooge, E., Chen, Q., Van den Bossche, L., Fias, W., & Verguts, T. (2010). Verbal-spatial and visuospatial coding of number-space interactions. Journal of Experimental Psychology: General, 139, 180-190] suggests that this effect also arises for vocal "left" and "right" responding, indicating that verbal-spatial coding has a role to play in determining it. Another presumably verbal-based, spatial-numerical mapping phenomenon is the linguistic markedness association of response codes (MARC) effect whereby responding in parity tasks is quicker when odd numbers are mapped onto left-sided responses and even numbers onto right-sided responses. A recent account of both the SNARC and MARC effects is based on the polarity correspondence principle [Proctor, R. W., & Cho, Y. S. (2006). Polarity correspondence: A general principle for performance of speeded binary classification tasks. Psychological Bulletin, 132, 416-442]. This account assumes that stimulus and response alternatives are coded along any number of dimensions in terms of - and + polarities with quicker responding when the polarity codes for the stimulus and the response correspond. In the present study, even-odd parity judgements were made using either "left" and "right" or "bad" and "good" vocal responses. Results indicated that a SNARC effect was indeed present for the former type of vocal responding, providing further evidence for the sufficiency of the verbal-spatial coding account for this effect. 
However, the decided lack of an analogous SNARC-like effect in the results for the latter type of vocal responding provides an important constraint on the presumed generality of the polarity correspondence account. On the other hand, the presence of robust MARC effects for "bad" and "good" but not "left" and "right" vocal responses is consistent with the view that such effects are due to conceptual associations between semantic codes for odd-even and bad-good (but not necessarily left-right).

  19. Remote sensing based detection of forested wetlands: An evaluation of LiDAR, aerial imagery, and their data fusion

    NASA Astrophysics Data System (ADS)

    Suiter, Ashley Elizabeth

    Multi-spectral imagery provides a robust and low-cost dataset for assessing wetland extent and quality over broad regions and is frequently used for wetland inventories. However, in forested wetlands, hydrology is obscured by the tree canopy, making it difficult to detect with multi-spectral imagery alone. Because of this, classification of forested wetlands often includes greater errors than that of other wetland types. Elevation and terrain derivatives have been shown to be useful for modelling wetland hydrology, but few studies have addressed the use of LiDAR intensity data for detecting hydrology in forested wetlands. Due to the tendency of the LiDAR signal to be attenuated by water, this research proposed the fusion of LiDAR intensity data with LiDAR elevation, terrain data, and aerial imagery for the detection of forested wetland hydrology. We examined the utility of LiDAR intensity data and determined whether the fusion of LiDAR-derived data with multispectral imagery increased the accuracy of forested wetland classification compared with a classification performed with multi-spectral imagery alone. Four classifications were performed: Classification A -- All Imagery, Classification B -- All LiDAR, Classification C -- LiDAR without Intensity, and Classification D -- Fusion of All Data. These classifications were performed using random forest, and each resulted in a 3-foot resolution thematic raster of forested upland and forested wetland locations in Vermilion County, Illinois. The accuracies of the classifications were compared using the Kappa Coefficient of Agreement. Importance statistics produced within the random forest classifier were evaluated in order to understand the contribution of individual datasets. Classification D, which used the fusion of LiDAR and multi-spectral imagery as input variables, had moderate to strong agreement between the reference data and the classification results.
It was found that Classification B, performed using all the LiDAR data and its derivatives (intensity, elevation, slope, aspect, curvatures, and Topographic Wetness Index), was the most accurate classification, with a Kappa of 78.04%, indicating moderate to strong agreement. However, Classification C, performed with the LiDAR derivatives but without intensity data, had less agreement than would be expected by chance, indicating that the intensity data contributed significantly to the accuracy of Classification B.
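
The Kappa Coefficient of Agreement used to compare the classifications can be sketched as follows (an illustrative implementation of Cohen's kappa, not the study's code):

```python
def cohens_kappa(truth, pred):
    """Cohen's kappa: agreement between two categorical labelings
    (e.g. reference data vs. classified raster cells), corrected for
    the agreement expected by chance."""
    n = len(truth)
    labels = set(truth) | set(pred)
    observed = sum(t == p for t, p in zip(truth, pred)) / n
    expected = sum(
        (truth.count(c) / n) * (pred.count(c) / n) for c in labels
    )
    return (observed - expected) / (1 - expected)
```

Kappa equals 1 for perfect agreement and 0 for chance-level agreement, which is why a classification can score below zero when it agrees less often than chance.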

  20. Automatic detection of sleep macrostructure based on a sensorized T-shirt.

    PubMed

    Bianchi, Anna M; Mendez, Martin O

    2010-01-01

In the present work we apply a fully automatic procedure to the analysis of signals coming from a sensorized T-shirt, worn during the night, for sleep evaluation. The goodness and reliability of the signals recorded through the T-shirt were tested previously, while the algorithms employed for feature extraction and sleep classification were previously developed on standard ECG recordings, and the obtained classification was compared to the standard clinical practice based on polysomnography (PSG). In the present work we combined the T-shirt recordings with automatic classification and obtained reliable sleep profiles, i.e. the classification of sleep into WAKE, REM (rapid eye movement) and NREM stages, based on heart rate variability (HRV), respiration and movement signals.

  1. Understanding overlay signatures using machine learning on non-lithography context information

    NASA Astrophysics Data System (ADS)

    Overcast, Marshall; Mellegaard, Corey; Daniel, David; Habets, Boris; Erley, Georg; Guhlemann, Steffen; Thrun, Xaver; Buhl, Stefan; Tottewitz, Steven

    2018-03-01

    Overlay errors between two layers can be caused by non-lithography processes. While these errors can be compensated by the run-to-run system, such process and tool signatures are not always stable. In order to monitor the impact of non-lithography context on overlay at regular intervals, a systematic approach is needed. Using various machine learning techniques, significant context parameters that relate to deviating overlay signatures are automatically identified. Once the most influential context parameters are found, a run-to-run simulation is performed to see how much improvement can be obtained. The resulting analysis shows good potential for reducing the influence of hidden context parameters on overlay performance. Non-lithographic contexts are significant contributors, and their automatic detection and classification will enable the overlay roadmap, given the corresponding control capabilities.

  2. Feature weight estimation for gene selection: a local hyperlinear learning approach

    PubMed Central

    2014-01-01

Background Modeling high-dimensional data involving thousands of variables is particularly important for gene expression profiling experiments; nevertheless, it remains a challenging task. One of the challenges is to implement an effective method for selecting a small set of relevant genes buried in high-dimensional irrelevant noise. RELIEF is a popular and widely used approach for feature selection owing to its low computational cost and high accuracy. However, RELIEF-based methods suffer from instability, especially in the presence of noisy and/or high-dimensional outliers. Results We propose an innovative feature weighting algorithm, called LHR, to select informative genes from highly noisy data. LHR is based on RELIEF for feature weighting using classical margin maximization. The key idea of LHR is to estimate the feature weights through local approximation rather than the global measurement typically used in existing methods. The weights obtained by our method are very robust to the degradation caused by noisy features, even in data of vast dimensions. To demonstrate the performance of our method, extensive classification experiments were carried out on both synthetic and real microarray benchmark datasets by combining the proposed technique with standard classifiers, including the support vector machine (SVM), k-nearest neighbor (KNN), hyperplane k-nearest neighbor (HKNN), linear discriminant analysis (LDA) and naive Bayes (NB). Conclusion Experiments on both synthetic and real-world datasets demonstrate the superior performance of the proposed feature selection method combined with supervised learning in three aspects: 1) high classification accuracy, 2) excellent robustness to noise and 3) good stability across various classification algorithms. PMID:24625071
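
A minimal sketch of the basic RELIEF weighting that LHR builds on (nearest-hit/nearest-miss updates for two-class numeric data); LHR's local hyperlinear approximation itself is not reproduced here:

```python
def relief_weights(X, y):
    """Basic RELIEF for two-class data: reward features that separate an
    instance from its nearest miss (other class) and penalise features
    that separate it from its nearest hit (same class)."""
    m, d = len(X), len(X[0])
    w = [0.0] * d
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    for i in range(m):
        hits = [j for j in range(m) if j != i and y[j] == y[i]]
        misses = [j for j in range(m) if y[j] != y[i]]
        h = min(hits, key=lambda j: dist(X[i], X[j]))
        mi = min(misses, key=lambda j: dist(X[i], X[j]))
        for f in range(d):
            w[f] += abs(X[i][f] - X[mi][f]) - abs(X[i][f] - X[h][f])
    return [wf / m for wf in w]
```

Informative features accumulate positive weight, noisy ones negative weight; the instability noted above arises because the update depends on a single nearest hit and miss per instance.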

  3. The neuropsychology of male adults with high-functioning autism or asperger syndrome.

    PubMed

    Wilson, C Ellie; Happé, Francesca; Wheelwright, Sally J; Ecker, Christine; Lombardo, Michael V; Johnston, Patrick; Daly, Eileen; Murphy, Clodagh M; Spain, Debbie; Lai, Meng-Chuan; Chakrabarti, Bhismadev; Sauter, Disa A; Baron-Cohen, Simon; Murphy, Declan G M

    2014-10-01

    Autism Spectrum Disorder (ASD) is diagnosed on the basis of behavioral symptoms, but cognitive abilities may also be useful in characterizing individuals with ASD. One hundred seventy-eight high-functioning male adults, half with ASD and half without, completed tasks assessing IQ, a broad range of cognitive skills, and autistic and comorbid symptomatology. The aims of the study were, first, to determine whether significant differences existed between cases and controls on cognitive tasks, and whether cognitive profiles, derived using a multivariate classification method with data from multiple cognitive tasks, could distinguish between the two groups. Second, to establish whether cognitive skill level was correlated with degree of autistic symptom severity, and third, whether cognitive skill level was correlated with degree of comorbid psychopathology. Fourth, cognitive characteristics of individuals with Asperger Syndrome (AS) and high-functioning autism (HFA) were compared. After controlling for IQ, ASD and control groups scored significantly differently on tasks of social cognition, motor performance, and executive function (P's < 0.05). To investigate cognitive profiles, 12 variables were entered into a support vector machine (SVM), which achieved good classification accuracy (81%) at a level significantly better than chance (P < 0.0001). After correcting for multiple correlations, there were no significant associations between cognitive performance and severity of either autistic or comorbid symptomatology. There were no significant differences between AS and HFA groups on the cognitive tasks. Cognitive classification models could be a useful aid to the diagnostic process when used in conjunction with other data sources-including clinical history. © 2014 International Society for Autism Research, Wiley Periodicals, Inc.

  4. Radio Galaxy Zoo: compact and extended radio source classification with deep learning

    NASA Astrophysics Data System (ADS)

    Lukic, V.; Brüggen, M.; Banfield, J. K.; Wong, O. I.; Rudnick, L.; Norris, R. P.; Simmons, B.

    2018-05-01

    Machine learning techniques have been increasingly useful in astronomical applications over the last few years, for example in the morphological classification of galaxies. Convolutional neural networks have proven to be highly effective in classifying objects in image data. In the context of radio-interferometric imaging in astronomy, we looked for ways to identify multiple components of individual sources. To this effect, we design a convolutional neural network to differentiate between different morphology classes using sources from the Radio Galaxy Zoo (RGZ) citizen science project. In this first step, we focus on exploring the factors that affect the performance of such neural networks, such as the amount of training data, number and nature of layers, and the hyperparameters. We begin with a simple experiment in which we only differentiate between two extreme morphologies, using compact and multiple-component extended sources. We found that a three-convolutional layer architecture yielded very good results, achieving a classification accuracy of 97.4 per cent on a test data set. The same architecture was then tested on a four-class problem where we let the network classify sources into compact and three classes of extended sources, achieving a test accuracy of 93.5 per cent. The best-performing convolutional neural network set-up has been verified against RGZ Data Release 1 where a final test accuracy of 94.8 per cent was obtained, using both original and augmented images. The use of sigma clipping does not offer a significant benefit overall, except in cases with a small number of training images.
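
The sigma clipping mentioned as an image preprocessing step can be sketched as follows (an illustrative iterative implementation, not the authors' pipeline):

```python
from math import sqrt

def sigma_clip(values, k=3.0, max_iter=5):
    """Iteratively discard values more than k standard deviations from
    the mean, e.g. to suppress bright outlier pixels in radio images."""
    vals = list(values)
    for _ in range(max_iter):
        mu = sum(vals) / len(vals)
        sd = sqrt(sum((v - mu) ** 2 for v in vals) / len(vals))
        kept = [v for v in vals if abs(v - mu) <= k * sd]
        if len(kept) == len(vals):
            break  # converged: nothing left to clip
        vals = kept
    return vals
```

Clipping discards extreme pixel values along with genuine faint structure, which is consistent with the finding above that it helps mainly when training images are scarce.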

  5. Reducing Sweeping Frequencies in Microwave NDT Employing Machine Learning Feature Selection

    PubMed Central

    Moomen, Abdelniser; Ali, Abdulbaset; Ramahi, Omar M.

    2016-01-01

Nondestructive Testing (NDT) assessment of materials’ health condition is useful for classifying healthy from unhealthy structures or detecting flaws in metallic or dielectric structures. Performing structural health testing for coated/uncoated metallic or dielectric materials with the same testing equipment requires a testing method that works on both metallics and dielectrics, such as microwave testing. Reducing the complexity and expense associated with current diagnostic practices of microwave NDT of structural health requires an effective and intelligent approach based on the feature selection and classification techniques of machine learning. Current microwave NDT methods are in general based on measuring variation in the S-matrix over the entire operating frequency range of the sensors. For instance, assessing the health of metallic structures using a microwave sensor depends on reflection and/or transmission coefficient measurements as a function of the sweeping frequencies of the operating band. The aim of this work is to reduce the number of sweeping frequencies using machine learning feature selection techniques. By treating sweeping frequencies as features, the most important features can be identified, and only the most influential features (frequencies) need be considered when building the microwave NDT equipment. The proposed method of reducing sweeping frequencies was validated experimentally using a waveguide sensor and a metallic plate with different cracks. Among the investigated feature selection techniques are information gain, gain ratio, relief, and chi-squared. The effectiveness of the selected features was validated through performance evaluations of various classification models, namely Nearest Neighbor, Neural Networks, Random Forest, and Support Vector Machine. Results showed good crack classification accuracy rates after employing feature selection algorithms. PMID:27104533
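
As an illustration of scoring candidate frequencies, the chi-squared statistic between a binarised feature (e.g. response above/below a threshold at one frequency) and a binary class label can be computed as follows (a sketch of the general technique, not the study's code):

```python
def chi2_score(feature, labels):
    """Chi-squared statistic between a binary feature and a binary class
    label; larger values indicate stronger association, so frequencies
    can be ranked by this score and only the top ones retained."""
    n = len(labels)
    score = 0.0
    for f in (0, 1):
        for c in (0, 1):
            observed = sum(x == f and y == c for x, y in zip(feature, labels))
            expected = feature.count(f) * labels.count(c) / n
            if expected:
                score += (observed - expected) ** 2 / expected
    return score
```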

  6. Crystallization tendency of active pharmaceutical ingredients following rapid solvent evaporation--classification and comparison with crystallization tendency from undercooled melts.

    PubMed

    Van Eerdenbrugh, Bernard; Baird, Jared A; Taylor, Lynne S

    2010-09-01

    In this study, the crystallization behavior of a variety of compounds was studied following rapid solvent evaporation using spin coating. Initial screening to determine model compound suitability was performed using a structurally diverse set of 51 compounds in three different solvent systems [dichloromethane (DCM), a 1:1 (w/w) dichloromethane/ethanol mixture (MIX), and ethanol (EtOH)]. Of this starting set of 153 drug-solvent combinations, 93 (40 compounds) were selected for further evaluation based on solubility, chemical solution stability, and processability criteria. These systems were spin coated and their crystallization was monitored using polarized light microscopy (7 days, dry conditions). The crystallization behavior of the samples could be classified as rapid (Class I: 39 cases), intermediate (Class II: 23 cases), or slow (Class III: 31 cases). The solvent system employed influenced the classification outcome for only four of the compounds. The compounds showed very diverse crystallization behavior. Comparison of the classification results with those of a previous study, in which cooling from the melt was used as the preparation technique, showed good agreement: 68% of the cases were classified identically. Multivariate analysis was performed using a set of relevant physicochemical compound characteristics, and a number of these parameters tended to differ between the classes. These differences could be further interpreted in terms of the nature of the crystallization process. Additional multivariate analysis on the separate classes of compounds indicated some potential for predicting the crystallization tendency of a given compound.

  7. 42 CFR 413.333 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... PROSPECTIVELY DETERMINED PAYMENT RATES FOR SKILLED NURSING FACILITIES Prospective Payment for Skilled Nursing... goods and services included in covered skilled nursing services. Resident classification system means a...

  8. 42 CFR 413.333 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... PROSPECTIVELY DETERMINED PAYMENT RATES FOR SKILLED NURSING FACILITIES Prospective Payment for Skilled Nursing... goods and services included in covered skilled nursing services. Resident classification system means a...

  9. Differential diagnosis of pleural mesothelioma using Logic Learning Machine.

    PubMed

    Parodi, Stefano; Filiberti, Rosa; Marroni, Paola; Libener, Roberta; Ivaldi, Giovanni Paolo; Mussap, Michele; Ferrari, Enrico; Manneschi, Chiara; Montani, Erika; Muselli, Marco

    2015-01-01

    Tumour markers are standard tools for the differential diagnosis of cancer. However, the occurrence of nonspecific symptoms and of different malignancies involving the same cancer site may lead to a high proportion of misclassifications. Classification accuracy can be improved by combining information from different markers using standard data mining techniques, such as the Decision Tree (DT), Artificial Neural Network (ANN), and k-Nearest Neighbour (KNN) classifiers. Unfortunately, each method suffers from unavoidable limitations. DT generally shows low classification performance, whereas ANN and KNN produce "black-box" classifications that provide no biological information useful for clinical purposes. The Logic Learning Machine (LLM) is an innovative method of supervised data analysis capable of building classifiers described by a set of intelligible rules with simple conditions in their antecedent part. It is essentially an efficient implementation of the Switching Neural Network model, and it reaches excellent classification accuracy while keeping the computational demand low. LLM was applied to data from a consecutive cohort of 169 patients admitted for diagnosis to two pulmonary departments in Northern Italy from 2009 to 2011. Patients included 52 with malignant pleural mesothelioma (MPM), 62 with pleural metastases (MTX) from other tumours, and 55 with benign diseases (BD) associated with pleurisies. The concentrations of three tumour markers (CEA, CYFRA 21-1 and SMRP) were measured in the pleural fluid of each patient, and a cytological examination was also carried out. The performance of LLM and of three competing methods (DT, KNN and ANN) was assessed by leave-one-out cross-validation. LLM outperformed all the other methods: global accuracy was 77.5% for LLM, 72.8% for DT, 54.4% for KNN, and 63.9% for ANN. In more detail, LLM correctly classified 79% of MPM, 66% of MTX and 89% of BD.
The corresponding figures for DT were MPM = 83%, MTX = 55% and BD = 84%; for KNN, MPM = 58%, MTX = 45%, BD = 62%; and for ANN, MPM = 71%, MTX = 47%, BD = 76%. Finally, LLM provided classification rules in very good agreement with a priori knowledge about the biological role of the considered tumour markers. LLM is a new flexible tool potentially useful for the differential diagnosis of pleural mesothelioma.
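
    The leave-one-out protocol used above is simple to state in code: every patient is classified by a model trained on all the other patients. The sketch below uses a trivial 1-nearest-neighbour classifier and invented two-marker data; it is not the LLM, just the evaluation loop.

```python
def nn_predict(train_X, train_y, x):
    """1-nearest-neighbour prediction with squared Euclidean distance."""
    dists = [sum((a - b) ** 2 for a, b in zip(row, x)) for row in train_X]
    return train_y[dists.index(min(dists))]

def loocv_accuracy(X, y, predict):
    """Leave-one-out CV: test each sample on a model built from all the others."""
    correct = 0
    for i in range(len(X)):
        held_out = predict(X[:i] + X[i + 1:], y[:i] + y[i + 1:], X[i])
        correct += held_out == y[i]
    return correct / len(X)

# Hypothetical (CEA, CYFRA 21-1) pleural-fluid levels for two diagnostic groups.
X = [[1.0, 2.0], [1.2, 1.8], [0.9, 2.2], [5.0, 6.0], [5.2, 5.8], [4.8, 6.3]]
y = ["BD", "BD", "BD", "MPM", "MPM", "MPM"]
print(loocv_accuracy(X, y, nn_predict))  # -> 1.0 on this well-separated toy set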
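
    The leave-one-out protocol used above is simple to state in code: every patient is classified by a model trained on all the other patients. The sketch below uses a trivial 1-nearest-neighbour classifier and invented two-marker data; it is not the LLM, just the evaluation loop.

```python
def nn_predict(train_X, train_y, x):
    """1-nearest-neighbour prediction with squared Euclidean distance."""
    dists = [sum((a - b) ** 2 for a, b in zip(row, x)) for row in train_X]
    return train_y[dists.index(min(dists))]

def loocv_accuracy(X, y, predict):
    """Leave-one-out CV: test each sample on a model built from all the others."""
    correct = 0
    for i in range(len(X)):
        held_out = predict(X[:i] + X[i + 1:], y[:i] + y[i + 1:], X[i])
        correct += held_out == y[i]
    return correct / len(X)

# Hypothetical (CEA, CYFRA 21-1) pleural-fluid levels for two diagnostic groups.
X = [[1.0, 2.0], [1.2, 1.8], [0.9, 2.2], [5.0, 6.0], [5.2, 5.8], [4.8, 6.3]]
y = ["BD", "BD", "BD", "MPM", "MPM", "MPM"]
print(loocv_accuracy(X, y, nn_predict))
```

    Any of the four classifiers compared in the study can be dropped in via the `predict` argument, which is what makes the comparison fair: each sees exactly the same train/test splits.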

  10. Ensemble of sparse classifiers for high-dimensional biological data.

    PubMed

    Kim, Sunghan; Scalzo, Fabien; Telesca, Donatello; Hu, Xiao

    2015-01-01

    Biological data are often high-dimensional while the number of samples is small. In such cases, classification performance can be improved by reducing the dimension of the data, which is referred to as feature selection. Recently, a novel feature selection method was proposed that exploits the sparsity of high-dimensional biological data, where a small subset of features accounts for most of the variance of the dataset. In this study we propose a new classification method for high-dimensional biological data that performs both feature selection and classification within a single framework. Our proposed method combines a sparse linear solution technique with the bootstrap aggregating algorithm. We tested its performance on four public mass spectrometry cancer datasets and compared it with two conventional classification techniques, Support Vector Machines and Adaptive Boosting. The results demonstrate that our proposed method classifies the various cancer datasets more accurately than these conventional techniques.
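
    Bootstrap aggregating itself is a short loop: resample the training set with replacement, fit a weak learner on each resample, and take a majority vote. The sketch below is a crude stand-in for the paper's method, with invented two-feature data and a one-feature threshold rule playing the role of the sparse base learner.

```python
import random

def stump_fit(X, y, feature):
    """Fit a one-feature threshold rule at the midpoint of the class means."""
    vals0 = [row[feature] for row, t in zip(X, y) if t == 0]
    vals1 = [row[feature] for row, t in zip(X, y) if t == 1]
    thresh = (sum(vals0) / len(vals0) + sum(vals1) / len(vals1)) / 2
    return feature, thresh

def stump_predict(model, x):
    feature, thresh = model
    return 1 if x[feature] > thresh else 0

def bagged_predict(X, y, x, n_models=25, seed=0):
    """Bootstrap aggregating: majority vote over learners trained on resamples,
    each restricted to one random feature (a crude sparsity device)."""
    rng = random.Random(seed)
    votes = used = 0
    n = len(X)
    for _ in range(n_models):
        idx = [rng.randrange(n) for _ in range(n)]  # bootstrap resample
        by = [y[i] for i in idx]
        if len(set(by)) < 2:        # degenerate one-class resample: skip it
            continue
        bx = [X[i] for i in idx]
        feat = rng.randrange(len(X[0]))
        votes += stump_predict(stump_fit(bx, by, feat), x)
        used += 1
    return 1 if votes * 2 > used else 0

# Hypothetical two-feature intensities for two sample classes (0 = control, 1 = cancer).
X = [[0.0, 1.0], [0.2, 0.8], [0.1, 1.1], [1.0, 2.0], [1.2, 1.9], [0.9, 2.1]]
y = [0, 0, 0, 1, 1, 1]
print(bagged_predict(X, y, [1.1, 2.0]), bagged_predict(X, y, [0.1, 0.9]))
```

    In the paper's framework the base learner is a sparse linear solver rather than a stump, but the aggregation logic is the same.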

  11. Localization of glossopharyngeal obstruction using nasopharyngeal tube versus Friedman tongue position classification in obstructive sleep apnea hypopnea syndrome.

    PubMed

    Li, Shuhua; Hei, Renyi; Wu, Dahai; Shi, Hongjin

    2014-08-01

    Assessing the severity of glossopharyngeal obstruction is important for the diagnosis and therapy of obstructive sleep apnea hypopnea syndrome (OSAHS). Polysomnography with nasopharyngeal tube insertion (NPT-PSG) has shown good results in assessing glossopharyngeal obstruction. The objective of this study was to compare NPT-PSG with the Friedman tongue position (FTP) classification, which is also used to evaluate glossopharyngeal obstruction. One hundred and five patients with OSAHS diagnosed by PSG were included in the study, and all were successfully examined by NPT-PSG. Based on FTP classification grade, the 105 patients were divided into four groups. Differences in general clinical data and in PSG and NPT-PSG results were analyzed among the four groups, and the coincidence of the two methods in diagnosing glossopharyngeal obstruction was calculated. There was no significant difference among the four groups in general clinical data or PSG results. However, NPT-PSG results differed significantly among the four groups: as FTP grade increased, the apnea hypopnea index increased and the lowest blood oxygen saturation decreased. Thirty-eight patients with and another 38 patients without glossopharyngeal obstruction were diagnosed identically by both methods, giving a coincidence of 72.4%. NPT-PSG is an easy and effective method for assessing the severity of glossopharyngeal obstruction, and its coincidence with FTP classification is good. However, in some special OSAHS patients, such as those with glossoptosis, unsuccessful uvulopalatopharyngoplasty or suspected pachyglossia, NPT-PSG is better than FTP classification.
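
    The reported coincidence follows directly from the counts in the abstract: 38 patients judged obstructed by both methods plus 38 judged unobstructed by both, out of 105.

```python
# Counts reported in the study: patients diagnosed identically by both methods.
concordant_with = 38       # glossopharyngeal obstruction on both NPT-PSG and FTP
concordant_without = 38    # no obstruction on both
total = 105

coincidence = (concordant_with + concordant_without) / total
print(f"{coincidence:.1%}")  # -> 72.4%
```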

  12. 32 CFR 2001.16 - Fundamental classification guidance review.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 6 2010-07-01 2010-07-01 false Fundamental classification guidance review. 2001... INFORMATION Classification § 2001.16 Fundamental classification guidance review. (a) Performance of fundamental classification guidance reviews. An initial fundamental classification guidance review shall be...

  13. A statistical framework for evaluating neural networks to predict recurrent events in breast cancer

    NASA Astrophysics Data System (ADS)

    Gorunescu, Florin; Gorunescu, Marina; El-Darzi, Elia; Gorunescu, Smaranda

    2010-07-01

    Breast cancer is the second leading cause of cancer deaths in women today. Sometimes, breast cancer can return after primary treatment, and a medical diagnosis of recurrent cancer is often a more challenging task than the initial one. In this paper, we investigate the potential contribution of neural networks (NNs) to support health professionals in diagnosing such events. The NN algorithms are tested and applied to two different datasets, and an extensive statistical analysis has been performed to verify our experiments. The results show that a simple network structure can produce equally good results for both the multi-layer perceptron and the radial basis function, that not all attributes are needed to train these algorithms, and that the classification performances of all algorithms are statistically robust. Moreover, we show that the best-performing algorithm depends strongly on the features of the dataset, and hence there is not necessarily a single best classifier.

  14. The Fracturing of China? Ethnic Separatism and Political Violence in the Xinjiang Uyghur Autonomous Region

    DTIC Science & Technology

    2007-09-01

    Keywords: deprivation, rational choice. Approved for public release; distribution is unlimited. ...psychological, or erotic in nature. This argument purports that when individuals participate within a group for the advancement of collective good, they

  15. Improved Frame Mode Selection for AMR-WB+ Based on Decision Tree

    NASA Astrophysics Data System (ADS)

    Kim, Jong Kyu; Kim, Nam Soo

    In this letter, we propose a coding mode selection method for the AMR-WB+ audio coder based on a decision tree. To reduce computation while maintaining good performance, a decision tree classifier is adopted, with the closed-loop mode selection results as the target classification labels. The size of the decision tree is controlled by pruning, so the proposed method does not significantly increase the memory requirement. In an evaluation on a database covering both speech and music material, the proposed method achieves much better mode selection accuracy than the open-loop mode selection module in AMR-WB+.
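
    The training setup can be sketched with a depth-1 tree (a stump) standing in for the full pruned decision tree: per-frame features are fitted against the labels an expensive closed-loop search would produce. The features (a tonality measure and an energy ratio) and all numbers below are hypothetical.

```python
def fit_stump(frames, labels):
    """Depth-1 decision tree: pick the (feature, threshold) split that best
    reproduces the closed-loop mode decisions."""
    best = (0, 0.0, -1.0)
    n = len(labels)
    for f in range(len(frames[0])):
        for t in sorted({row[f] for row in frames}):
            pred = [1 if row[f] > t else 0 for row in frames]
            acc = sum(p == y for p, y in zip(pred, labels)) / n
            acc = max(acc, 1 - acc)          # allow the inverted split
            if acc > best[2]:
                best = (f, t, acc)
    return best

# Hypothetical per-frame features with mode labels from a closed-loop
# search: 0 = ACELP (speech-like), 1 = TCX (music-like).
frames = [(0.10, 0.90), (0.20, 0.80), (0.15, 0.85),
          (0.70, 0.30), (0.80, 0.20), (0.75, 0.25)]
closed_loop = [0, 0, 0, 1, 1, 1]
feat, thresh, acc = fit_stump(frames, closed_loop)
print(feat, acc)
```

    A real tree would recurse on each split and then be pruned back; at decode time only a few threshold comparisons remain, which is the source of the computational saving.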

  16. On the evaluation of the fidelity of supervised classifiers in the prediction of chimeric RNAs.

    PubMed

    Beaumeunier, Sacha; Audoux, Jérôme; Boureux, Anthony; Ruffle, Florence; Commes, Thérèse; Philippe, Nicolas; Alves, Ronnie

    2016-01-01

    High-throughput sequencing technology and bioinformatics have identified chimeric RNAs (chRNAs), raising the possibility that chRNAs expressed specifically in disease could be used as biomarkers for both diagnosis and prognosis. The task of discriminating true chRNAs from false ones poses an interesting Machine Learning (ML) challenge. First of all, the sequencing data may contain false reads due to technical artifacts, and during analysis bioinformatics tools may generate false positives due to methodological biases. Moreover, even if we succeed in assembling a proper set of observations (enough sequencing data) about true chRNAs, chances are that the devised model will not generalize beyond it. As in any other machine learning problem, the first big issue is finding good data on which to build models. To our knowledge, no common benchmark data are available for chRNA detection, and the definition of a classification baseline is also lacking in the related literature. In this work we move towards benchmark data and an evaluation of the fidelity of supervised classifiers in the prediction of chRNAs. We propose a modelization strategy, based on a simulated data generator that permits the continuous integration of new complex chimeric events, which can be used to increase tool performance in the context of chRNA classification. The pipeline incorporates a genome mutation process and simulates RNA-seq data. Reads at distinct depths were aligned and analysed by CRAC, which integrates genomic location and local coverage, allowing biological predictions at the read scale. Additionally, these reads were functionally annotated and aggregated to form chRNA events, making it possible to evaluate the performance of ML methods (classifiers) at both the read and the event level. Ensemble learning strategies proved more robust for this classification problem, providing an average AUC performance of 95% (ACC = 94%, Kappa = 0.87).
The resulting classification models were also tested on real RNA-seq data from a set of twenty-seven patients with acute myeloid leukemia (AML).
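
    The AUC figure reported above can be computed directly from classifier scores via the Mann-Whitney formulation: the probability that a randomly chosen positive outscores a randomly chosen negative. The scores and labels below are invented for illustration.

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical classifier scores for true (1) and false (0) chimeric events.
scores = [0.95, 0.90, 0.80, 0.60, 0.40, 0.30, 0.20, 0.10]
labels = [1, 1, 1, 0, 1, 0, 0, 0]
print(auc(scores, labels))
```

    Unlike accuracy, this statistic is insensitive to the score threshold, which matters here because true and false chimeric events are typically imbalanced.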

  17. Cough event classification by pretrained deep neural network.

    PubMed

    Liu, Jia-Ming; You, Mingyu; Wang, Zheng; Li, Guo-Zheng; Xu, Xianghuai; Qiu, Zhongmin

    2015-01-01

    Cough is an essential symptom in respiratory diseases, and for measuring cough severity an accurate and objective cough monitor is expected by the respiratory disease community. This paper introduces a better-performing algorithm, the pretrained deep neural network (DNN), to the cough classification problem, which is a key step in a cough monitor. The deep neural network models are built in two steps, pretraining and fine-tuning, followed by a Hidden Markov Model (HMM) decoder that captures the temporal information of the audio signals. Unsupervised pretraining of a deep belief network learns a good initialization for the deep neural network. The fine-tuning step then uses back-propagation to tune the network so that it can predict the observation probability associated with each HMM state, where the HMM states are originally obtained by forced alignment with a Gaussian Mixture Model Hidden Markov Model (GMM-HMM) on the training samples. Three cough HMMs and one noncough HMM are employed to model coughs and noncoughs, respectively. The final decision is made with the Viterbi decoding algorithm, which generates the most likely HMM sequence for each sample; a sample is labeled as cough if a cough HMM appears in the sequence. The experiments were conducted on a dataset collected from 22 patients with respiratory diseases. Patient-dependent (PD) and patient-independent (PI) experimental settings were used to evaluate the models. Five criteria (sensitivity, specificity, F1, macro average and micro average) are reported to depict different aspects of the models. On the overall evaluation criteria, the DNN-based methods are superior to the traditional GMM-HMM method on F1 and micro average, with maximal error reductions of 14% and 11% in PD and 7% and 10% in PI, while keeping similar performance on macro average. They also surpass the GMM-HMM model on specificity, with a maximal 14% error reduction in both PD and PI.
In this paper, we applied a pretrained deep neural network to the cough classification problem. Our results showed that, compared with the conventional GMM-HMM framework, the HMM-DNN achieves better overall performance on the cough classification task.
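
    The HMM decoding step can be sketched as a plain Viterbi pass over per-frame state scores. The two-state model below (0 = noncough, 1 = cough), its transition probabilities, and the frame likelihoods are all hypothetical stand-ins for the DNN's outputs; the paper's actual decoder uses three cough HMMs plus a noncough HMM, but the recursion is identical.

```python
from math import log

def viterbi(obs_logprob, log_trans, log_init):
    """Most likely state path given per-frame log observation scores
    (as a neural network would emit), log transitions and log initial probs."""
    n_states = len(log_init)
    delta = [[log_init[s] + obs_logprob[0][s] for s in range(n_states)]]
    back = []
    for t in range(1, len(obs_logprob)):
        row, ptr = [], []
        for s in range(n_states):
            prev = max(range(n_states), key=lambda p: delta[-1][p] + log_trans[p][s])
            row.append(delta[-1][prev] + log_trans[prev][s] + obs_logprob[t][s])
            ptr.append(prev)
        delta.append(row)
        back.append(ptr)
    state = max(range(n_states), key=lambda s: delta[-1][s])
    seq = [state]
    for ptr in reversed(back):   # trace the best path backwards
        state = ptr[state]
        seq.append(state)
    return seq[::-1]

# Two states: 0 = noncough, 1 = cough; all probabilities are illustrative.
log_init = [log(0.5), log(0.5)]
log_trans = [[log(0.7), log(0.3)], [log(0.3), log(0.7)]]
frames = [[log(0.8), log(0.2)], [log(0.2), log(0.8)],
          [log(0.2), log(0.8)], [log(0.8), log(0.2)]]
path = viterbi(frames, log_trans, log_init)
print(path)  # a cough is detected wherever state 1 appears
```

    Working in log space avoids underflow on long recordings; the sample-level decision reduces to checking whether any cough state appears in the decoded path.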

  18. How does the ball influence the performance of change of direction and sprint tests in para-footballers with brain impairments? Implications for evidence-based classification in CP-Football

    PubMed Central

    2017-01-01

    The aims of this study were: i) to analyze the reliability and validity of three tests that require sprinting (10 m, 25 m, 40 m), accelerations/decelerations (Stop and Go Test) and change of direction (Illinois Agility Test), with and without ball, in para-footballers with neurological impairments, and ii) to compare the performance in the tests when ball dribbling is required and to explore the practical implications for evidence-based classification in cerebral palsy (CP)-Football. Eighty-two international para-footballers (25.2 ± 6.8 years; 68.7 ± 8.3 kg; 175.3 ± 7.4 cm; 22.5 ± 2.7 kg·m-2), classified according to the International Federation of Cerebral Palsy Football (IFCPF) Classification Rules (classes FT5-FT8), participated in the study. A group of 31 players without CP was also included in the study as a control group. The para-footballers showed good reliability scores in all tests, with and without ball (ICC = 0.53–0.95, SEM = 2.5–9.8%). Nevertheless, the inclusion of the ball influenced testing reproducibility. The low or moderate relationships shown among sprint, acceleration/deceleration and change of direction tests with and without ball also evidenced that they measure different capabilities. Significant differences and large effect sizes (0.53 < ηp2 < 0.97; p < 0.05) were found when para-footballers performed the tests with and without dribbling the ball. Players with moderate neurological impairments (i.e. FT5, FT6, and FT7) had higher coefficients of variation in the trial requiring ball dribbling. For all the tests, we also obtained between-group (FT5-FT8) statistical and large practical differences (ηp2 = 0.35–0.62, large; p < 0.01). The proposed sprint, acceleration/deceleration and change of direction tests with and without ball may be applicable for classification purposes, that is, evaluation of activity limitation from neurological impairments, or decision-making between current CP-Football classes. PMID:29099836

  19. Using USEPA'S Final Ecosystem Goods and Services (FEGS) Classification With EnviroAtlas As The Basic Framework To Describe Natures Benefits and Beneficiaries

    EPA Science Inventory

    Ecosystem Services have received increasing scientific focus for a decade, yet the natural and social scientists working on mainstreaming these concepts are still struggling with the task. FEGS (Final Ecosystem Goods and Services) are an informative and useful concept as they emb...

  20. The multiscale classification system and grid encoding mode of ecological land in China

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Liu, Aixia; Lin, Yifan

    2017-10-01

    Ecological land provides goods and services that benefit, directly or indirectly, the eco-environment and human welfare. In recent years, research on ecological land has become important in the fields of land change and ecosystem management. In this study, a multi-scale classification scheme for ecological land was developed for land management, based on a combination of land-use classification and ecological function zoning in China, comprising eco-zone, eco-region, eco-district, land ecosystem, and ecological land-use type. The geographical spatial units become more homogeneous from the macro to the micro scale. The "ecological land-use type" is the smallest unit and is important for maintaining the key ecological processes in a land ecosystem. Ecological land-use types were categorized as main-functional or multi-functional according to their ecological function attributes and production function attributes. A main-functional type is a land-use type that mainly provides ecological goods and functions, such as rivers, lakes, swampland, shoaly land, glaciers and snow, whereas a multi-functional type provides productive goods and functions in addition to ecological ones, such as arable land, forestry land, and grassland. Furthermore, a six-level grid encoding mode was proposed for the modern management of ecological land and for data updating under cadastral encoding. The irregular grid encoding, from macro to micro scale, comprises eco-zone, eco-region, eco-district, cadastral area, land ecosystem, land ownership type, ecological land-use type, and parcel. Finally, methodologies for ecosystem management were discussed with a view to the integrated management of natural resources in China.

  1. Implementing Safety Measures

    EPA Pesticide Factsheets

    Required risk mitigation measures for soil fumigants protect handlers, applicators, and bystanders from pesticide exposure. Measures include buffer zones, sign posting, good agricultural practices, restricted use pesticide classification, and FMPs.

  2. Serotonin transporter bi- and triallelic genotypes and their relationship with anxiety and academic performance: a preliminary study.

    PubMed

    Calapoğlu, Mustafa; Sahin-Calapoğlu, Nilufer; Karaçöp, Ataman; Soyöz, Mustafa; Elyıldırım, Umit Y; Avşaroğlu, Selahattin

    2011-01-01

    Considerable evidence suggests that variation in the serotonin-transporter-linked promoter region (5-HTTLPR) is associated with anxiety-related traits, and academic outcomes are closely related to trait anxiety. This preliminary study aimed to explore the association between academic performance and anxiety levels with respect to the bi- and triallelic classification of the 5-HTTLPR polymorphism of the 5-HTT gene in teacher candidates. Spielberger's State-Trait Anxiety Inventory, the Selection Examination for Professional Posts in Public Organizations (KPSS) and 5-HTTLPR genotypes were used to investigate a group of 94 healthy teacher candidates. Higher anxiety scores were significantly associated with the S'S' genotype. There was no direct, statistically significant association between academic performance and the genotypic groups under either the bi- or the triallelic classification; however, students with L'L' or LL genotypes had the lowest levels of trait anxiety and the poorest academic performance. Additionally, there was a significant positive correlation between academic performance and anxiety levels. These findings support the idea that the S and L(G) alleles are associated with anxiety-related traits, and that the S'S' genotype may be a good indicator of anxiety-related traits in a sample from the Turkish population. A certain degree of anxiety is considered a motivation for learning and high academic performance; nevertheless, the 5-HTTLPR polymorphism of the 5-HTT gene may be one of the genetic factors affecting academic performance through anxiety levels. Incorporating anxiety management training into the educational process, addressing both environmental and individual factors, will play an important role in improving effective strategies for student personality services, development, and planning. © 2010 S. Karger AG, Basel.

  3. A novel method to guide classification of para swimmers with limb deficiency.

    PubMed

    Hogarth, Luke; Payton, Carl; Van de Vliet, Peter; Connick, Mark; Burkett, Brendan

    2018-05-30

    The International Paralympic Committee has directed International Federations that govern Para sports to develop evidence-based classification systems. This study defined the impact of limb deficiency impairment on 100 m freestyle performance to guide an evidence-based classification system in Para Swimming, which will be implemented following the 2020 Tokyo Paralympic games. Impairment data and competitive race performances of 90 international swimmers with limb deficiency were collected. Ensemble partial least squares regression established the relationship between relative limb length measures and competitive 100 m freestyle performance. The model explained 80% of the variance in 100 m freestyle performance, and found hand length and forearm length to be the most important predictors of performance. Based on the results of this model, Para swimmers were clustered into four-, five-, six- and seven-class structures using nonparametric kernel density estimations. The validity of these classification structures, and effectiveness against the current classification system, were examined by establishing within-class variations in 100 m freestyle performance and differences between adjacent classes. The derived classification structures were found to be more effective than current classification based on these criteria. This study provides a novel method that can be used to improve the objectivity and transparency of decision-making in Para sport classification. Expert consensus from experienced coaches, Para swimmers, classifiers and sport science and medicine personnel will benefit the translation of these findings into a revised classification system that is accepted by the Para swimming community. This article is protected by copyright. All rights reserved.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gissi, Andrea; Dipartimento di Farmacia – Scienze del Farmaco, Università degli Studi di Bari “Aldo Moro”, Via E. Orabona 4, 70125 Bari; Lombardo, Anna

    The bioconcentration factor (BCF) is an important bioaccumulation hazard assessment metric in many regulatory contexts. Its assessment is required by the REACH regulation (Registration, Evaluation, Authorization and Restriction of Chemicals) and by CLP (Classification, Labeling and Packaging). We challenged nine well-known and widely used BCF QSAR models against 851 compounds stored in an ad-hoc created database. The goodness of the regression analysis was assessed by the determination coefficient (R²) and the Root Mean Square Error (RMSE); Cooper's statistics and the Matthews Correlation Coefficient (MCC) were calculated for all the thresholds relevant for regulatory purposes (i.e. 100 L/kg for Chemical Safety Assessment; 500 L/kg for Classification and Labeling; 2000 and 5000 L/kg for Persistent, Bioaccumulative and Toxic (PBT) and very Persistent, very Bioaccumulative (vPvB) assessment) to assess the classification, with particular attention to the models' ability to control the occurrence of false negatives. As a first step, statistical analysis was performed on the predictions for the entire dataset; R² > 0.70 was obtained with the CORAL, T.E.S.T. and EPISuite Arnot-Gobas models. As classifiers, ACD and log P-based equations were the best in terms of sensitivity, ranging from 0.75 to 0.94. External compound predictions were carried out for the models that had their own training sets. The CORAL model returned the best performance (external R² = 0.59), followed by the EPISuite Meylan model (external R² = 0.58). The latter also gave the highest sensitivity on external compounds, with values from 0.55 to 0.85 depending on the threshold. Statistics were also compiled for compounds falling into the models' Applicability Domain (AD), giving better performances. In this respect, VEGA CAESAR was the best model in terms of regression (R² = 0.94) and classification (average sensitivity > 0.80).
This model also showed the best regression (R² = 0.85) and sensitivity (average > 0.70) for new compounds in the AD but not present in the training set. However, no single optimal model exists, so a case-by-case assessment would be wise; integrating the wealth of information from multiple models remains the winning approach. - Highlights: • REACH encourages the use of in silico methods in the assessment of chemical safety. • The performances of nine BCF models were evaluated on a benchmark database of 851 chemicals. • We compared the models on the basis of both regression and classification performance. • Statistics for chemicals outside the training set and/or within the applicability domain were compiled. • The results show that QSAR models are useful as weight-of-evidence in support of other methods.
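
    The evaluation statistics used above (R², RMSE, MCC) have simple closed forms. A minimal sketch with invented log-BCF predictions and an invented confusion matrix for one classification threshold:

```python
from math import sqrt

def r_squared(y_true, y_pred):
    """Coefficient of determination."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

def rmse(y_true, y_pred):
    """Root mean square error."""
    return sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from a 2x2 confusion matrix."""
    den = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / den if den else 0.0

# Invented log-BCF predictions vs. measured values, plus an invented
# confusion matrix for a single regulatory threshold.
measured = [1.0, 2.0, 3.0, 4.0, 5.0]
predicted = [1.1, 1.9, 3.2, 3.8, 5.1]
print(round(r_squared(measured, predicted), 3), round(rmse(measured, predicted), 3))
print(round(mcc(tp=40, tn=30, fp=5, fn=10), 3))
```

    Sensitivity, the study's focus for controlling false negatives, is simply tp / (tp + fn) on the same confusion matrix.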

  5. Classification problems of Mount Kenya soils

    NASA Astrophysics Data System (ADS)

    Mutuma, Evans; Csorba, Ádám; Wawire, Amos; Dobos, Endre; Michéli, Erika

    2017-04-01

    Soil sampling of the agricultural lands covering 1200 square kilometers in the eastern part of Mount Kenya was carried out to assess the status of soil organic carbon (SOC) as a soil fertility indicator and to create an up-to-date soil classification map. The geology of the area consists of volcanic rocks and recent superficial deposits. The volcanic rocks date to the Pliocene and comprise mainly lahars, phonolites, tuffs, basalts and ashes. A total of 28 open profiles and 49 augered profiles, with 269 samples, were collected. The samples were analyzed for total carbon, organic carbon, particle size distribution, percent bases, cation exchange capacity and pH, among other parameters. The objective of the study was to evaluate the variability of SOC in different Reference Soil Groups (RSG) and to compare the determined classification units with the KENSOTER database. Soil classification was performed according to the World Reference Base (WRB) for Soil Resources 2014. Based on earlier surveys and the geological and environmental setting, Nitisols were expected to be the dominant soils of the sampled area. However, this was not the case. The major differences from the earlier survey data (KENSOTER database) are the presence of high-activity clays (CEC range 27.6-70 cmol/kg), high silt content (range 32.6%-52.4%) and the silt/clay ratio (range 0.6-1.4), which keep these soils out of the Nitisols RSG. The morphological features were in good accordance with the earlier survey, but the soils failed the silt/clay ratio criterion for Nitisols. This observation calls attention to the need for new classification criteria for Nitisols and other soils of warm, humid regions with variable rates of weathering, to avoid difficulties in interpretation. To address the classification problem, this paper further discusses the taxonomic relationships between the studied soils.
Nevertheless, most of the diagnostic elements (such as the presence of an Umbric horizon and Vitric and Andic properties) and some qualifiers (Humic, Dystric, Clayic, Skeletic, Leptic, etc.) represent useful information for land use and management in the area.

  6. Multi-element analysis of wines by ICP-MS and ICP-OES and their classification according to geographical origin in Slovenia.

    PubMed

    Selih, Vid S; Sala, Martin; Drgan, Viktor

    2014-06-15

    Inductively coupled plasma mass spectrometry and optical emission spectrometry were used to determine the multi-element composition of 272 bottled Slovenian wines. To achieve geographical classification of the wines by their elemental composition, principal component analysis (PCA) and counter-propagation artificial neural networks (CPANN) were used. Of the 49 elements measured, 19 were used to build the final classification models. CPANN was used for the final predictions because of its superior results. The best model gave 82% correct predictions for the external set of white wine samples. Taking into account the small size of the Slovenian wine-growing regions, we consider the classification results very good. For the red wines, which mostly came from one region, even sub-region classification was possible with high precision. From the level maps of the CPANN model, some of the elements most important for classification were identified. Copyright © 2013 Elsevier Ltd. All rights reserved.
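
    The PCA step can be sketched as an eigendecomposition of the covariance matrix of the element concentrations. The data below are synthetic stand-ins for two hypothetical wine regions differing in their element means; the CPANN stage is not shown.

```python
import numpy as np

def pca(X, n_components):
    """Scores of X on its top principal components (eigenvectors of the
    covariance matrix, ordered by decreasing explained variance)."""
    Xc = X - X.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(eigvals)[::-1][:n_components]
    return Xc @ eigvecs[:, order], eigvals[order]

# Synthetic element concentrations (rows: wines; columns: three elements),
# drawn around different means for two hypothetical regions.
rng = np.random.default_rng(0)
region_a = rng.normal([10.0, 5.0, 1.0], 0.1, size=(5, 3))
region_b = rng.normal([20.0, 9.0, 1.0], 0.1, size=(5, 3))
scores, variances = pca(np.vstack([region_a, region_b]), 2)
# The first component separates the two regions (opposite-signed score means).
print(scores[:5, 0].mean() * scores[5:, 0].mean() < 0)
```

    In the study, PCA serves as an exploratory projection of the 19 selected elements before the CPANN is trained for the actual region predictions.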

  7. Dollar Summary of Federal Supply Classification and Service Category by Company, FY83, Part 4 (AA92-N063).

    DTIC Science & Technology

    1983-01-01

    [Fragmentary table excerpt: the report tabulates RDT&E dollar totals by company (e.g., Integrated Systems Analysts Inc, Interactive Systems Inc, Litton Systems Inc), state, service branch (Army, Navy, USAF) and Federal Supply Classification category.]

  8. A canonical correlation analysis based EMG classification algorithm for eliminating electrode shift effect.

    PubMed

    Zhe Fan; Zhong Wang; Guanglin Li; Ruomei Wang

    2016-08-01

    Motion classification systems based on surface electromyography (sEMG) pattern recognition have achieved good results under experimental conditions, but clinical implementation and practical application remain a challenge. Many factors contribute to the difficulty of clinical use of EMG-based dexterous control; the most obvious and important is noise in the EMG signal caused by electrode shift, muscle fatigue, motion artifact, inherent signal instability and biological signals such as the electrocardiogram. In this paper, a novel method based on Canonical Correlation Analysis (CCA) was developed to eliminate the reduction in classification accuracy caused by electrode shift. The average classification accuracy of our method was above 95% for the healthy subjects. In the process, we validated the influence of electrode shift on motion classification accuracy and found a strong correlation (correlation coefficient > 0.9) between shift-position data and normal-position data.

  9. New feature extraction method for classification of agricultural products from x-ray images

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.; Lee, Ha-Woon; Keagy, Pamela M.; Schatzki, Thomas F.

    1999-01-01

    Classification of real-time x-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items. This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k nearest neighbor classifier. In this work the MRDF is applied to standard features. The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC data.

  10. Pathway activity inference for multiclass disease classification through a mathematical programming optimisation framework.

    PubMed

    Yang, Lingjian; Ainali, Chrysanthi; Tsoka, Sophia; Papageorgiou, Lazaros G

    2014-12-05

    Applying machine learning methods to microarray gene expression profiles for disease classification problems is a popular way to derive biomarkers, i.e. sets of genes that can predict disease state or outcome. Traditional approaches, in which the expression of genes was treated independently, suffer from low prediction accuracy and difficulty of biological interpretation. Current research efforts focus on integrating information on protein interactions, through biochemical pathway datasets, with expression profiles to propose pathway-based classifiers that can enhance disease diagnosis and prognosis. As most of the pathway activity inference methods in the literature are either unsupervised or applied to two-class datasets, there is good scope to address such limitations by proposing novel methodologies. A supervised multiclass pathway activity inference method using optimisation techniques is reported. For each pathway expression dataset, patterns of its constituent genes are summarised into one composite feature, termed pathway activity, and a novel mathematical programming model is proposed to infer this feature as a weighted linear summation of the expression of its constituent genes. Gene weights are determined by the optimisation model so that the resulting pathway activity has the optimal discriminative power with regard to disease phenotypes. Classification is then performed on the resulting low-dimensional pathway activity profile. The model was evaluated on a variety of published gene expression profiles covering different types of disease. We show that not only does it improve classification accuracy, but it can also perform well on multiclass disease datasets, a limitation of other approaches from the literature. Desirable features of the model include the ability to control the maximum number of genes that may participate in determining pathway activity, which may be pre-specified by the user. 
Overall, this work highlights the potential of building pathway-based multi-phenotype classifiers for accurate disease diagnosis and prognosis problems.
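As a rough, hypothetical stand-in for the paper's mathematical programming model, the idea of discriminative gene weights can be illustrated with LDA, collapsing one pathway's genes into a single activity feature (all data and dimensions below are synthetic assumptions):

```python
# Illustrative stand-in only: the paper infers gene weights with a
# mathematical programming model; here LDA supplies discriminative
# weights so a pathway's genes collapse into one composite "activity".
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
n_samples, n_genes = 90, 12                 # one hypothetical pathway
y = np.repeat([0, 1, 2], 30)                # three disease phenotypes
X = rng.normal(size=(n_samples, n_genes)) + 0.7 * y[:, None]

lda = LinearDiscriminantAnalysis(n_components=1).fit(X, y)
weights = lda.scalings_[:, 0]               # per-gene weights
activity = X @ weights                      # weighted linear summation

print(activity.shape)                       # one activity value per sample
```

Classification would then run on these low-dimensional activity values rather than on the full gene expression matrix.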

  11. Feature Selection for Chemical Sensor Arrays Using Mutual Information

    PubMed Central

    Wang, X. Rosalind; Lizier, Joseph T.; Nowotny, Thomas; Berna, Amalia Z.; Prokopenko, Mikhail; Trowell, Stephen C.

    2014-01-01

    We address the problem of feature selection for classifying a diverse set of chemicals using an array of metal oxide sensors. Our aim is to evaluate a filter approach to feature selection with reference to previous work, which used a wrapper approach on the same data set, and established best features and upper bounds on classification performance. We selected feature sets that exhibit the maximal mutual information with the identity of the chemicals. The selected features closely match those found to perform well in the previous study using a wrapper approach to conduct an exhaustive search of all permitted feature combinations. By comparing the classification performance of support vector machines (using features selected by mutual information) with the performance observed in the previous study, we found that while our approach does not always give the maximum possible classification performance, it always selects features that achieve classification performance approaching the optimum obtained by exhaustive search. We performed further classification using the selected feature set with some common classifiers and found that, for the selected features, Bayesian Networks gave the best performance. Finally, we compared the observed classification performances with the performance of classifiers using randomly selected features. We found that the selected features consistently outperformed randomly selected features for all tested classifiers. The mutual information filter approach is therefore a computationally efficient method for selecting near optimal features for chemical sensor arrays. PMID:24595058
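The filter approach can be sketched as follows, using synthetic data in place of the sensor-array measurements (feature counts and class structure are illustrative assumptions):

```python
# Minimal filter-style feature selection sketch (synthetic data, not the
# paper's sensor array): rank features by mutual information with the
# class label and keep the top k.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(3)
y = rng.integers(0, 3, size=300)                      # three "chemicals"
informative = y[:, None] + rng.normal(0, 0.5, (300, 4))
noise = rng.normal(size=(300, 16))
X = np.hstack([informative, noise])                   # 20 candidate features

mi = mutual_info_classif(X, y, random_state=0)
top_k = np.argsort(mi)[::-1][:4]                      # filter step
clf = GaussianNB().fit(X[:, top_k], y)                # classify on selection

print(sorted(int(i) for i in top_k))                  # informative columns
```

Unlike a wrapper search over all feature combinations, this filter needs only one pass of mutual information estimates, which is what makes it computationally cheap.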

  12. Classification of ASKAP Vast Radio Light Curves

    NASA Technical Reports Server (NTRS)

    Rebbapragada, Umaa; Lo, Kitty; Wagstaff, Kiri L.; Reed, Colorado; Murphy, Tara; Thompson, David R.

    2012-01-01

    The VAST survey is a wide-field survey that observes with unprecedented instrument sensitivity (0.5 mJy or lower) and repeat cadence (a goal of 5 seconds) that will enable novel scientific discoveries related to known and unknown classes of radio transients and variables. Given the unprecedented observing characteristics of VAST, it is important to estimate source classification performance, and determine best practices prior to the launch of ASKAP's BETA in 2012. The goal of this study is to identify light curve characterization and classification algorithms that are best suited for archival VAST light curve classification. We perform our experiments on light curve simulations of eight source types and achieve best case performance of approximately 90% accuracy. We note that classification performance is most influenced by light curve characterization rather than classifier algorithm.

  13. Land Cover Classification in a Complex Urban-Rural Landscape with Quickbird Imagery

    PubMed Central

    Moran, Emilio Federico.

    2010-01-01

    High spatial resolution images have been increasingly used for urban land use/cover classification, but the high spectral variation within the same land cover, the spectral confusion among different land covers, and the shadow problem often lead to poor classification performance based on the traditional per-pixel spectral-based classification methods. This paper explores approaches to improve urban land cover classification with Quickbird imagery. Traditional per-pixel spectral-based supervised classification, incorporation of textural images and multispectral images, spectral-spatial classifier, and segmentation-based classification are examined in a relatively new developing urban landscape, Lucas do Rio Verde in Mato Grosso State, Brazil. This research shows that use of spatial information during the image classification procedure, either through the integrated use of textural and spectral images or through the use of segmentation-based classification method, can significantly improve land cover classification performance. PMID:21643433

  14. Local linear discriminant analysis framework using sample neighbors.

    PubMed

    Fan, Zizhu; Xu, Yong; Zhang, David

    2011-07-01

    The linear discriminant analysis (LDA) is a very popular linear feature extraction approach. LDA algorithms usually perform well under two assumptions: first, that the global data structure is consistent with the local data structure; second, that the input data classes have Gaussian distributions. However, in real-world applications these assumptions are not always satisfied. In this paper, we propose an improved LDA framework, the local LDA (LLDA), which can perform well without satisfying the above two assumptions. Our LLDA framework can effectively capture the local structure of samples. According to the type of local data structure, the LLDA framework incorporates several different forms of linear feature extraction, such as classical LDA and principal component analysis. The proposed framework includes two LLDA algorithms: a vector-based LLDA algorithm and a matrix-based LLDA (MLLDA) algorithm. MLLDA is directly applicable to image recognition, such as face recognition. Our algorithms need to train only a small portion of the whole training set before testing a sample, making them suitable for learning large-scale databases, especially when the input data dimensions are very high, while achieving high classification accuracy. Extensive experiments show that the proposed algorithms obtain good classification results.

  15. Supervised DNA Barcodes species classification: analysis, comparisons and results

    PubMed Central

    2014-01-01

    Background Specific fragments, coming from short portions of DNA (e.g., mitochondrial, nuclear, and plastid sequences), have been defined as DNA Barcode and can be used as markers for organisms of the main life kingdoms. Species classification with DNA Barcode sequences has been proven effective on different organisms. Indeed, specific gene regions have been identified as Barcode: COI in animals, rbcL and matK in plants, and ITS in fungi. The classification problem assigns an unknown specimen to a known species by analyzing its Barcode. This task has to be supported with reliable methods and algorithms. Methods In this work the efficacy of supervised machine learning methods to classify species with DNA Barcode sequences is shown. The Weka software suite, which includes a collection of supervised classification methods, is adopted to address the task of DNA Barcode analysis. Classifier families are tested on synthetic and empirical datasets belonging to the animal, fungus, and plant kingdoms. In particular, the function-based method Support Vector Machines (SVM), the rule-based RIPPER, the decision tree C4.5, and the Naïve Bayes method are considered. Additionally, the classification results are compared with respect to ad-hoc and well-established DNA Barcode classification methods. Results A software that converts the DNA Barcode FASTA sequences to the Weka format is released, to adapt different input formats and to allow the execution of the classification procedure. The analysis of results on synthetic and real datasets shows that SVM and Naïve Bayes outperform on average the other considered classifiers, although they do not provide a human interpretable classification model. Rule-based methods have slightly inferior classification performances, but deliver the species specific positions and nucleotide assignments. 
On synthetic data the supervised machine learning methods obtain superior classification performance with respect to the traditional DNA Barcode classification methods. On empirical data their classification performance is comparable to that of the other methods. Conclusions The classification analysis shows that supervised machine learning methods are promising candidates for successfully handling the DNA Barcode species classification problem, obtaining excellent performance. To conclude, a powerful tool to perform species identification is now available to the DNA Barcoding community. PMID:24721333
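The general workflow (sequence, to feature vector, to supervised classifier) can be sketched with synthetic sequences and 2-mer counts; the k-mer representation here is an assumption for illustration, not necessarily the encoding used in the paper:

```python
# Tiny hedged sketch: turn DNA Barcode strings into k-mer counts and
# classify with Naive Bayes. Sequences below are synthetic, not the
# benchmark datasets from the paper.
from itertools import product
import numpy as np
from sklearn.naive_bayes import MultinomialNB

K = 2
KMERS = ["".join(p) for p in product("ACGT", repeat=K)]

def kmer_counts(seq):
    """Count occurrences of each k-mer in a DNA string."""
    return [sum(seq[i:i + K] == km for i in range(len(seq) - K + 1))
            for km in KMERS]

# Two fake "species" with strongly biased base composition
rng = np.random.default_rng(5)
def make_seq(p):
    return "".join(rng.choice(list("ACGT"), p=p) for _ in range(120))

seqs = [make_seq([0.4, 0.1, 0.1, 0.4]) for _ in range(20)] + \
       [make_seq([0.1, 0.4, 0.4, 0.1]) for _ in range(20)]
labels = [0] * 20 + [1] * 20

X = np.array([kmer_counts(s) for s in seqs])
clf = MultinomialNB().fit(X, labels)
print(clf.score(X, labels))
```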

  16. An electronic nose for reliable measurement and correct classification of beverages.

    PubMed

    Mamat, Mazlina; Samad, Salina Abdul; Hannan, Mahammad A

    2011-01-01

    This paper reports the design of an electronic nose (E-nose) prototype for reliable measurement and correct classification of beverages. The prototype was developed and fabricated in the laboratory using commercially available metal oxide gas sensors and a temperature sensor. The repeatability, reproducibility and discriminative ability of the developed E-nose prototype were tested on odors emanating from different beverages such as blackcurrant juice, mango juice and orange juice. Repeated measurements of three beverages showed very high correlation (r > 0.97) between the same beverages, verifying repeatability. The prototype also produced highly correlated patterns (r > 0.97) in measurements of beverages using different sensor batches, verifying its reproducibility. The E-nose prototype also possessed good discriminative ability, producing different patterns for different beverages, different milk heat treatments (ultra-high temperature, pasteurization) and fresh and spoiled milks. The discriminative ability of the E-nose was evaluated using Principal Component Analysis and a Multi-Layer Perceptron Neural Network, with both methods showing good classification results.

  17. Long-term outcome in patients with germ cell tumours treated with POMB/ACE chemotherapy: comparison of commonly used classification systems of good and poor prognosis.

    PubMed Central

    Hitchins, R. N.; Newlands, E. S.; Smith, D. B.; Begent, R. H.; Rustin, G. J.; Bagshawe, K. D.

    1989-01-01

    We analysed outcome in 206 consecutive male patients with metastatic non-seminomatous germ cell tumour (NSGCT) of testicular or extragonadal origin treated with the POMB/ACE (cisplatin, vincristine, methotrexate, bleomycin, actinomycin D, cyclophosphamide, etoposide) regimen, after division into prognostic groups by commonly used clinical classification systems and definitions of adverse prognosis. The adverse prognostic groups of all classification systems and definitions examined showed similar, but only moderate, sensitivity (71-81%) and specificity (52-56%) in predicting death. A simple definition of poor prognosis based on raised initial levels of the serum tumour markers alpha fetoprotein (aFP) and human chorionic gonadotrophin (hCG) proved at least as useful (sensitivity 80%, specificity 55%) as other, more complicated systems in predicting failure to achieve long-term survival. Comparison of survival between patients treated with ultra-high-dose cisplatin-based combination chemotherapy and patients treated with POMB/ACE shows no advantage from the more toxic approach. This suggests that good results in adverse-prognosis patients can be achieved using conventional-dose regimens administered intensively. PMID:2467682

  18. An Electronic Nose for Reliable Measurement and Correct Classification of Beverages

    PubMed Central

    Mamat, Mazlina; Samad, Salina Abdul; Hannan, Mahammad A.

    2011-01-01

    This paper reports the design of an electronic nose (E-nose) prototype for reliable measurement and correct classification of beverages. The prototype was developed and fabricated in the laboratory using commercially available metal oxide gas sensors and a temperature sensor. The repeatability, reproducibility and discriminative ability of the developed E-nose prototype were tested on odors emanating from different beverages such as blackcurrant juice, mango juice and orange juice. Repeated measurements of three beverages showed very high correlation (r > 0.97) between the same beverages, verifying repeatability. The prototype also produced highly correlated patterns (r > 0.97) in measurements of beverages using different sensor batches, verifying its reproducibility. The E-nose prototype also possessed good discriminative ability, producing different patterns for different beverages, different milk heat treatments (ultra-high temperature, pasteurization) and fresh and spoiled milks. The discriminative ability of the E-nose was evaluated using Principal Component Analysis and a Multi-Layer Perceptron Neural Network, with both methods showing good classification results. PMID:22163964

  19. Automatic adventitious respiratory sound analysis: A systematic review.

    PubMed

    Pramono, Renard Xaviero Adhi; Bowyer, Stuart; Rodriguez-Villegas, Esther

    2017-01-01

    Automatic detection or classification of adventitious sounds is useful to assist physicians in diagnosing or monitoring diseases such as asthma, Chronic Obstructive Pulmonary Disease (COPD), and pneumonia. While computerised respiratory sound analysis, specifically for the detection or classification of adventitious sounds, has recently been the focus of an increasing number of studies, a standardised approach and comparison has not been well established. To provide a review of existing algorithms for the detection or classification of adventitious respiratory sounds. This systematic review provides a complete summary of methods used in the literature to give a baseline for future works. A systematic review of English articles published between 1938 and 2016, searched using the Scopus (1938-2016) and IEEExplore (1984-2016) databases. Additional articles were further obtained by references listed in the articles found. Search terms included adventitious sound detection, adventitious sound classification, abnormal respiratory sound detection, abnormal respiratory sound classification, wheeze detection, wheeze classification, crackle detection, crackle classification, rhonchi detection, rhonchi classification, stridor detection, stridor classification, pleural rub detection, pleural rub classification, squawk detection, and squawk classification. Only articles were included that focused on adventitious sound detection or classification, based on respiratory sounds, with performance reported and sufficient information provided to be approximately repeated. Investigators extracted data about the adventitious sound type analysed, approach and level of analysis, instrumentation or data source, location of sensor, amount of data obtained, data management, features, methods, and performance achieved. A total of 77 reports from the literature were included in this review. 
55 (71.43%) of the studies focused on wheeze, 40 (51.95%) on crackle, 9 (11.69%) on stridor, 9 (11.69%) on rhonchi, and 18 (23.38%) on other sounds such as pleural rub, squawk, as well as the pathology. Instrumentation used to collect data included microphones, stethoscopes, and accelerometers. Several references obtained data from online repositories or book audio CD companions. Detection or classification methods used varied from empirically determined thresholds to more complex machine learning techniques. Performance reported in the surveyed works was converted to accuracy measures for data synthesis. Direct comparison of the performance of surveyed works cannot be performed as the input data used by each was different. A standard validation method has not been established, resulting in different works using different methods and performance measure definitions. A review of the literature was performed to summarise different analysis approaches, features, and methods used for the analysis. The performance of recent studies showed a high agreement with conventional non-automatic identification. This suggests that automated adventitious sound detection or classification is a promising solution to overcome the limitations of conventional auscultation and to assist in the monitoring of relevant diseases.

  20. Automatic adventitious respiratory sound analysis: A systematic review

    PubMed Central

    Bowyer, Stuart; Rodriguez-Villegas, Esther

    2017-01-01

    Background Automatic detection or classification of adventitious sounds is useful to assist physicians in diagnosing or monitoring diseases such as asthma, Chronic Obstructive Pulmonary Disease (COPD), and pneumonia. While computerised respiratory sound analysis, specifically for the detection or classification of adventitious sounds, has recently been the focus of an increasing number of studies, a standardised approach and comparison has not been well established. Objective To provide a review of existing algorithms for the detection or classification of adventitious respiratory sounds. This systematic review provides a complete summary of methods used in the literature to give a baseline for future works. Data sources A systematic review of English articles published between 1938 and 2016, searched using the Scopus (1938-2016) and IEEExplore (1984-2016) databases. Additional articles were further obtained by references listed in the articles found. Search terms included adventitious sound detection, adventitious sound classification, abnormal respiratory sound detection, abnormal respiratory sound classification, wheeze detection, wheeze classification, crackle detection, crackle classification, rhonchi detection, rhonchi classification, stridor detection, stridor classification, pleural rub detection, pleural rub classification, squawk detection, and squawk classification. Study selection Only articles were included that focused on adventitious sound detection or classification, based on respiratory sounds, with performance reported and sufficient information provided to be approximately repeated. Data extraction Investigators extracted data about the adventitious sound type analysed, approach and level of analysis, instrumentation or data source, location of sensor, amount of data obtained, data management, features, methods, and performance achieved. Data synthesis A total of 77 reports from the literature were included in this review. 
55 (71.43%) of the studies focused on wheeze, 40 (51.95%) on crackle, 9 (11.69%) on stridor, 9 (11.69%) on rhonchi, and 18 (23.38%) on other sounds such as pleural rub, squawk, as well as the pathology. Instrumentation used to collect data included microphones, stethoscopes, and accelerometers. Several references obtained data from online repositories or book audio CD companions. Detection or classification methods used varied from empirically determined thresholds to more complex machine learning techniques. Performance reported in the surveyed works was converted to accuracy measures for data synthesis. Limitations Direct comparison of the performance of surveyed works cannot be performed as the input data used by each was different. A standard validation method has not been established, resulting in different works using different methods and performance measure definitions. Conclusion A review of the literature was performed to summarise different analysis approaches, features, and methods used for the analysis. The performance of recent studies showed a high agreement with conventional non-automatic identification. This suggests that automated adventitious sound detection or classification is a promising solution to overcome the limitations of conventional auscultation and to assist in the monitoring of relevant diseases. PMID:28552969

  1. Morphology classification of galaxies in CL 0939+4713 using a ground-based telescope image

    NASA Technical Reports Server (NTRS)

    Fukugita, M.; Doi, M.; Dressler, A.; Gunn, J. E.

    1995-01-01

    Morphological classification is studied for galaxies in the cluster CL 0939+4713 at z = 0.407 using simple photometric parameters obtained from a ground-based telescope image with seeing of 1-2 arcseconds full width at half maximum (FWHM). By plotting the galaxies in a plane of the concentration parameter versus mean surface brightness, we find a good correlation between location on the plane and galaxy colors, which are known to correlate with morphological type from a recent Hubble Space Telescope (HST) study. Using the present method, we expect a success rate of classification into early and late types of about 70% or possibly more.
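A concentration-type parameter of the kind used above can be illustrated with an assumed aperture-ratio definition (the paper's exact definition may differ):

```python
# Rough sketch of a concentration-type photometric parameter (an assumed
# definition, not necessarily the paper's): the ratio of flux inside an
# inner aperture to flux inside an outer aperture.
import numpy as np

def concentration(image, center, r_in, r_out):
    """Flux ratio between an inner and an outer circular aperture."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(yy - center[0], xx - center[1])
    return image[r <= r_in].sum() / image[r <= r_out].sum()

# Synthetic "galaxies": circular Gaussian light profiles
yy, xx = np.indices((64, 64))
compact = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 2.0 ** 2))
diffuse = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 8.0 ** 2))

c1 = concentration(compact, (32, 32), 5, 20)
c2 = concentration(diffuse, (32, 32), 5, 20)
print(c1 > c2)   # True: the compact profile is more concentrated
```

Early-type galaxies tend to have more centrally concentrated light profiles, which is why a concentration-brightness plane can separate early and late types statistically.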

  2. Driver behavior profiling: An investigation with different smartphone sensors and machine learning

    PubMed Central

    Ferreira, Jair; Carvalho, Eduardo; Ferreira, Bruno V.; de Souza, Cleidson; Suhara, Yoshihiko; Pentland, Alex

    2017-01-01

    Driver behavior impacts traffic safety, fuel/energy consumption and gas emissions. Driver behavior profiling tries to understand and positively impact driver behavior. Usually driver behavior profiling tasks involve automated collection of driving data and application of computer models to generate a classification that characterizes the driver aggressiveness profile. Different sensors and classification methods have been employed in this task, however, low-cost solutions and high performance are still research targets. This paper presents an investigation with different Android smartphone sensors, and classification algorithms in order to assess which sensor/method assembly enables classification with higher performance. The results show that specific combinations of sensors and intelligent methods allow classification performance improvement. PMID:28394925

  3. [Proposals for social class classification based on the Spanish National Classification of Occupations 2011 using neo-Weberian and neo-Marxist approaches].

    PubMed

    Domingo-Salvany, Antònia; Bacigalupe, Amaia; Carrasco, José Miguel; Espelt, Albert; Ferrando, Josep; Borrell, Carme

    2013-01-01

    In Spain, the new National Classification of Occupations (Clasificación Nacional de Ocupaciones [CNO-2011]) is substantially different to the 1994 edition, and requires adaptation of occupational social classes for use in studies of health inequalities. This article presents two proposals to measure social class: the new classification of occupational social class (CSO-SEE12), based on the CNO-2011 and a neo-Weberian perspective, and a social class classification based on a neo-Marxist approach. The CSO-SEE12 is the result of a detailed review of the CNO-2011 codes. In contrast, the neo-Marxist classification is derived from variables related to capital and organizational and skill assets. The proposed CSO-SEE12 consists of seven classes that can be grouped into a smaller number of categories according to study needs. The neo-Marxist classification consists of 12 categories in which home owners are divided into three categories based on capital goods and employed persons are grouped into nine categories composed of organizational and skill assets. These proposals are complemented by a proposed classification of educational level that integrates the various curricula in Spain and provides correspondences with the International Standard Classification of Education. Copyright © 2012 SESPAS. Published by Elsevier Espana. All rights reserved.

  4. Arrhythmia Evaluation in Wearable ECG Devices

    PubMed Central

    Sadrawi, Muammar; Lin, Chien-Hung; Hsieh, Yita; Kuo, Chia-Chun; Chien, Jen Chien; Haraikawa, Koichi; Abbod, Maysam F.; Shieh, Jiann-Shing

    2017-01-01

    This study evaluates four databases from PhysioNet: the American Heart Association database (AHADB), Creighton University Ventricular Tachyarrhythmia database (CUDB), MIT-BIH Arrhythmia database (MITDB), and MIT-BIH Noise Stress Test database (NSTDB). The ANSI/AAMI EC57:2012 standard is used to evaluate the algorithms for the supraventricular ectopic beat (SVEB), ventricular ectopic beat (VEB), atrial fibrillation (AF), and ventricular fibrillation (VF) via sensitivity, positive predictivity and false positive rate. Sample entropy, the fast Fourier transform (FFT), and a multilayer perceptron neural network with backpropagation training are selected for the integrated detection algorithms. In this study, the result for SVEB shows some improvement over a previous study that also utilized ANSI/AAMI EC57. Furthermore, the VEB sensitivity and positive predictivity gross evaluations are greater than 80%, except for the positive predictivity on the NSTDB database. For the AF gross evaluation on the MITDB database, the results show very good classification, excluding the episode sensitivity. For the VF gross evaluation, the episode sensitivity and positive predictivity for the AHADB, MITDB, and CUDB are greater than 80%, except for the MITDB episode positive predictivity, which is 75%. The achieved results show that the proposed integrated SVEB, VEB, AF, and VF detection algorithm classifies accurately according to ANSI/AAMI EC57:2012. In conclusion, the proposed integrated detection algorithm achieves good accuracy in comparison with previous studies. More advanced algorithms and hardware devices should be explored in future work on arrhythmia detection and evaluation. PMID:29068369

  5. [Telemedicine correlation in retinopathy of prematurity between experts and non-expert observers].

    PubMed

    Ossandón, D; Zanolli, M; López, J P; Stevenson, R; Agurto, R; Cartes, C

    2015-01-01

    To study the correlation between expert and non-expert observers in reporting images for the diagnosis of retinopathy of prematurity (ROP) in a telemedicine setting. A cross-sectional, multicenter study of 25 sets of images of patients screened for ROP. The images were evaluated by two experts in ROP and one non-expert and classified according to telemedicine classification, zone, stage, plus disease and the Ells referral criteria. The telemedicine classification was: no ROP, mild ROP, type 2 ROP, or ROP requiring treatment. The Ells referral criteria are met when at least one of the following is present: ROP in zone I, stage 3 in zone I or II, or plus disease. For statistical analysis, SPSS 16.0 was used; correlation was assessed with the kappa statistic. There was a high correlation between observers for the assessment of ROP stage (0.75; 0.54-0.88), plus disease (0.85; 0.71-0.92), and the Ells criteria (0.89; 0.83-1.0). However, inter-observer values were low for zone (0.41; 0.27-0.54) and telemedicine classification (0.43; 0.33-0.6). When telemedicine images were evaluated by examiners with different levels of expertise in ROP, the Ells criteria gave the best correlation. In addition, stage of disease and plus disease showed good correlation among observers. In contrast, the correlation between observers was low for zone and telemedicine classification. Copyright © 2014 Sociedad Española de Oftalmología. Published by Elsevier España, S.L.U. All rights reserved.

  6. Which sociodemographic factors are important on smoking behaviour of high school students? The contribution of classification and regression tree methodology in a broad epidemiological survey.

    PubMed

    Ozge, C; Toros, F; Bayramkaya, E; Camdeviren, H; Sasmaz, T

    2006-08-01

    The purpose of this study is to evaluate the most important sociodemographic factors affecting the smoking status of high school students, using a broad randomised epidemiological survey. Using an in-class, self-administered questionnaire on sociodemographic variables and smoking behaviour, a representative sample of 3304 students of the preparatory, 9th, 10th, and 11th grades from 22 randomly selected schools of Mersin was evaluated, and discriminative factors were determined using appropriate statistics. In addition to binary logistic regression analysis, the study evaluated the combined effects of these factors using classification and regression tree methodology, a relatively new statistical method. The data showed that 38% of the students reported lifetime smoking and 16.9% reported current smoking, with male predominance and increasing prevalence by age. Second-hand smoking was reported at a frequency of 74.3%, most often from fathers (56.6%). The factors significantly associated with current smoking in these age groups were larger household size, late birth rank, certain school types, low academic performance, increased second-hand smoking, and stress (especially reported as separation from a close friend or violence at home). Classification and regression tree methodology highlighted the importance of some neglected sociodemographic factors with good classification capacity. It was concluded that smoking, closely related to sociocultural factors, was a common problem in this young population, generating an important academic and social burden in youth life, and that with increasing data about this behaviour and new statistical methods, effective coping strategies could be developed.
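
    The splitting criterion at the heart of classification-and-regression-tree methodology can be illustrated with a minimal Gini-impurity search for the best threshold on a single feature. The toy data below are invented, not the survey's.

```python
def gini(labels):
    """Gini impurity of a list of class labels (0 = pure node)."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(values, labels):
    """Threshold minimizing the size-weighted Gini impurity of the two children."""
    best = (None, float("inf"))
    n = len(labels)
    for t in sorted(set(values)):
        left = [y for x, y in zip(values, labels) if x <= t]
        right = [y for x, y in zip(values, labels) if x > t]
        score = len(left) / n * gini(left) + len(right) / n * gini(right)
        if score < best[1]:
            best = (t, score)
    return best

# Age-like values; in this toy example smokers (1) cluster above 16.
ages = [14, 15, 15, 16, 17, 17, 18, 18]
smoker = [0, 0, 0, 0, 1, 1, 1, 1]
print(best_split(ages, smoker))  # (16, 0.0)
```

    A CART implementation applies this search recursively over all candidate variables, which is how interactions among sociodemographic factors emerge in the fitted tree.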

  7. Physicochemical properties of metal-doped activated carbons and relationship with their performance in the removal of SO2 and NO.

    PubMed

    Gao, Xiang; Liu, Shaojun; Zhang, Yang; Luo, Zhongyang; Cen, Kefa

    2011-04-15

    Several metal-doped activated carbons (Fe, Co, Ni, V, Mn, Cu and Ce) were prepared and characterized. The results of N(2) adsorption-desorption, X-ray diffraction, and X-ray photoelectron spectroscopy indicated that some metals (Cu and Fe) were partly reduced by carbon during preparation. Activity tests for the removal of SO(2) and the selective catalytic reduction of NO with ammonia were carried out. Owing to their different physicochemical properties, different pathways for SO(2) removal were proposed, i.e., catalytic oxidation, direct reaction, and adsorption. This classification depended on the standard reduction potentials of the metal redox pairs. Samples impregnated with V, Ce and Cu showed good activity for NO reduction by NH(3), which was also ascribed to the reduction potential values of the metal redox pairs. Ce seemed to be a promising alternative to V because of its higher activity in NO reduction and its nontoxicity. A metal cation that can easily convert between two valences, such as V and Cu, seemed to be crucial to good performance in both SO(2) and NO removal. Copyright © 2011 Elsevier B.V. All rights reserved.

  8. Classification Techniques for Digital Map Compression

    DTIC Science & Technology

    1989-03-01

    classification improved the performance of the K-means classification algorithm resulting in a compression of 8.06:1 with Lempel-Ziv coding. Run-length coding... compression performance are run-length coding [2], [8] and Lempel-Ziv coding [10], [11]. These techniques are chosen because they are most efficient when...investigated. After the classification, some standard file compression methods, such as Lempel-Ziv and run-length encoding were applied to the
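
    Run-length coding, one of the compression methods named in this record, is simple to sketch; this is a generic illustration, not the report's implementation.

```python
def rle_encode(data):
    """Encode a sequence as (symbol, run_length) pairs."""
    out = []
    for sym in data:
        if out and out[-1][0] == sym:
            out[-1] = (sym, out[-1][1] + 1)   # extend the current run
        else:
            out.append((sym, 1))              # start a new run
    return out

def rle_decode(pairs):
    """Invert rle_encode."""
    return [sym for sym, count in pairs for _ in range(count)]

row = [0, 0, 0, 1, 1, 0, 0, 0, 0]  # a map row with long constant runs
encoded = rle_encode(row)
print(encoded)  # [(0, 3), (1, 2), (0, 4)]
```

    Classification helps compression here because mapping pixels to a small set of class labels lengthens the constant runs that run-length and Lempel-Ziv coding exploit.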

  9. Predicting discharge mortality after acute ischemic stroke using balanced data.

    PubMed

    Ho, King Chung; Speier, William; El-Saden, Suzie; Liebeskind, David S; Saver, Jeffery L; Bui, Alex A T; Arnold, Corey W

    2014-01-01

    Several models have been developed to predict stroke outcomes (e.g., stroke mortality, patient dependence, etc.) in recent decades. However, there is little discussion of the problem of between-class imbalance in stroke datasets, which leads to prediction bias and decreased performance. In this paper, we demonstrate the use of the Synthetic Minority Over-sampling Technique to overcome such problems. We also compare state-of-the-art machine learning methods and construct a six-variable support vector machine (SVM) model to predict stroke mortality at discharge. Finally, we discuss how the identification of a reduced feature set allowed us to identify additional cases in our research database for validation testing. Our classifier achieved a c-statistic of 0.865 on the cross-validated dataset, demonstrating good classification performance using a reduced set of variables.
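
    SMOTE generates synthetic minority examples by interpolating between a minority sample and one of its nearest minority-class neighbors. A bare-bones numpy sketch of that idea (not the paper's code; parameters and data are invented):

```python
import numpy as np

def smote(X_min, n_new, k=3, seed=0):
    """Create n_new synthetic samples from minority-class matrix X_min."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # k nearest minority neighbors of sample i (excluding itself)
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbors = np.argsort(d)[1:k + 1]
        j = rng.choice(neighbors)
        gap = rng.random()                        # interpolation factor in [0, 1)
        out.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(out)

minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
synthetic = smote(minority, n_new=6)
print(synthetic.shape)  # (6, 2)
```

    Because each synthetic point lies on a segment between two real minority samples, the method enlarges the minority class without simply duplicating records, which is what reduces the classifier's bias toward the majority class.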

  10. Interobserver agreement in CTG interpretation using the 2015 FIGO guidelines for intrapartum fetal monitoring.

    PubMed

    Rei, Mariana; Tavares, Sara; Pinto, Pedro; Machado, Ana P; Monteiro, Sofia; Costa, Antónia; Costa-Santos, Cristina; Bernardes, João; Ayres-De-Campos, Diogo

    2016-10-01

    Visual analysis of cardiotocographic (CTG) tracings has been shown to be prone to poor intra- and interobserver agreement when several interpretation guidelines are used, and this may have an important impact on the technology's performance. The aim of this study was to evaluate agreement in CTG interpretation using the new 2015 FIGO guidelines on intrapartum fetal monitoring. A pre-existing database of intrapartum CTG tracings was used to sequentially select 151 cases acquired with a fetal electrode, with duration exceeding 60 minutes and signal loss less than 15%. These tracings were presented to six clinicians, three with more than 5 years' experience in the labor ward and three with 5 or fewer years' experience. Observers were asked to evaluate the tracings independently, to assess the basic CTG features: baseline, variability, accelerations, decelerations, sinusoidal pattern, and tachysystole, and to classify each tracing as normal, suspicious, or pathologic, according to the 2015 FIGO guidelines on intrapartum fetal monitoring. Agreement between observers was evaluated using proportions of agreement (Pa) with 95% confidence intervals (95% CI). A good interobserver agreement was found in the evaluation of most CTG features, but not for bradycardia, reduced variability, saltatory pattern, absence of accelerations, or absence of decelerations. For baseline classification Pa was 0.85 [0.82-0.90], for variability 0.82 [0.78-0.85], for accelerations 0.72 [0.68-0.75], for tachysystole 0.77 [0.74-0.81], for decelerations 0.92 [0.90-0.95], for variable decelerations 0.62 [0.58-0.65], for late decelerations 0.63 [0.59-0.66], for repetitive decelerations 0.73 [0.69-0.78], and for prolonged decelerations 0.81 [0.77-0.85]. For overall CTG classification, Pa was 0.60 [0.56-0.64]; for classification as normal it was 0.67 [0.61-0.72], for suspicious 0.54 [0.48-0.60], and for pathologic 0.59 [0.51-0.66].
No differences in agreement according to the level of expertise were observed, except in the identification of accelerations, where agreement was better in the more experienced group. A good interobserver agreement was found in the evaluation of most CTG features and in overall tracing classification. Results were better than those reported in previous studies evaluating agreement in overall tracing classification. Observer experience did not appear to play a role in agreement. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  11. A Novel Hybrid Classification Model of Genetic Algorithms, Modified k-Nearest Neighbor and Developed Backpropagation Neural Network

    PubMed Central

    Salari, Nader; Shohaimi, Shamarina; Najafi, Farid; Nallappan, Meenakshii; Karishnarajah, Isthrinayagy

    2014-01-01

    Among numerous artificial intelligence approaches, k-Nearest Neighbor algorithms, genetic algorithms, and artificial neural networks are considered the most common and effective methods for classification problems in numerous studies. In the present study, the results of the implementation of a novel hybrid feature selection-classification model using the above-mentioned methods are presented. The purpose is to benefit from the synergies obtained by combining these technologies in the development of classification models. Such a combination creates an opportunity to exploit the strengths of each algorithm while compensating for their deficiencies. To develop the proposed model, with the aim of obtaining the best array of features, feature ranking techniques such as Fisher's discriminant ratio and class separability criteria were first used to prioritize features. Second, the resulting arrays of top-ranked features were used as the initial population of a genetic algorithm to produce optimum arrays of features. Third, using a modified k-Nearest Neighbor method as well as an improved backpropagation neural network, the classification process was carried out on the optimum feature arrays selected by the genetic algorithm. The performance of the proposed model was compared with thirteen well-known classification models on seven datasets. Furthermore, statistical analysis was performed using the Friedman test followed by post-hoc tests. The experimental findings indicated that the novel hybrid model achieved significantly better classification performance than all 13 classification methods. Finally, the performance of the proposed model was benchmarked against the best results reported for state-of-the-art classifiers in terms of classification accuracy on the same datasets.
The findings of this comprehensive comparative study revealed that the classification accuracy of the proposed model is desirable, promising, and competitive with existing state-of-the-art classification models. PMID:25419659
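
    The genetic-algorithm stage of such a pipeline can be sketched generically: bitmask chromosomes over the features, tournament selection, one-point crossover, and bit-flip mutation. The fitness function below is a stand-in (a weighted sum rewarding "informative" features), not the paper's classifier-based fitness, and all numbers are invented.

```python
import random

def ga_feature_select(weights, pop_size=30, gens=60, seed=1):
    """Evolve bitmasks maximizing the summed weight of selected features."""
    random.seed(seed)
    n = len(weights)
    def fitness(mask):
        return sum(w for w, bit in zip(weights, mask) if bit)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        new_pop = []
        for _ in range(pop_size):
            p1 = max(random.sample(pop, 3), key=fitness)   # tournament selection
            p2 = max(random.sample(pop, 3), key=fitness)
            cut = random.randrange(1, n)                    # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < 0.2:                       # bit-flip mutation
                k = random.randrange(n)
                child[k] ^= 1
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

# Positive weight = informative feature, negative = noise (toy ground truth).
weights = [0.9, -0.5, 0.7, -0.3, 0.8, -0.6, 0.4, -0.2]
best = ga_feature_select(weights)
print(best)
```

    In the real model the fitness would be the cross-validated accuracy of the modified kNN or neural classifier on the features the mask selects; the evolutionary machinery is the same.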

  12. Prediction of Gestational Diabetes through NMR Metabolomics of Maternal Blood.

    PubMed

    Pinto, Joana; Almeida, Lara M; Martins, Ana S; Duarte, Daniela; Barros, António S; Galhano, Eulália; Pita, Cristina; Almeida, Maria do Céu; Carreira, Isabel M; Gil, Ana M

    2015-06-05

    Metabolic biomarkers of pre- and postdiagnosis gestational diabetes mellitus (GDM) were sought using nuclear magnetic resonance (NMR) metabolomics of maternal plasma and corresponding lipid extracts. Metabolite differences between controls and disease were identified through multivariate analysis of variable-selected (1)H NMR spectra. For postdiagnosis GDM, partial least squares regression identified metabolites with higher dependence on normal gestational-age evolution. Variable selection of NMR spectra produced good classification models for both pre- and postdiagnosis GDM. Prediagnosis GDM was accompanied by a cholesterol increase and minor increases in lipoproteins (plasma), fatty acids, and triglycerides (extracts). Small-metabolite changes comprised variations in glucose (upregulated), amino acids, betaine, urea, creatine, and metabolites related to gut microflora. Most changes were enhanced upon GDM diagnosis, in addition to newly observed changes in low-Mw compounds. GDM prediction seems possible by exploiting multivariate profile changes rather than a set of univariate changes. Postdiagnosis GDM is successfully classified using a 26-resonance plasma biomarker. Plasma and extracts display comparable classification performance, the former enabling direct and more rapid analysis. Results and putative biochemical hypotheses require further confirmation in larger cohorts of distinct ethnicities.

  13. Sentiments Analysis of Reviews Based on ARCNN Model

    NASA Astrophysics Data System (ADS)

    Xu, Xiaoyu; Xu, Ming; Xu, Jian; Zheng, Ning; Yang, Tao

    2017-10-01

    The sentiment analysis of product reviews is designed to help customers understand the status of a product. Traditional methods of sentiment analysis rely on the input of a fixed-length feature vector, which is a performance bottleneck of the basic encoder-decoder (codec) architecture. In this paper, we propose an attention mechanism combined with a BRNN-CNN model, referred to as the ARCNN model. To capture the semantic relations between words and avoid the curse of dimensionality, we use the GloVe algorithm to train the vector representations of words. The ARCNN model is then proposed to deal with the problem of training deep features. Specifically, the BRNN handles variable-length input and preserves time-series information, while the CNN learns deeper semantic connections. Moreover, the attention mechanism automatically learns from the data and optimizes the allocation of weights. Finally, a softmax classifier completes the sentiment classification of reviews. Experiments show that the proposed method improves the accuracy of sentiment classification compared with benchmark methods.
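
    The attention mechanism's weighting step can be sketched in numpy: score each hidden state, softmax the scores, and take the weighted average. This is a generic illustration; the dimensions and the random scoring vector are arbitrary stand-ins for learned parameters, not the ARCNN model itself.

```python
import numpy as np

def attention_pool(H, v):
    """Weight hidden states H (T x d) by softmax(tanh(H) @ v) and average them."""
    scores = np.tanh(H) @ v                           # one score per time step
    scores -= scores.max()                            # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()     # attention weights, sum to 1
    return alpha, alpha @ H                           # weights and pooled vector

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))    # 5 time steps, hidden size 8 (e.g., BRNN outputs)
v = rng.normal(size=8)         # learned scoring vector (random stand-in here)
alpha, pooled = attention_pool(H, v)
print(alpha.sum(), pooled.shape)
```

    The pooled vector replaces the fixed-length bottleneck: informative time steps receive larger weights, and the result feeds the downstream softmax classifier.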

  14. Assessing the varietal origin of extra-virgin olive oil using liquid chromatography fingerprints of phenolic compound, data fusion and chemometrics.

    PubMed

    Bajoub, Aadil; Medina-Rodríguez, Santiago; Gómez-Romero, María; Ajal, El Amine; Bagur-González, María Gracia; Fernández-Gutiérrez, Alberto; Carrasco-Pancorbo, Alegría

    2017-01-15

    High Performance Liquid Chromatography (HPLC) with diode array (DAD) and fluorescence (FLD) detection was used to acquire the fingerprints of the phenolic fraction of monovarietal extra-virgin olive oils (extra-VOOs) collected over three consecutive crop seasons (2011/2012-2013/2014). The chromatographic fingerprints of 140 extra-VOO samples, processed from olive fruits of seven olive varieties, were recorded and statistically treated for varietal authentication purposes. First, the DAD and FLD chromatographic-fingerprint datasets were processed separately and, subsequently, were joined using "low-level" and "mid-level" data fusion methods. After a preliminary examination by principal component analysis (PCA), three supervised pattern recognition techniques, Partial Least Squares Discriminant Analysis (PLS-DA), Soft Independent Modeling of Class Analogies (SIMCA) and K-Nearest Neighbors (k-NN), were applied to the four chromatographic-fingerprinting matrices. The classification models built were very sensitive and selective, showing considerably good recognition and prediction abilities. The combination "chromatographic dataset + chemometric technique" allowing the most accurate classification for each monovarietal extra-VOO was highlighted. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Discrimination of lymphoma using laser-induced breakdown spectroscopy conducted on whole blood samples

    PubMed Central

    Chen, Xue; Li, Xiaohui; Yang, Sibo; Yu, Xin; Liu, Aichun

    2018-01-01

    Lymphoma is a significant cancer that affects the human lymphatic and hematopoietic systems. In this work, discrimination of lymphoma using laser-induced breakdown spectroscopy (LIBS) conducted on whole blood samples is presented. The whole blood samples collected from lymphoma patients and healthy controls are deposited onto standard quantitative filter papers and ablated with a 1064 nm Q-switched Nd:YAG laser. 16 atomic and ionic emission lines of calcium (Ca), iron (Fe), magnesium (Mg), potassium (K) and sodium (Na) are selected to discriminate the cancer disease. Chemometric methods, including principal component analysis (PCA), linear discriminant analysis (LDA) classification, and k nearest neighbor (kNN) classification are used to build the discrimination models. Both LDA and kNN models have achieved very good discrimination performances for lymphoma, with an accuracy of over 99.7%, a sensitivity of over 0.996, and a specificity of over 0.997. These results demonstrate that the whole-blood-based LIBS technique in combination with chemometric methods can serve as a fast, less invasive, and accurate method for detection and discrimination of human malignancies. PMID:29541503
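
    The kNN classification used alongside LDA in this record is straightforward to sketch on toy feature vectors. The data below are invented stand-ins for emission-line intensities, not the LIBS measurements.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Majority vote among the k nearest training samples (Euclidean distance)."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]

# Two well-separated clusters standing in for patient vs. control spectra.
X = np.array([[1.0, 1.0], [1.2, 0.9], [0.9, 1.1],
              [5.0, 5.0], [5.1, 4.9], [4.8, 5.2]])
y = ["control", "control", "control", "patient", "patient", "patient"]
print(knn_predict(X, y, np.array([1.1, 1.0])))  # control
```

    In the study, the inputs would be the 16 selected Ca, Fe, Mg, K, and Na line intensities (often after PCA), and k and the distance metric would be tuned by cross-validation.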

  16. 21 CFR 886.1400 - Maddox lens.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... intended to be handheld or placed in a trial frame to evaluate eye muscle dysfunction. (b) Classification... the current good manufacturing practice requirements of the quality system regulation in part 820 of...

  17. 21 CFR 886.1400 - Maddox lens.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... intended to be handheld or placed in a trial frame to evaluate eye muscle dysfunction. (b) Classification... the current good manufacturing practice requirements of the quality system regulation in part 820 of...

  18. 21 CFR 886.1400 - Maddox lens.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... intended to be handheld or placed in a trial frame to evaluate eye muscle dysfunction. (b) Classification... the current good manufacturing practice requirements of the quality system regulation in part 820 of...

  19. 21 CFR 886.1400 - Maddox lens.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... intended to be handheld or placed in a trial frame to evaluate eye muscle dysfunction. (b) Classification... the current good manufacturing practice requirements of the quality system regulation in part 820 of...

  20. 21 CFR 886.1400 - Maddox lens.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... intended to be handheld or placed in a trial frame to evaluate eye muscle dysfunction. (b) Classification... the current good manufacturing practice requirements of the quality system regulation in part 820 of...

  1. Relationship between Functional Classification Levels and Anaerobic Performance of Wheelchair Basketball Athletes

    ERIC Educational Resources Information Center

    Molik, Bartosz; Laskin, James J.; Kosmol, Andrzej; Skucas, Kestas; Bida, Urszula

    2010-01-01

    Wheelchair basketball athletes are classified using the International Wheelchair Basketball Federation (IWBF) functional classification system. The purpose of this study was to evaluate the relationship between upper extremity anaerobic performance (AnP) and all functional classification levels in wheelchair basketball. Ninety-seven male athletes…

  2. Training sample selection based on self-training for liver cirrhosis classification using ultrasound images

    NASA Astrophysics Data System (ADS)

    Fujita, Yusuke; Mitani, Yoshihiro; Hamamoto, Yoshihiko; Segawa, Makoto; Terai, Shuji; Sakaida, Isao

    2017-03-01

    Ultrasound imaging is a popular and non-invasive tool used in the diagnosis of liver disease. Cirrhosis is a chronic liver disease that can advance to liver cancer. Early detection and appropriate treatment are crucial to prevent liver cancer. However, ultrasound image analysis is very challenging because of the low signal-to-noise ratio of ultrasound images. To achieve higher classification performance, the selection of training regions of interest (ROIs), which directly affects classification accuracy, is very important. The purpose of our study is cirrhosis detection with high accuracy using liver ultrasound images. In our previous work, we proposed training-ROI selection by MILBoost and multiple-ROI classification based on the product rule to achieve high classification performance. In this article, we propose a self-training method to select training ROIs effectively. Evaluation experiments were performed to assess the effect of self-training, using both manually and automatically selected ROIs. Experimental results show that self-training on manually selected ROIs achieved higher classification performance than the other approaches, including our conventional methods. Manual ROI definition and sample selection are important for improving classification accuracy in cirrhosis detection using ultrasound images.
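
    Self-training in its generic form: fit on the labeled data, pseudo-label the most confident unlabeled samples, add them to the training set, and refit. A minimal nearest-centroid sketch on invented 2-D data, using the distance margin as confidence; this illustrates the loop, not the ultrasound pipeline itself.

```python
import numpy as np

def self_train(X_lab, y_lab, X_unlab, rounds=3, per_round=2):
    """Nearest-centroid self-training: pseudo-label the most confident pool samples."""
    X, y = list(X_lab), list(y_lab)
    pool = list(X_unlab)
    for _ in range(rounds):
        if not pool:
            break
        Xa, ya = np.array(X), np.array(y)
        c0 = Xa[ya == 0].mean(axis=0)            # class centroids from current set
        c1 = Xa[ya == 1].mean(axis=0)
        P = np.array(pool)
        d0 = np.linalg.norm(P - c0, axis=1)
        d1 = np.linalg.norm(P - c1, axis=1)
        conf = np.abs(d0 - d1)                   # distance margin = confidence
        chosen = np.argsort(conf)[::-1][:per_round]
        for i in chosen:
            X.append(pool[i])
            y.append(int(d1[i] < d0[i]))         # pseudo-label by nearer centroid
        pool = [p for j, p in enumerate(pool) if j not in set(chosen)]
    return np.array(X), np.array(y)

X_lab = np.array([[0.0, 0.0], [0.5, 0.2], [4.0, 4.0], [3.8, 4.1]])
y_lab = [0, 0, 1, 1]
X_unlab = np.array([[0.2, 0.1], [4.2, 3.9], [0.1, 0.4], [3.9, 4.2]])
Xf, yf = self_train(X_lab, y_lab, X_unlab, rounds=2, per_round=2)
print(len(yf))  # 8
```

    In the paper the base learner is an image classifier over ROI features rather than a centroid rule, but the confidence-ranked pseudo-labeling loop is the same idea.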

  3. Use of Log-Linear Models in Classification Problems.

    DTIC Science & Technology

    1981-12-01

    polynomials. The second example involves infant hypoxic trauma, and many cells are empty. The existence conditions are used to find a model for which estimates of cell frequencies exist and are in good agreement with the observed data. Key Words: Classification problem, log-difference models, minimum 8...variates define k states, which are labeled consecutively. Thus, while MB define cells in their tables by an I-vector Z, we simply take Z to be a

  4. Low-Power Analog Processing for Sensing Applications: Low-Frequency Harmonic Signal Classification

    PubMed Central

    White, Daniel J.; William, Peter E.; Hoffman, Michael W.; Balkir, Sina

    2013-01-01

    A low-power analog sensor front-end is described that reduces the energy required to extract environmental sensing spectral features without using the Fast Fourier Transform (FFT) or wavelet transforms. An Analog Harmonic Transform (AHT) allows selection of only the features needed by the back-end, in contrast to the FFT, where all coefficients must be calculated simultaneously. We also show that the FFT coefficients can be easily calculated from the AHT results by a simple back-substitution. The scheme is tailored for low-power, parallel analog implementation in an integrated circuit (IC). Two different applications are tested with an ideal front-end model and compared to existing studies with the same data sets. Results from the military vehicle classification and machine-bearing fault identification applications show that the front-end suits a wide range of harmonic signal sources. Analog-related errors are modeled to evaluate the feasibility of an IC implementation, and to set design parameters for it, so as to maintain good system-level performance. Design of a preliminary transistor-level integrator circuit in a 0.13 μm complementary metal-oxide-silicon (CMOS) integrated circuit process showed the ability to use online self-calibration to reduce fabrication errors to a sufficiently low level. Estimated power dissipation is about three orders of magnitude less than similar vehicle classification systems that use commercially available FFT spectral extraction. PMID:23892765

  5. Efficient feature selection using a hybrid algorithm for the task of epileptic seizure detection

    NASA Astrophysics Data System (ADS)

    Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline

    2014-07-01

    Feature selection is a very important aspect of machine learning. It entails the search for an optimal subset from a very large data set with a high-dimensional feature space. Apart from eliminating redundant features and reducing computational cost, a good selection of features also leads to higher prediction and classification accuracy. In this paper, an efficient feature selection technique is introduced for the task of epileptic seizure detection. The raw data are electroencephalography (EEG) signals. Using the discrete wavelet transform, the biomedical signals were decomposed into several sets of wavelet coefficients. To reduce the dimension of these wavelet coefficients, a feature selection method that combines the strengths of both filter and wrapper methods is proposed. Principal component analysis (PCA) is used as the filter method. As the wrapper method, the evolutionary harmony search (HS) algorithm is employed. This metaheuristic method aims at finding the best discriminating set of features from the original data. The obtained features were then used as input for an automated classifier, namely wavelet neural networks (WNNs). The WNNs model was trained to perform a binary classification task, that is, to determine whether a given EEG signal was normal or epileptic. For comparison purposes, different sets of features were also used as input. Simulation results showed that the WNNs that used the features chosen by the hybrid algorithm achieved the highest overall classification accuracy.
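
    One level of the discrete wavelet transform used for decomposition can be sketched with the Haar wavelet. This is a generic illustration; the record does not specify the mother wavelet, so Haar is an assumption made here for simplicity.

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar DWT: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass: scaled local averages
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass: scaled local differences
    return approx, detail

signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 8.0, 2.0, 0.0])
a, d = haar_dwt(signal)
print(a, d)
```

    Applying the transform recursively to the approximation band yields the multi-level coefficient sets whose dimensionality the PCA filter and harmony-search wrapper then reduce.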

  6. The use of δ(2)H and δ(18)O isotopic analyses combined with chemometrics as a traceability tool for the geographical origin of bell peppers.

    PubMed

    de Rijke, E; Schoorl, J C; Cerli, C; Vonhof, H B; Verdegaal, S J A; Vivó-Truyols, G; Lopatka, M; Dekter, R; Bakker, D; Sjerps, M J; Ebskamp, M; de Koster, C G

    2016-08-01

    Two approaches were investigated to discriminate between bell peppers of different geographic origins. Firstly, δ(18)O fruit water and corresponding source water were analyzed and correlated to the regional GNIP (Global Network of Isotopes in Precipitation) values. The water and GNIP data showed good correlation with the pepper data, with constant isotope fractionation of about -4. Secondly, compound-specific stable hydrogen isotope data were used for classification. Using n-alkane fingerprinting data, both linear discriminant analysis (LDA) and a likelihood-based classification, using the kernel-density smoothed data, were developed to discriminate between peppers from different origins. Both methods were evaluated using the δ(2)H values and the relative composition of n-alkanes as variables. Misclassification rates were calculated using a Monte-Carlo 5-fold cross-validation procedure. Comparable overall classification performance was achieved; however, the two methods showed sensitivity to different samples. The combined values of δ(2)H IRMS and complementary information regarding the relative abundance of four main alkanes in bell pepper fruit water have proven effective for geographic origin discrimination. Evaluation of the rarity of observing particular ranges for these characteristics could be used to make quantitative assertions regarding the geographic origin of bell peppers and, therefore, have a role in verifying compliance with labeling of geographical origin. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Vertical and Horizontal Jump Capacity in International Cerebral Palsy Football Players.

    PubMed

    Reina, Raúl; Iturricastillo, Aitor; Sabido, Rafael; Campayo-Piernas, Maria; Yanci, Javier

    2018-05-01

    To evaluate the reliability and validity of vertical and horizontal jump tests in football players with cerebral palsy (FPCP) and to analyze the jump performance differences between current International Federation for Cerebral Palsy Football functional classes (ie, FT5-FT8). A total of 132 international parafootballers (25.8 [6.7] y; 70.0 [9.1] kg; 175.7 [7.3] cm; 22.8 [2.8] kg·m -2 ; and 10.7 [7.5] y training experience) participated in the study. The participants were classified according to the International Federation for Cerebral Palsy Football classification rules, and a group of 39 players without cerebral palsy was included in the study as a control group. Football players' vertical and horizontal jump performance was assessed. All the tests showed good to excellent relative intrasession reliability scores, both in FPCP and in the control group (intraclass correlation = .78-.97, SEM < 10.5%). Significant between-groups differences (P < .001) were obtained in the countermovement jump, standing broad jump, 4 bounds for distance, and triple hop for distance dominant leg and nondominant leg. The control group performed higher/farther jumps with regard to all the FPCP classes, obtaining significant differences and moderate to large effect sizes (ESs) (.85 < ES < 5.54, P < .01). Players in FT8 class (less severe impairments) had significantly higher scores in all the jump tests than players in the lower classes (ES = moderate to large, P < .01). The vertical and horizontal jump tests performed in this study could be applied to the classification procedures and protocols for FPCP.

  8. Classifier Subset Selection for the Stacked Generalization Method Applied to Emotion Recognition in Speech

    PubMed Central

    Álvarez, Aitor; Sierra, Basilio; Arruti, Andoni; López-Gil, Juan-Miguel; Garay-Vitoria, Nestor

    2015-01-01

    In this paper, a new supervised classification paradigm, called classifier subset selection for stacked generalization (CSS stacking), is presented to deal with speech emotion recognition. The new approach consists of an improvement of a bi-level multi-classifier system known as stacking generalization by means of an integration of an estimation of distribution algorithm (EDA) in the first layer to select the optimal subset from the standard base classifiers. The good performance of the proposed new paradigm was demonstrated over different configurations and datasets. First, several CSS stacking classifiers were constructed on the RekEmozio dataset, using some specific standard base classifiers and a total of 123 spectral, quality and prosodic features computed using in-house feature extraction algorithms. These initial CSS stacking classifiers were compared to other multi-classifier systems and the employed standard classifiers built on the same set of speech features. Then, new CSS stacking classifiers were built on RekEmozio using a different set of both acoustic parameters (extended version of the Geneva Minimalistic Acoustic Parameter Set (eGeMAPS)) and standard classifiers and employing the best meta-classifier of the initial experiments. The performance of these two CSS stacking classifiers was evaluated and compared. Finally, the new paradigm was tested on the well-known Berlin Emotional Speech database. We compared the performance of single, standard stacking and CSS stacking systems using the same parametrization of the second phase. All of the classifications were performed at the categorical level, including the six primary emotions plus the neutral one. PMID:26712757

  9. The Neuropsychology of Male Adults With High-Functioning Autism or Asperger Syndrome†

    PubMed Central

    Wilson, C Ellie; Happé, Francesca; Wheelwright, Sally J; Ecker, Christine; Lombardo, Michael V; Johnston, Patrick; Daly, Eileen; Murphy, Clodagh M; Spain, Debbie; Lai, Meng-Chuan; Chakrabarti, Bhismadev; Sauter, Disa A; Baron-Cohen, Simon; Murphy, Declan G M

    2014-01-01

    Autism Spectrum Disorder (ASD) is diagnosed on the basis of behavioral symptoms, but cognitive abilities may also be useful in characterizing individuals with ASD. One hundred seventy-eight high-functioning male adults, half with ASD and half without, completed tasks assessing IQ, a broad range of cognitive skills, and autistic and comorbid symptomatology. The aims of the study were, first, to determine whether significant differences existed between cases and controls on cognitive tasks, and whether cognitive profiles, derived using a multivariate classification method with data from multiple cognitive tasks, could distinguish between the two groups. Second, to establish whether cognitive skill level was correlated with degree of autistic symptom severity, and third, whether cognitive skill level was correlated with degree of comorbid psychopathology. Fourth, cognitive characteristics of individuals with Asperger Syndrome (AS) and high-functioning autism (HFA) were compared. After controlling for IQ, ASD and control groups scored significantly differently on tasks of social cognition, motor performance, and executive function (P's < 0.05). To investigate cognitive profiles, 12 variables were entered into a support vector machine (SVM), which achieved good classification accuracy (81%) at a level significantly better than chance (P < 0.0001). After correcting for multiple correlations, there were no significant associations between cognitive performance and severity of either autistic or comorbid symptomatology. There were no significant differences between AS and HFA groups on the cognitive tasks. Cognitive classification models could be a useful aid to the diagnostic process when used in conjunction with other data sources—including clinical history. Autism Res 2014, 7: 568–581. © 2014 International Society for Autism Research, Wiley Periodicals, Inc. PMID:24903974

  10. Gene selection for cancer classification with the help of bees.

    PubMed

    Moosa, Johra Muhammad; Shakur, Rameen; Kaykobad, Mohammad; Rahman, Mohammad Sohel

    2016-08-10

    Development of biologically relevant models from gene expression data, notably microarray data, has become a topic of great interest in bioinformatics, clinical genetics, and oncology. Only a small number of genes, compared with the total number explored, possess a significant correlation with a given phenotype. Gene selection enables researchers to gain substantial insight into the genetic nature of the disease and the mechanisms responsible for it. Besides improving the performance of cancer classification, it can also cut the time and cost of medical diagnosis. This study presents a modified Artificial Bee Colony (ABC) algorithm that selects a minimal number of genes deemed significant for cancer while improving predictive accuracy. The search equation of ABC is believed to be good at exploration but poor at exploitation. To overcome this limitation, we modified the ABC algorithm by incorporating pheromones, a major component of the Ant Colony Optimization (ACO) algorithm, and a new operation in which successive bees communicate to share their findings. The proposed algorithm is evaluated on a suite of ten publicly available datasets, with parameters tuned systematically on one of them. The results are compared with other works that used the same datasets, and the performance of the proposed method is shown to be superior: it provides gene subsets that lead to more accurate classification while the number of selected genes is smaller. The proposed modified Artificial Bee Colony algorithm could conceivably be applied to problems in other areas as well.
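The modified ABC algorithm is not specified beyond this abstract, but the basic employed-bee loop it builds on can be sketched as wrapper-style gene selection on synthetic data. The dataset, fitness function, population size, and single-flip move are all illustrative assumptions, and the pheromone and bee-communication extensions described in the paper are omitted:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

# Synthetic expression matrix: 60 samples x 200 genes, 5 informative.
X = rng.normal(size=(60, 200))
y = np.repeat([0, 1], 30)
X[y == 1, :5] += 1.5

def fitness(mask):
    """Accuracy of a kNN classifier on the selected genes, lightly
    penalised by subset size (rewarding smaller gene sets)."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(3), X[:, mask], y, cv=3).mean()
    return acc - 0.001 * mask.sum()

def neighbour(mask):
    """Flip one random gene in/out of the subset (the 'employed bee' move)."""
    m = mask.copy()
    g = rng.integers(len(m))
    m[g] = ~m[g]
    return m

# Employed-bee phase over a small population of random gene subsets.
pop = [rng.random(200) < 0.05 for _ in range(10)]
fit = [fitness(m) for m in pop]
for _ in range(30):
    for i in range(len(pop)):
        cand = neighbour(pop[i])
        f = fitness(cand)
        if f > fit[i]:            # greedy replacement, as in basic ABC
            pop[i], fit[i] = cand, f

best = pop[int(np.argmax(fit))]
print("selected genes:", int(best.sum()), "best fitness:", round(max(fit), 3))
```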

  11. A novel latent gaussian copula framework for modeling spatial correlation in quantized SAR imagery with applications to ATR

    NASA Astrophysics Data System (ADS)

    Thelen, Brian T.; Xique, Ismael J.; Burns, Joseph W.; Goley, G. Steven; Nolan, Adam R.; Benson, Jonathan W.

    2017-04-01

    With all of the new remote sensing modalities available, and with ever-increasing capabilities and frequency of collection, there is a desire to fundamentally understand and quantify the information content of the collected image data relative to various exploitation goals, such as detection and classification. A fundamental approach for this is the framework of Bayesian decision theory, but a daunting challenge is to have sufficiently flexible and accurate multivariate models for the features and/or pixels that capture a wide assortment of distributions and dependencies. In addition, data can come in both continuous and discrete representations, where the latter is often generated based on considerations of robustness to imaging conditions and occlusions/degradations. In this paper we propose a novel suite of "latent" models, fundamentally based on multivariate Gaussian copula models, that can be used for quantized data from SAR imagery. For this Latent Gaussian Copula (LGC) model, we derive an approximate maximum-likelihood estimation algorithm and demonstrate very reasonable estimation performance even for larger images with many pixels. However, applying these LGC models to large dimensions/images within a Bayesian decision/classification framework is infeasible due to the computational/numerical issues in evaluating the true full likelihood, and we propose an alternative class of novel pseudo-likelihood detection statistics that are computationally feasible. We show in a few simple examples that these statistics have the potential to provide very good and robust detection/classification performance. The framework is demonstrated on a simulated SLICY data set, and the results show the importance of modeling the dependencies and of utilizing the pseudo-likelihood methods.
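The core latent-Gaussian idea, observed quantized values arising from a correlated latent Gaussian field, can be illustrated with a toy normal-scores estimate of the latent correlation. The quantizer, sample size, and correlation below are invented, and this is only a crude stand-in for the paper's approximate maximum-likelihood estimator:

```python
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(2)

# Simulate two pixels with latent Gaussian correlation 0.7, then
# quantize each to 8 levels (a crude stand-in for quantized SAR data).
rho = 0.7
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=5000)
q = np.clip(np.floor((z + 3) / 6 * 8), 0, 7)   # 8-level quantizer on [-3, 3]

# Normal-scores estimate of the latent correlation: map ranks to
# uniforms, then through the inverse Gaussian CDF.
u = (rankdata(q, axis=0) - 0.5) / len(q)
g = norm.ppf(u)
rho_hat = np.corrcoef(g.T)[0, 1]
print(f"true rho = {rho}, normal-scores estimate = {rho_hat:.2f}")
```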

  12. Activity recognition using a single accelerometer placed at the wrist or ankle.

    PubMed

    Mannini, Andrea; Intille, Stephen S; Rosenberger, Mary; Sabatini, Angelo M; Haskell, William

    2013-11-01

    Large physical activity surveillance projects such as the UK Biobank and NHANES are using wrist-worn accelerometer-based activity monitors that collect raw data. The goal is to increase wear time by asking subjects to wear the monitors on the wrist instead of the hip, and then to use information in the raw signal to improve activity type and intensity estimation. The purpose of this work was to obtain an algorithm to process wrist and ankle raw data and to classify behavior into four broad activity classes: ambulation, cycling, sedentary, and other activities. Participants (N = 33) wearing accelerometers on the wrist and ankle performed 26 daily activities. The accelerometer data were collected, cleaned, and preprocessed to extract features that characterize 2-, 4-, and 12.8-s data windows. Feature vectors encoding information about the frequency and intensity of motion, extracted from analysis of the raw signal, were used with a support vector machine classifier to identify a subject's activity. Results were compared with categories classified by a human observer. Algorithms were validated using a leave-one-subject-out strategy. The computational complexity of each processing step was also evaluated. With 12.8-s windows, the proposed strategy showed high classification accuracy for ankle data (95.0%) that decreased to 84.7% for wrist data. Shorter (4-s) windows only minimally decreased performance on the wrist, to 84.2%. A classification algorithm using 13 features shows good classification into the four classes given the complexity of the activities in the original data set. The algorithm is computationally efficient and could be implemented in real time on mobile devices with only 4-s latency.
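A minimal sketch of the windowing-plus-SVM pipeline with leave-one-subject-out validation, using synthetic accelerometer-like signals. The sampling rate, the three features, and the two activities are illustrative assumptions, not the study's 13-feature set or 26 activities:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)

def window_features(sig, fs=80, win_s=12.8):
    """Split a 1-D acceleration signal into windows and extract simple
    intensity/frequency features (mean, std, dominant FFT bin)."""
    n = int(fs * win_s)
    wins = sig[: len(sig) // n * n].reshape(-1, n)
    dom = np.abs(np.fft.rfft(wins, axis=1))[:, 1:].argmax(axis=1)
    return np.column_stack([wins.mean(1), wins.std(1), dom])

# Synthetic data: 5 subjects, 2 activities with different motion frequency.
X, y, groups = [], [], []
for subj in range(5):
    for label, freq in [(0, 1.0), (1, 3.0)]:
        t = np.arange(0, 12.8 * 6, 1 / 80)
        sig = np.sin(2 * np.pi * freq * t) + 0.3 * rng.normal(size=t.size)
        f = window_features(sig)
        X.append(f); y += [label] * len(f); groups += [subj] * len(f)
X = np.vstack(X); y = np.array(y); groups = np.array(groups)

# Leave-one-subject-out: every fold holds out all windows of one subject.
clf = make_pipeline(StandardScaler(), SVC())
acc = cross_val_score(clf, X, y, cv=LeaveOneGroupOut(), groups=groups).mean()
print(f"leave-one-subject-out accuracy: {acc:.2f}")
```

Leave-one-subject-out (rather than shuffled cross-validation) is what prevents windows from the same person appearing in both training and test folds.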

  13. Testing of complementarity of PDA and MS detectors using chromatographic fingerprinting of genuine and counterfeit samples containing sildenafil citrate.

    PubMed

    Custers, Deborah; Krakowska, Barbara; De Beer, Jacques O; Courselle, Patricia; Daszykowski, Michal; Apers, Sandra; Deconinck, Eric

    2016-02-01

    Counterfeit medicines are a global threat to public health. High amounts enter the European market, which is why characterization of these products is a very important issue. In this study, a high-performance liquid chromatography-photodiode array (HPLC-PDA) and a high-performance liquid chromatography-mass spectrometry (HPLC-MS) method were developed for the analysis of genuine Viagra®, generic products of Viagra®, and counterfeit samples in order to obtain different types of fingerprints. These data were included in the chemometric data analysis, aiming to test whether PDA and MS are complementary detection techniques. The MS data comprise both MS1 and MS2 fingerprints; the PDA data consist of fingerprints measured at three different wavelengths, i.e., 254, 270, and 290 nm, and all possible combinations of these wavelengths. First, we verified whether each group of fingerprints could discriminate between genuine, generic, and counterfeit medicines separately; next, we studied whether the results could be improved by combining both fingerprint types. This data analysis showed that MS1 does not provide suitable classification models, since several genuines and generics are classified as counterfeits and vice versa. However, when analyzing the MS1_MS2 data in combination with partial least squares-discriminant analysis (PLS-DA), a perfect discrimination was obtained. When only using data measured at 254 nm, good classification models can be obtained by k nearest neighbors (kNN) and soft independent modelling of class analogy (SIMCA), which might be interesting for the characterization of counterfeit drugs in developing countries. However, in general, the combination of PDA and MS data (254 nm_MS1) is preferred due to fewer classification errors between the genuines/generics and counterfeits compared with PDA and MS data separately.
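The fingerprint-classification step can be sketched with a kNN classifier, one of the chemometric methods the study applied, on synthetic chromatogram-like profiles; the peak shapes and class structure below are invented for illustration:

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)

# Synthetic chromatographic fingerprints: 60 samples x 300 time points.
# Genuine/generic products (class 0) share a peak profile; counterfeits
# (class 1) show an extra peak, a crude stand-in for the real data.
t = np.linspace(0, 1, 300)
base = np.exp(-((t - 0.3) ** 2) / 0.002)
extra = np.exp(-((t - 0.7) ** 2) / 0.002)
X = np.vstack([base + 0.05 * rng.normal(size=300) for _ in range(30)] +
              [base + extra + 0.05 * rng.normal(size=300) for _ in range(30)])
y = np.repeat([0, 1], 30)

# Cross-validated predictions let us count classification errors directly.
pred = cross_val_predict(KNeighborsClassifier(n_neighbors=3), X, y, cv=5)
print("misclassified:", int((pred != y).sum()), "of", len(y))
```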

  14. Characteristics of genomic signatures derived using univariate methods and mechanistically anchored functional descriptors for predicting drug- and xenobiotic-induced nephrotoxicity.

    PubMed

    Shi, Weiwei; Bugrim, Andrej; Nikolsky, Yuri; Nikolskya, Tatiana; Brennan, Richard J

    2008-01-01

    The ideal toxicity biomarker combines the properties of prediction (it is detected prior to traditional pathological signs of injury), accuracy (high sensitivity and specificity), and a mechanistic relationship to the endpoint measured (biological relevance). Gene expression-based toxicity biomarkers ("signatures") have shown good predictive power and accuracy but are difficult to interpret biologically. We compared different statistical methods of feature selection with knowledge-based approaches, using GeneGo's database of canonical pathway maps, to generate gene sets for the classification of renal tubule toxicity. The gene set selection algorithms include four univariate analyses: t-statistics, fold-change, B-statistics, and RankProd, together with their combination and overlap, for the identification of differentially expressed probes. Enrichment analysis following the results of the four univariate analyses, the Hotelling T-square test, and, finally, out-of-bag selection, a variant of cross-validation, were used to identify canonical pathway maps, i.e., sets of genes coordinately involved in key biological processes, with classification power. Differentially expressed genes identified by the different univariate analyses all generated reasonably performing classifiers of tubule toxicity. Maps identified by enrichment analysis or Hotelling T-square had lower classification power but highlighted perturbed lipid homeostasis as a common discriminator of nephrotoxic treatments. The out-of-bag method yielded the best functionally integrated classifier. The map "ephrins signaling" performed comparably to a classifier derived using sparse linear programming, a machine learning algorithm, and represents a signaling network specifically involved in renal tubule development and integrity.
Such functional descriptors of toxicity promise to better integrate predictive toxicogenomics with mechanistic analysis, facilitating the interpretation and risk assessment of predictive genomic investigations.
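Two of the univariate selection statistics named above, t-statistics and fold-change, and their overlap can be sketched on synthetic expression data; the effect sizes, group sizes, and thresholds are illustrative assumptions:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(5)

# Synthetic log-expression: 10 treated vs 10 control arrays, 500 probes,
# with the first 20 probes up-regulated in the treated group.
treated = rng.normal(size=(10, 500))
control = rng.normal(size=(10, 500))
treated[:, :20] += 1.5

# Two univariate feature-selection statistics used on microarray data:
t_stat, p_val = ttest_ind(treated, control, axis=0)
log_fc = treated.mean(0) - control.mean(0)        # log fold-change

# Keep probes that pass both filters, mirroring the "overlap" strategy.
selected = np.where((p_val < 0.01) & (np.abs(log_fc) > 1.0))[0]
print("probes selected:", len(selected))
```

In practice these gene lists would then feed the enrichment analysis against pathway maps described above.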

  15. Galaxy Zoo: quantitative visual morphological classifications for 48 000 galaxies from CANDELS

    NASA Astrophysics Data System (ADS)

    Simmons, B. D.; Lintott, Chris; Willett, Kyle W.; Masters, Karen L.; Kartaltepe, Jeyhan S.; Häußler, Boris; Kaviraj, Sugata; Krawczyk, Coleman; Kruk, S. J.; McIntosh, Daniel H.; Smethurst, R. J.; Nichol, Robert C.; Scarlata, Claudia; Schawinski, Kevin; Conselice, Christopher J.; Almaini, Omar; Ferguson, Henry C.; Fortson, Lucy; Hartley, William; Kocevski, Dale; Koekemoer, Anton M.; Mortlock, Alice; Newman, Jeffrey A.; Bamford, Steven P.; Grogin, N. A.; Lucas, Ray A.; Hathi, Nimish P.; McGrath, Elizabeth; Peth, Michael; Pforr, Janine; Rizer, Zachary; Wuyts, Stijn; Barro, Guillermo; Bell, Eric F.; Castellano, Marco; Dahlen, Tomas; Dekel, Avishai; Ownsworth, Jamie; Faber, Sandra M.; Finkelstein, Steven L.; Fontana, Adriano; Galametz, Audrey; Grützbauch, Ruth; Koo, David; Lotz, Jennifer; Mobasher, Bahram; Mozena, Mark; Salvato, Mara; Wiklind, Tommy

    2017-02-01

    We present quantified visual morphologies of approximately 48 000 galaxies observed in three Hubble Space Telescope legacy fields by the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS) and classified by participants in the Galaxy Zoo project. 90 per cent of galaxies have z ≤ 3 and are observed in rest-frame optical wavelengths by CANDELS. Each galaxy received an average of 40 independent classifications, which we combine into detailed morphological information on galaxy features such as clumpiness, bar instabilities, spiral structure, and merger and tidal signatures. We apply a consensus-based classifier weighting method that preserves classifier independence while effectively down-weighting significantly outlying classifications. After analysing the effect of varying image depth on reported classifications, we also provide depth-corrected classifications which both preserve the information in the deepest observations and also enable the use of classifications at comparable depths across the full survey. Comparing the Galaxy Zoo classifications to previous classifications of the same galaxies shows very good agreement; for some applications, the high number of independent classifications provided by Galaxy Zoo provides an advantage in selecting galaxies with a particular morphological profile, while in others the combination of Galaxy Zoo with other classifications is a more promising approach than using any one method alone. We combine the Galaxy Zoo classifications of `smooth' galaxies with parametric morphologies to select a sample of featureless discs at 1 ≤ z ≤ 3, which may represent a dynamically warmer progenitor population to the settled disc galaxies seen at later epochs.
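A toy sketch of consensus-based classifier weighting in the spirit described above: iterate between forming a weighted consensus and down-weighting classifiers that disagree with it. The vote model, weighting function, and iteration count are illustrative assumptions, not Galaxy Zoo's actual scheme:

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated votes: 100 galaxies x 15 classifiers answering a binary
# question; two classifiers answer at random (outliers to down-weight).
truth = rng.random(100) < 0.5
votes = np.where(rng.random((100, 15)) < 0.9, truth[:, None],
                 ~truth[:, None])
votes[:, :2] = rng.random((100, 2)) < 0.5     # unreliable classifiers

w = np.ones(15)
for _ in range(5):  # iterate: consensus -> consistency -> weights
    consensus = (votes * w).sum(1) / w.sum() > 0.5
    consistency = (votes == consensus[:, None]).mean(0)
    w = np.clip(consistency, 0.05, None) ** 2  # down-weight outliers
print("weights of the two random classifiers:", np.round(w[:2], 2))
print("weight of a typical reliable classifier:", round(float(w[2]), 2))
```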

  16. Performance of fusion algorithms for computer-aided detection and classification of mines in very shallow water obtained from testing in navy Fleet Battle Exercise-Hotel 2000

    NASA Astrophysics Data System (ADS)

    Ciany, Charles M.; Zurawski, William; Kerfoot, Ian

    2001-10-01

    The performance of Computer Aided Detection/Computer Aided Classification (CAD/CAC) fusion algorithms on side-scan sonar images was evaluated using data taken at the Navy's Fleet Battle Exercise-Hotel, held in Panama City, Florida, in August 2000. A 2-of-3 binary fusion algorithm is shown to provide robust performance. The algorithm accepts the classification decisions and associated contact locations from three different CAD/CAC algorithms, clusters the contacts based on Euclidean distance, and then declares a valid target when a clustered contact is declared by at least 2 of the 3 individual algorithms. This simple binary fusion provided a 96 percent probability of correct classification at a false alarm rate of 0.14 false alarms per image per side. This performance represented a 3.8:1 reduction in false alarms over the best-performing single CAD/CAC algorithm, with no loss in probability of correct classification.
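The 2-of-3 fusion rule is simple enough to sketch directly: cluster contacts by Euclidean distance, then declare a target wherever at least two algorithms contributed a contact. The contact coordinates and the 2 m clustering radius below are invented for illustration:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Hypothetical contact lists (x, y positions in metres) reported by
# three CAD/CAC algorithms on the same side-scan sonar image.
contacts = {
    "A": [(10.0, 12.0), (55.0, 40.0)],
    "B": [(10.5, 11.8), (80.0, 15.0)],
    "C": [(9.8, 12.3), (55.2, 39.5), (30.0, 30.0)],
}

pts = np.array([p for plist in contacts.values() for p in plist])
src = [algo for algo, plist in contacts.items() for _ in plist]

# Cluster contacts lying within 2 m of each other, then declare a valid
# target wherever at least 2 of the 3 algorithms contributed a contact.
labels = fcluster(linkage(pts, method="single"), t=2.0, criterion="distance")
declared = []
for c in np.unique(labels):
    idx = np.where(labels == c)[0]
    algos = sorted({src[i] for i in idx})
    if len(algos) >= 2:
        declared.append((pts[idx].mean(0).round(1), algos))
        print("declared target near", declared[-1][0], "seen by", algos)
```

Contacts seen by only one algorithm (here one each from B and C) are suppressed, which is exactly where the reported false-alarm reduction comes from.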

  17. Substance dependence and non-dependence in the Diagnostic and Statistical Manual of Mental Disorders (DSM) and the International Classification of Diseases (ICD): can an identical conceptualization be achieved?

    PubMed

    Saunders, John B

    2006-09-01

    This review summarizes the history of the development of diagnostic constructs that apply to repetitive substance use, and compares and contrasts the nature, psychometric performance and utility of the major diagnoses in the Diagnostic and Statistical Manual of Mental Disorders (DSM) and International Classification of Diseases (ICD) diagnostic systems. The available literature was reviewed with a particular focus on diagnostic concepts that are relevant for clinical and epidemiological practice, and so that research questions could be generated that might inform the development of the next generation of DSM and ICD diagnoses. The substance dependence syndrome is a psychometrically robust and clinically useful construct, which applies to a range of psychoactive substances. The differences between the DSM fourth edition (DSM-IV) and the ICD tenth edition (ICD-10) versions are minimal and could be resolved. DSM-IV substance abuse performs moderately well but, being defined essentially by social criteria, may be culture-dependent. ICD-10 harmful substance use performs poorly as a diagnostic entity. There are good prospects for resolving many of the differences between the DSM and ICD systems. A new non-dependence diagnosis is required. There would also be advantages in a subthreshold diagnosis of hazardous or risky substance use being incorporated into the two systems. Biomedical research can be drawn upon to define a psychophysiological 'driving force' which could underpin a broad spectrum of substance use disorders.

  18. Using Contact Forces and Robot Arm Accelerations to Automatically Rate Surgeon Skill at Peg Transfer.

    PubMed

    Brown, Jeremy D; O'Brien, Conor E; Leung, Sarah C; Dumon, Kristoffel R; Lee, David I; Kuchenbecker, Katherine J

    2017-09-01

    Most trainees begin learning robotic minimally invasive surgery by performing inanimate practice tasks with clinical robots such as the Intuitive Surgical da Vinci. Expert surgeons are commonly asked to evaluate these performances using standardized five-point rating scales, but doing such ratings is time consuming, tedious, and somewhat subjective. This paper presents an automatic skill evaluation system that analyzes only the contact force with the task materials, the broad-bandwidth accelerations of the robotic instruments and camera, and the task completion time. We recruited N = 38 participants of varying skill in robotic surgery to perform three trials of peg transfer with a da Vinci Standard robot instrumented with our Smart Task Board. After calibration, three individuals rated these trials on five domains of the Global Evaluative Assessment of Robotic Skill (GEARS) structured assessment tool, providing ground-truth labels for regression and classification machine learning algorithms that predict GEARS scores based on the recorded force, acceleration, and time signals. Both machine learning approaches produced scores on the reserved testing sets that were in good to excellent agreement with the human raters, even when the force information was not considered. Furthermore, regression predicted GEARS scores more accurately and efficiently than classification. A surgeon's skill at robotic peg transfer can be reliably rated via regression using features gathered from force, acceleration, and time sensors external to the robot. We expect improved trainee learning as a result of providing these automatic skill ratings during inanimate task practice on a surgical robot.
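The regression-versus-classification comparison can be sketched on synthetic trial features. The feature model, rating rule, and the random-forest learners are illustrative assumptions (the abstract does not name the specific learners used):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(7)

# Synthetic stand-in: per-trial features (completion time, mean |accel|,
# peak force) and 1-5 expert ratings that improve smoothly with skill.
skill = rng.uniform(0, 1, 200)
X = np.column_stack([300 - 200 * skill, 2 - skill, 15 - 8 * skill])
X += rng.normal(scale=[10, 0.1, 0.8], size=X.shape)
ratings = np.clip(np.round(1 + 4 * skill), 1, 5).astype(int)

# Predict the ordinal rating two ways: as a number and as a class label.
reg = cross_val_predict(RandomForestRegressor(random_state=0), X, ratings, cv=5)
cls = cross_val_predict(RandomForestClassifier(random_state=0), X, ratings, cv=5)

mae_reg = np.abs(reg - ratings).mean()
mae_cls = np.abs(cls - ratings).mean()
print(f"MAE  regression: {mae_reg:.2f}   classification: {mae_cls:.2f}")
```

Mean absolute error is a natural comparison here because the GEARS scale is ordinal: a prediction off by one point is a smaller mistake than one off by three.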

  19. Land-cover classification in a moist tropical region of Brazil with Landsat TM imagery.

    PubMed

    Li, Guiying; Lu, Dengsheng; Moran, Emilio; Hetrick, Scott

    2011-01-01

    This research aims to improve land-cover classification accuracy in a moist tropical region in Brazil by examining the use of different remote sensing-derived variables and classification algorithms. Different scenarios based on Landsat Thematic Mapper (TM) spectral data and derived vegetation indices and textural images, and different classification algorithms - maximum likelihood classification (MLC), artificial neural network (ANN), classification tree analysis (CTA), and object-based classification (OBC), were explored. The results indicated that a combination of vegetation indices as extra bands into Landsat TM multispectral bands did not improve the overall classification performance, but the combination of textural images was valuable for improving vegetation classification accuracy. In particular, the combination of both vegetation indices and textural images into TM multispectral bands improved overall classification accuracy by 5.6% and kappa coefficient by 6.25%. Comparison of the different classification algorithms indicated that CTA and ANN have poor classification performance in this research, but OBC improved primary forest and pasture classification accuracies. This research indicates that use of textural images or use of OBC are especially valuable for improving the vegetation classes such as upland and liana forest classes having complex stand structures and having relatively large patch sizes.
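The overall accuracy and kappa coefficient figures reported above can be computed as sketched below for hypothetical reference and predicted land-cover labels; the agreement rate and number of classes are invented:

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(8)

# Hypothetical reference vs. predicted land-cover labels for 1000 pixels
# over 6 classes (forest, pasture, etc.), with ~80% true agreement.
ref = rng.integers(0, 6, 1000)
pred = np.where(rng.random(1000) < 0.8, ref, rng.integers(0, 6, 1000))

print(f"overall accuracy:  {accuracy_score(ref, pred):.3f}")
print(f"kappa coefficient: {cohen_kappa_score(ref, pred):.3f}")
```

Kappa is lower than raw accuracy because it discounts the agreement expected by chance, which is why accuracy studies like this one report both.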

  20. Land-cover classification in a moist tropical region of Brazil with Landsat TM imagery

    PubMed Central

    LI, GUIYING; LU, DENGSHENG; MORAN, EMILIO; HETRICK, SCOTT

    2011-01-01

    This research aims to improve land-cover classification accuracy in a moist tropical region in Brazil by examining the use of different remote sensing-derived variables and classification algorithms. Different scenarios based on Landsat Thematic Mapper (TM) spectral data and derived vegetation indices and textural images, and different classification algorithms – maximum likelihood classification (MLC), artificial neural network (ANN), classification tree analysis (CTA), and object-based classification (OBC), were explored. The results indicated that a combination of vegetation indices as extra bands into Landsat TM multispectral bands did not improve the overall classification performance, but the combination of textural images was valuable for improving vegetation classification accuracy. In particular, the combination of both vegetation indices and textural images into TM multispectral bands improved overall classification accuracy by 5.6% and kappa coefficient by 6.25%. Comparison of the different classification algorithms indicated that CTA and ANN have poor classification performance in this research, but OBC improved primary forest and pasture classification accuracies. This research indicates that use of textural images or use of OBC are especially valuable for improving the vegetation classes such as upland and liana forest classes having complex stand structures and having relatively large patch sizes. PMID:22368311

  1. [CT morphometry for calcaneal fractures and comparison of the Zwipp and Sanders classifications].

    PubMed

    Andermahr, J; Jesch, A B; Helling, H J; Jubel, A; Fischbach, R; Rehm, K E

    2002-01-01

    The aim of the study is to correlate the CT-morphological changes of fractured calcaneus and the classifications of Zwipp and Sanders with the clinical outcome. In a retrospective clinical study, the preoperative CT scans of 75 calcaneal fractures were analysed. The morphometry of the fractures was determined by measuring height, length diameter and calcaneo-cuboidal angle in comparison to the intact contralateral side. At a mean of 38 months after trauma 44 patients were clinically followed-up. The data of CT image morphometry were correlated with the severity of fracture classified by Zwipp or Sanders as well as with the functional outcome. There was a good correlation between the fracture classifications and the morphometric data. Both fracture classifying systems have a predictive impact for functional outcome. The more exacting and accurate Zwipp classification considers the most important cofactors like involvement of the calcaneo-cuboidal joint, soft tissue damage, additional fractures etc. The Sanders classification is easier to use during clinical routine. The Zwipp classification includes more relevant cofactors (fracture of the calcaneo-cuboidal-joint, soft tissue swelling, etc.) and presents a higher correlation to the choice of therapy. Both classification systems present a prognostic impact concerning the clinical outcome.

  2. [Evaluation of scientific production in different subareas of Public Health: limits of the current model and contributions to the debate].

    PubMed

    Iriart, Jorge Alberto Bernstein; Deslandes, Suely Ferreira; Martin, Denise; Camargo, Kenneth Rochel de; Carvalho, Marilia Sá; Coeli, Cláudia Medina

    2015-10-01

    The aim of this study was to discuss the limits of the quantitative evaluation model for scientific production in Public Health. An analysis of the scientific production of professors from the various subareas of Public Health was performed for 2010-2012. Distributions of the mean annual score for professors were compared across subareas. The study estimated the likelihood that 60% of the professors in a graduate studies program scored P50 (Very Good) or higher in their area. Professors of Epidemiology showed a significantly higher median annual score. Graduate studies programs whose faculty included at least 60% Epidemiology professors and fewer than 10% from the subarea of Social and Human Sciences in Health were significantly more likely to achieve a "Very Good" classification. The observed inequalities in scientific production between different subareas of Public Health point to the need to rethink their evaluation, in order to avoid reproducing inequities that have harmful consequences for the field's diversity.

  3. Minimization of annotation work: diagnosis of mammographic masses via active learning

    NASA Astrophysics Data System (ADS)

    Zhao, Yu; Zhang, Jingyang; Xie, Hongzhi; Zhang, Shuyang; Gu, Lixu

    2018-06-01

    The prerequisite for establishing an effective prediction system for mammographic diagnosis is the annotation of each mammographic image. The manual annotation work is time-consuming and laborious, which becomes a great hindrance for researchers. In this article, we propose a novel active learning algorithm that can adequately address this problem, leading to the minimization of the labeling costs on the premise of guaranteed performance. Our proposed method is different from the existing active learning methods designed for the general problem as it is specifically designed for mammographic images. Through its modified discriminant functions and improved sample query criteria, the proposed method can fully utilize the pairing of mammographic images and select the most valuable images from both the mediolateral and craniocaudal views. Moreover, in order to extend active learning to the ordinal regression problem, which has no precedent in existing studies, but is essential for mammographic diagnosis (mammographic diagnosis is not only a classification task, but also an ordinal regression task for predicting an ordinal variable, viz. the malignancy risk of lesions), multiple sample query criteria need to be taken into consideration simultaneously. We formulate it as a criteria integration problem and further present an algorithm based on self-adaptive weighted rank aggregation to achieve a good solution. The efficacy of the proposed method was demonstrated on thousands of mammographic images from the digital database for screening mammography. The labeling costs of obtaining optimal performance in the classification and ordinal regression task respectively fell to 33.8 and 19.8 percent of their original costs. The proposed method also generated 1228 wins, 369 ties and 47 losses for the classification task, and 1933 wins, 258 ties and 185 losses for the ordinal regression task compared to the other state-of-the-art active learning algorithms. 
By taking into account the particularities of mammographic images, the proposed AL method can indeed reduce the manual annotation work to a great extent without sacrificing the performance of the prediction system for mammographic diagnosis.
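A minimal sketch of pool-based active learning with uncertainty sampling, the general idea the proposed method builds on (the paper's modified discriminant functions, view pairing, and rank-aggregation criteria are not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)

# Pool-based active learning on a synthetic binary task: start from a
# handful of labels, then repeatedly query the pool sample the current
# model is least certain about (uncertainty sampling).
X = rng.normal(size=(500, 5))
y = (X[:, :2].sum(axis=1) + 0.3 * rng.normal(size=500) > 0).astype(int)

test = np.arange(400, 500)                       # held-out evaluation set
labelled = list(np.flatnonzero(y[:400] == 0)[:5]) + \
           list(np.flatnonzero(y[:400] == 1)[:5])
pool = [i for i in range(400) if i not in labelled]

for _ in range(40):
    model = LogisticRegression().fit(X[labelled], y[labelled])
    proba = model.predict_proba(X[pool])[:, 1]
    query = pool[int(np.argmin(np.abs(proba - 0.5)))]  # most uncertain
    labelled.append(query)
    pool.remove(query)

acc = (model.predict(X[test]) == y[test]).mean()
print(f"accuracy after {len(labelled)} labels: {acc:.2f}")
```

The labeling-cost savings reported in the abstract come from exactly this mechanism: the model reaches good accuracy after querying only a fraction of the pool.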

  4. Minimization of annotation work: diagnosis of mammographic masses via active learning.

    PubMed

    Zhao, Yu; Zhang, Jingyang; Xie, Hongzhi; Zhang, Shuyang; Gu, Lixu

    2018-05-22

    The prerequisite for establishing an effective prediction system for mammographic diagnosis is the annotation of each mammographic image. The manual annotation work is time-consuming and laborious, which becomes a great hindrance for researchers. In this article, we propose a novel active learning algorithm that can adequately address this problem, leading to the minimization of the labeling costs on the premise of guaranteed performance. Our proposed method is different from the existing active learning methods designed for the general problem as it is specifically designed for mammographic images. Through its modified discriminant functions and improved sample query criteria, the proposed method can fully utilize the pairing of mammographic images and select the most valuable images from both the mediolateral and craniocaudal views. Moreover, in order to extend active learning to the ordinal regression problem, which has no precedent in existing studies, but is essential for mammographic diagnosis (mammographic diagnosis is not only a classification task, but also an ordinal regression task for predicting an ordinal variable, viz. the malignancy risk of lesions), multiple sample query criteria need to be taken into consideration simultaneously. We formulate it as a criteria integration problem and further present an algorithm based on self-adaptive weighted rank aggregation to achieve a good solution. The efficacy of the proposed method was demonstrated on thousands of mammographic images from the digital database for screening mammography. The labeling costs of obtaining optimal performance in the classification and ordinal regression task respectively fell to 33.8 and 19.8 percent of their original costs. The proposed method also generated 1228 wins, 369 ties and 47 losses for the classification task, and 1933 wins, 258 ties and 185 losses for the ordinal regression task compared to the other state-of-the-art active learning algorithms. 
By taking into account the particularities of mammographic images, the proposed AL method can indeed reduce the manual annotation work to a great extent without sacrificing the performance of the prediction system for mammographic diagnosis.

  5. Classifying High-noise EEG in Complex Environments for Brain-computer Interaction Technologies

    DTIC Science & Technology

    2012-02-01

    differentiation in the brain signal that our classification approach seeks to identify despite the noise in the recorded EEG signal and the complexity of... performed two offline classifications, one using BCILab (1), the other using LibSVM (2). Distinct classifiers were trained for each individual in... order to improve individual classifier performance (3). The highest classification performance results were obtained using individual frequency bands

  6. 7 CFR 52.3760 - Color.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... be given. Canned ripe olives that fall into this classification shall not be graded above U.S. Grade... and typical of these styles prepared from olives of fairly good color. (iii) Broken pitted. The...

  7. 7 CFR 52.3760 - Color.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... be given. Canned ripe olives that fall into this classification shall not be graded above U.S. Grade... and typical of these styles prepared from olives of fairly good color. (iii) Broken pitted. The...

  8. Deep learning and non-negative matrix factorization in recognition of mammograms

    NASA Astrophysics Data System (ADS)

    Swiderski, Bartosz; Kurek, Jaroslaw; Osowski, Stanislaw; Kruk, Michal; Barhoumi, Walid

    2017-02-01

    This paper presents a novel approach to the recognition of mammograms. The analyzed mammograms represent normal and breast cancer (benign and malignant) cases. The solution applies the deep learning technique to image recognition. To increase classification accuracy, non-negative matrix factorization and the statistical self-similarity of images are applied. The images reconstructed using these two approaches enrich the database and thereby improve the quality measures of mammogram recognition (accuracy, sensitivity, and specificity). The results of numerical experiments performed on the large DDSM database, containing more than 10,000 mammograms, confirm good class-recognition accuracy, exceeding the best results reported in recent publications for this database.
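The NMF-based enrichment step, reconstructing images from a non-negative factorization and adding the reconstructions to the training set, can be sketched on toy non-negative data; the patch size and component count are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(10)

# Toy stand-in for mammogram patches: 100 non-negative 16x16 images,
# flattened to rows of a 100 x 256 matrix.
patches = rng.random((100, 256))

# Factorise into 12 non-negative components and reconstruct; the
# reconstructions serve as additional (smoothed) training examples.
nmf = NMF(n_components=12, max_iter=500, random_state=0)
W = nmf.fit_transform(patches)
reconstructed = W @ nmf.components_

augmented = np.vstack([patches, reconstructed])
print("training set grown from", len(patches), "to", len(augmented))
err = np.linalg.norm(patches - reconstructed) / np.linalg.norm(patches)
print(f"relative reconstruction error: {err:.2f}")
```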

  9. Classification of Tree Species in Overstorey Canopy of Subtropical Forest Using QuickBird Images.

    PubMed

    Lin, Chinsu; Popescu, Sorin C; Thomson, Gavin; Tsogt, Khongor; Chang, Chein-I

    2015-01-01

    This paper proposes a supervised classification scheme to identify 40 tree species (2 coniferous, 38 broadleaf) belonging to 22 families and 36 genera in high spatial resolution QuickBird multispectral images (HMS). The overall kappa coefficient (OKC) and species conditional kappa coefficients (SCKC) were used to evaluate classification performance on training samples and to estimate accuracy and uncertainty on test samples. Baseline classification performance using HMS images and vegetation index (VI) images was evaluated with OKC values of 0.58 and 0.48, respectively, but performance improved significantly (up to 0.99) when these were used in combination with an HMS spectral-spatial texture image (SpecTex). One of the 40 species had very high conditional kappa coefficient performance (SCKC ≥ 0.95) using the 4-band HMS and 5-band VI images, but only five species had lower performance (0.68 ≤ SCKC ≤ 0.94) using the SpecTex images. When the SpecTex images were combined with a Visible Atmospherically Resistant Index (VARI), there was a significant improvement in performance on the training samples. The same level of improvement could not be replicated in the test samples, indicating that a high degree of uncertainty exists in species classification accuracy, which may be due to individual tree crown density, leaf greenness (inter-canopy gaps), and noise in the background environment (intra-canopy gaps). These factors increase uncertainty in the spectral texture features and therefore represent potential problems when using pixel-based classification techniques for multi-species classification.

  10. Models of Marine Fish Biodiversity: Assessing Predictors from Three Habitat Classification Schemes.

    PubMed

    Yates, Katherine L; Mellin, Camille; Caley, M Julian; Radford, Ben T; Meeuwig, Jessica J

    2016-01-01

    Prioritising biodiversity conservation requires knowledge of where biodiversity occurs. Such knowledge, however, is often lacking. New technologies for collecting biological and physical data coupled with advances in modelling techniques could help address these gaps and facilitate improved management outcomes. Here we examined the utility of environmental data, obtained using different methods, for developing models of both uni- and multivariate biodiversity metrics. We tested which biodiversity metrics could be predicted best and evaluated the performance of predictor variables generated from three types of habitat data: acoustic multibeam sonar imagery, predicted habitat classification, and direct observer habitat classification. We used boosted regression trees (BRT) to model metrics of fish species richness, abundance and biomass, and multivariate regression trees (MRT) to model biomass and abundance of fish functional groups. We compared model performance using different sets of predictors and estimated the relative influence of individual predictors. Models of total species richness and total abundance performed best; those developed for endemic species performed worst. Abundance models performed substantially better than corresponding biomass models. In general, BRTs and MRTs developed using predicted habitat classifications performed less well than those using multibeam data. The most influential individual predictor was the abiotic categorical variable from direct observer habitat classification, and models that incorporated predictors from direct observer habitat classification consistently outperformed those that did not. Our results show that while remotely sensed data can offer considerable utility for predictive modelling, the addition of direct observer habitat classification data can substantially improve model performance. Thus it appears that there are aspects of marine habitats that are important for modelling metrics of fish biodiversity that are not fully captured by remotely sensed data. As such, the use of remotely sensed data to model biodiversity represents a compromise between model performance and data availability.
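Boosted regression trees of the kind used above can be approximated with scikit-learn's gradient boosting, including the per-predictor relative influence the study reports (a toy sketch with simulated predictors, not the authors' model; variable names and effect sizes are invented):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
# Hypothetical predictors, e.g. depth, rugosity, and a habitat-class code.
X = rng.normal(size=(300, 3))
# Toy response standing in for fish species richness.
y = 10 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=300)

brt = GradientBoostingRegressor(n_estimators=200, learning_rate=0.05,
                                max_depth=2, random_state=0).fit(X, y)
# Relative influence of each predictor, analogous to BRT influence scores.
print(brt.feature_importances_)
```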

  11. Models of Marine Fish Biodiversity: Assessing Predictors from Three Habitat Classification Schemes

    PubMed Central

    Yates, Katherine L.; Mellin, Camille; Caley, M. Julian; Radford, Ben T.; Meeuwig, Jessica J.

    2016-01-01

    Prioritising biodiversity conservation requires knowledge of where biodiversity occurs. Such knowledge, however, is often lacking. New technologies for collecting biological and physical data coupled with advances in modelling techniques could help address these gaps and facilitate improved management outcomes. Here we examined the utility of environmental data, obtained using different methods, for developing models of both uni- and multivariate biodiversity metrics. We tested which biodiversity metrics could be predicted best and evaluated the performance of predictor variables generated from three types of habitat data: acoustic multibeam sonar imagery, predicted habitat classification, and direct observer habitat classification. We used boosted regression trees (BRT) to model metrics of fish species richness, abundance and biomass, and multivariate regression trees (MRT) to model biomass and abundance of fish functional groups. We compared model performance using different sets of predictors and estimated the relative influence of individual predictors. Models of total species richness and total abundance performed best; those developed for endemic species performed worst. Abundance models performed substantially better than corresponding biomass models. In general, BRTs and MRTs developed using predicted habitat classifications performed less well than those using multibeam data. The most influential individual predictor was the abiotic categorical variable from direct observer habitat classification, and models that incorporated predictors from direct observer habitat classification consistently outperformed those that did not. Our results show that while remotely sensed data can offer considerable utility for predictive modelling, the addition of direct observer habitat classification data can substantially improve model performance. Thus it appears that there are aspects of marine habitats that are important for modelling metrics of fish biodiversity that are not fully captured by remotely sensed data. As such, the use of remotely sensed data to model biodiversity represents a compromise between model performance and data availability. PMID:27333202

  12. Clinical application of qualitative assessment for breast masses in shear-wave elastography.

    PubMed

    Gweon, Hye Mi; Youk, Ji Hyun; Son, Eun Ju; Kim, Jeong-Ah

    2013-11-01

    To evaluate the interobserver agreement and the diagnostic performance of various qualitative features in shear-wave elastography (SWE) for breast masses. A total of 153 breast lesions in 152 women who underwent B-mode ultrasound and SWE before biopsy were included. Qualitative analysis of SWE was performed using two different classifications: E values (Ecol, a 6-point color score; Ehomo, a homogeneity score; and Esha, a shape score) and a four-color pattern classification. Two radiologists reviewed five data sets: B-mode ultrasound, SWE, and combinations of both for the E values and the four-color pattern. BI-RADS categories were assessed for the B-mode and combined sets. Interobserver agreement was assessed using weighted κ statistics. Areas under the receiver operating characteristic curve (AUC), sensitivity, and specificity were analyzed. Interobserver agreement was substantial for Ecol (κ=0.79), Ehomo (κ=0.77), and the four-color pattern (κ=0.64), and moderate for Esha (κ=0.56). The better-performing qualitative features were Ecol and the four-color pattern (AUCs, 0.932 and 0.925) compared with Ehomo and Esha (AUCs, 0.857 and 0.864; P<0.05). The diagnostic performance of B-mode ultrasound (AUC, 0.950) was not significantly different from the combined sets with E values and with the four-color pattern (AUCs, 0.962 and 0.954). When all qualitative values were negative, leading to downgrading of the BI-RADS category, specificity increased significantly from 16.5% to 56.1% (E values) and 57.0% (four-color pattern) (P<0.001) without improvement in sensitivity. The qualitative SWE features were highly reproducible and showed good diagnostic performance in suspicious breast masses. Adding qualitative SWE to B-mode ultrasound increased specificity in decision making for biopsy recommendation. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
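The performance measures reported above (AUC, sensitivity, specificity) can be computed as in this minimal sketch (simulated scores and an arbitrary cut-off, purely illustrative; no relation to the study's data):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
# Toy stand-in: 1 = malignant, 0 = benign, with a continuous elasticity score.
y = rng.integers(0, 2, size=200)
score = y * 1.5 + rng.normal(size=200)     # higher scores in malignant cases

auc = roc_auc_score(y, score)              # area under the ROC curve
pred = score > 0.75                        # hypothetical cut-off
sens = (pred & (y == 1)).sum() / (y == 1).sum()    # true positive rate
spec = (~pred & (y == 0)).sum() / (y == 0).sum()   # true negative rate
print(auc, sens, spec)
```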

  13. Adaptive phase k-means algorithm for waveform classification

    NASA Astrophysics Data System (ADS)

    Song, Chengyun; Liu, Zhining; Wang, Yaojun; Xu, Feng; Li, Xingming; Hu, Guangmin

    2018-01-01

    Waveform classification is a powerful technique for seismic facies analysis that describes the heterogeneity and compartments within a reservoir. Horizon interpretation is a critical step in waveform classification. However, the horizon often produces inconsistent waveform phase and thus results in an unsatisfactory classification. To alleviate this problem, an adaptive phase waveform classification method, called adaptive phase k-means, is introduced in this paper. Our method improves the traditional k-means algorithm by using an adaptive phase distance as the waveform similarity measure. The proposed distance is a measure with variable phases as it moves from sample to sample along the traces. Model traces are also updated with the best phase interference in the iterative process. Therefore, our method is robust to phase variations caused by the interpretation horizon. We tested the effectiveness of our algorithm by applying it to synthetic and real data. The satisfactory results reveal that the proposed method tolerates a certain amount of waveform phase variation and is a good tool for seismic facies analysis.
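The core ingredient is a distance that is invariant to a constant phase rotation of the model trace. A numpy-only sketch of that idea (not the authors' algorithm): the model trace is phase-rotated through its analytic signal and the minimum Euclidean distance over a grid of angles is taken.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (numpy-only stand-in for scipy.signal.hilbert)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def phase_distance(trace, model, n_angles=36):
    """Smallest Euclidean distance between a trace and phase rotations of a model."""
    a = analytic_signal(model)
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    rotated = np.real(np.exp(1j * thetas)[:, None] * a)   # all phase rotations
    return np.min(np.linalg.norm(rotated - trace, axis=1))

t = np.linspace(0, 1, 64, endpoint=False)
w = np.cos(2 * np.pi * 4 * t)
w90 = np.sin(2 * np.pi * 4 * t)   # same wavelet, 90-degree phase shift
# Plain Euclidean distance is large; the phase-adaptive distance is near zero.
print(np.linalg.norm(w - w90), phase_distance(w90, w))
```

In a full adaptive phase k-means this distance would replace the Euclidean distance in the assignment step; that iteration is omitted here.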

  14. Knowledge-based approach to video content classification

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Wong, Edward K.

    2001-01-01

    A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic content, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball, and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, demonstrating the validity of the proposed approach.

  15. Knowledge-based approach to video content classification

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Wong, Edward K.

    2000-12-01

    A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic content, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball, and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, demonstrating the validity of the proposed approach.

  16. The normative structure of mathematization in systematic biology.

    PubMed

    Sterner, Beckett; Lidgard, Scott

    2014-06-01

    We argue that the mathematization of science should be understood as a normative activity of advocating for a particular methodology with its own criteria for evaluating good research. As a case study, we examine the mathematization of taxonomic classification in systematic biology. We show how mathematization is a normative activity by contrasting its distinctive features in numerical taxonomy in the 1960s with an earlier reform advocated by Ernst Mayr starting in the 1940s. Both Mayr and the numerical taxonomists sought to formalize the work of classification, but Mayr introduced a qualitative formalism based on human judgment for determining the taxonomic rank of populations, while the numerical taxonomists introduced a quantitative formalism based on automated procedures for computing classifications. The key contrast between Mayr and the numerical taxonomists is how they conceptualized the temporal structure of the workflow of classification, specifically where they allowed meta-level discourse about difficulties in producing the classification. Copyright © 2014. Published by Elsevier Ltd.

  17. Fuzzy support vector machine: an efficient rule-based classification technique for microarrays.

    PubMed

    Hajiloo, Mohsen; Rabiee, Hamid R; Anooshahpour, Mahdi

    2013-01-01

    The abundance of gene expression microarray data has led to the development of machine learning algorithms applicable to disease diagnosis, disease prognosis, and treatment selection problems. However, these algorithms often produce classifiers with weaknesses in terms of accuracy, robustness, and interpretability. This paper introduces the fuzzy support vector machine, a learning algorithm based on a combination of fuzzy classifiers and kernel machines for microarray classification. Experimental results on public leukemia, prostate, and colon cancer datasets show that the fuzzy support vector machine, applied in combination with filter or wrapper feature selection methods, develops a robust model with higher accuracy than conventional microarray classification models such as the support vector machine, artificial neural network, decision trees, k nearest neighbors, and diagonal linear discriminant analysis. Furthermore, the interpretable rule base inferred from the fuzzy support vector machine helps extract biological knowledge from microarray data. The fuzzy support vector machine, as a new classification model with high generalization power, robustness, and good interpretability, seems to be a promising tool for gene expression microarray classification.
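One common way to approximate a fuzzy SVM is to turn fuzzy memberships into per-sample weights for an ordinary SVM. The sketch below uses scikit-learn's `SVC` with a hypothetical distance-based membership function on toy data; it illustrates the general idea, not the paper's formulation:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# Toy two-class "expression" data.
X = np.vstack([rng.normal(-1, 1, (50, 5)), rng.normal(1, 1, (50, 5))])
y = np.array([0] * 50 + [1] * 50)

# Fuzzy membership: down-weight points far from their own class centroid,
# so likely outliers or mislabeled samples influence the margin less.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
d = np.linalg.norm(X - centroids[y], axis=1)
membership = 1.0 / (1.0 + d)          # hypothetical membership function

clf = SVC(kernel="linear", C=1.0).fit(X, y, sample_weight=membership)
acc = clf.score(X, y)
```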

  18. Application of partial least squares near-infrared spectral classification in diabetic identification

    NASA Astrophysics Data System (ADS)

    Yan, Wen-juan; Yang, Ming; He, Guo-quan; Qin, Lin; Li, Gang

    2014-11-01

    In order to identify diabetic patients using the tongue near-infrared (NIR) spectrum, a spectral classification model of the NIR reflectivity of the tongue tip is proposed, based on the partial least squares (PLS) method. Thirty-nine samples of tongue-tip NIR spectra are harvested from healthy people and from diabetic patients, respectively. After pretreatment of the reflectivity, the spectral data are set as the independent variable matrix and the classification information as the dependent variable matrix. The samples were divided into two groups, 53 as the calibration set and 25 as the prediction set, and PLS was used to build the classification model. The model constructed from the 53 samples has a correlation of 0.9614 and a root mean square error of cross-validation (RMSECV) of 0.1387. The predictions for the 25 samples have a correlation of 0.9146 and an RMSECV of 0.2122. The experimental results show that the PLS method can achieve good classification of the features of healthy people and diabetic patients.

  19. A new adaptive L1-norm for optimal descriptor selection of high-dimensional QSAR classification model for anti-hepatitis C virus activity of thiourea derivatives.

    PubMed

    Algamal, Z Y; Lee, M H

    2017-01-01

    A high-dimensional quantitative structure-activity relationship (QSAR) classification model typically contains a large number of irrelevant and redundant descriptors. In this paper, a new descriptor selection design for QSAR classification model estimation is proposed by adding a new weight inside the L1-norm. The experimental results of classifying the anti-hepatitis C virus activity of thiourea derivatives demonstrate that the proposed descriptor selection method performs effectively and competitively compared with other existing penalized methods in terms of classification performance on both the training and the testing datasets. Moreover, it is noteworthy that the results obtained in terms of the stability test and applicability domain indicate a robust QSAR classification model. It is evident from the results that the developed QSAR classification model could conceivably be employed for further high-dimensional QSAR classification studies.
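A weighted (adaptive) L1 penalty of this general flavor can be emulated by rescaling descriptors with weights from an initial fit and then applying a plain L1 penalty. The sketch below applies this standard adaptive-lasso trick to logistic regression on toy data; it is not the paper's exact estimator, and the weight function and regularization strength are arbitrary:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
# Toy high-dimensional "descriptor" matrix; only the first 3 descriptors matter.
X = rng.normal(size=(120, 30))
logit = X[:, 0] + X[:, 1] - X[:, 2]
y = (logit + 0.3 * rng.normal(size=120) > 0).astype(int)

# Step 1: an initial ridge fit supplies per-descriptor weights.
init = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(X, y)
w = np.abs(init.coef_.ravel()) + 1e-8

# Step 2: adaptive L1 = plain L1 on rescaled descriptors X_j * w_j,
# which penalizes small-weight descriptors more heavily.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5,
                           max_iter=1000).fit(X * w, y)
beta = lasso.coef_.ravel() * w        # map back to the original scale
selected = np.flatnonzero(beta != 0)  # surviving descriptors
print(len(selected), selected[:5])
```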

  20. [Evaluation of new and emerging health technologies. Proposal for classification].

    PubMed

    Prados-Torres, J D; Vidal-España, F; Barnestein-Fonseca, P; Gallo-García, C; Irastorza-Aldasoro, A; Leiva-Fernández, F

    2011-01-01

    Review and develop a proposal for the classification of health technologies (HT) evaluated by the Health Technology Assessment Agencies (HTAA). Peer review by the HTAA of the previously proposed classification of HT. Analysis of their input and suggestions for amendments. Construction of a new classification. Pilot study with physicians. Andalusian Public Health System. Spanish HTAA. Experts from HTAA. Tutors of family medicine residents. Update of the HT classification previously made by the research team. Peer review by the Spanish HTAA. Qualitative and quantitative analysis of responses. Construction of a new classification and a pilot study based on 12 evaluation reports of the HTAA. We obtained 11 thematic categories, classified into 6 major groups: 1, prevention technologies; 2, diagnostic technologies; 3, therapeutic technologies; 4, diagnostic and therapeutic technologies; 5, organizational technologies; and 6, knowledge management and quality of care. In the pilot study there was good concordance in the classification of 8 of the 12 reports reviewed by physicians. The experts agreed on 11 thematic categories of HT. A new classification of HT with double entry (nature and purpose of the HT) is proposed. APPLICABILITY: According to the experts, the classification of the work of the HTAA may represent a useful tool for transferring and managing knowledge. Moreover, an adequate classification of the HTAA reports would help clinicians and other potential users to locate them, which can facilitate their dissemination. Copyright © 2010 SECA. Published by Elsevier Espana. All rights reserved.

  1. An AIS-Based E-mail Classification Method

    NASA Astrophysics Data System (ADS)

    Qing, Jinjian; Mao, Ruilong; Bie, Rongfang; Gao, Xiao-Zhi

    This paper proposes a new e-mail classification method based on the Artificial Immune System (AIS), which is endowed with good diversity and self-adaptive ability through immune learning, immune memory, and immune recognition. In our method, the features of spam and non-spam extracted from the training sets are combined, and the number of false positives (non-spam messages that are incorrectly classified as spam) can be reduced. The experimental results demonstrate that this method is effective in reducing the false positive rate.
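A minimal flavor of AIS-style classification is negative selection: generate random detectors and keep only those that match no "self" (non-spam) signature, so known-good mail is never flagged. The sketch below is a toy illustration on random bit signatures, not the paper's method; the signature length and match threshold are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical 8-bit feature signatures: "self" = non-spam training messages.
self_set = rng.integers(0, 2, size=(40, 8))

def matches(a, b, r=7):
    """A detector matches a signature when at least r of 8 bits agree."""
    return (a == b).sum() >= r

# Negative selection: keep random detectors that match no self signature.
detectors = [d for d in rng.integers(0, 2, size=(500, 8))
             if not any(matches(d, s) for s in self_set)]

def is_spam(msg):
    """Flag a message as spam if any surviving detector matches it."""
    return any(matches(d, msg) for d in detectors)

# By construction, a known non-spam signature is never flagged.
print(is_spam(self_set[0]))
```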

  2. On the Discriminant Analysis in the 2-Populations Case

    NASA Astrophysics Data System (ADS)

    Rublík, František

    2008-01-01

    The empirical Bayes Gaussian rule, which in the normal case yields good values of the probability of total error, may yield high values of the maximum probability of error. From this point of view, the presented modified version of the classification rule of Broffitt, Randles and Hogg appears to be superior. The modification included in this paper is termed the WR method, and the choice of its weights is discussed. The mentioned methods are also compared with the K nearest neighbours classification rule.
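The comparison above can be mimicked by contrasting an empirical Gaussian (quadratic discriminant) rule with K nearest neighbours on two simulated populations, scoring each rule by its worst per-class error rather than its total error (toy data; the paper's WR modification is not reproduced here):

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(7)
# Two simulated populations with unequal covariances.
X0 = rng.normal(0.0, 1.0, size=(200, 2))
X1 = rng.normal(1.5, 2.0, size=(200, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

qda = QuadraticDiscriminantAnalysis().fit(X, y)      # empirical Gaussian rule
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)  # K nearest neighbours

# Per-class error probabilities; the maximum over classes is the criterion
# under which a rule with good total error can still look poor.
for clf in (qda, knn):
    errs = [1 - clf.score(X[y == c], y[y == c]) for c in (0, 1)]
    print(type(clf).__name__, max(errs))
```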

  3. Classification of PSN J12015272-1852183 as a young type Ic SN

    NASA Astrophysics Data System (ADS)

    Harutyunyan, A.; Benetti, S.; Pastorello, A.; Cappellaro, E.; Tomasella, L.; Ochner, P.; Turatto, M.

    2013-06-01

    We report the spectroscopic classification (range 335-785 nm; resolution 1.5 nm) of PSN J12015272-1852183 discovered by the CHASE project on June 22.12 UT. The spectrogram obtained on June 23.88 UT with the TNG Telescope (+Dolores), shows that this is a type-Ic supernova. A good match is found with the type-Ic supernova 1994I (Millard et al 1999, ApJ 527, 746) at about six days before maximum light.

  4. Innovative vehicle classification strategies : using LIDAR to do more for less.

    DOT National Transportation Integrated Search

    2012-06-23

    This study examines LIDAR (light detection and ranging) based vehicle classification and classification : performance monitoring. First, we develop a portable LIDAR based vehicle classification system that can : be rapidly deployed, and then we use t...

  5. Impact of atmospheric correction and image filtering on hyperspectral classification of tree species using support vector machine

    NASA Astrophysics Data System (ADS)

    Shahriari Nia, Morteza; Wang, Daisy Zhe; Bohlman, Stephanie Ann; Gader, Paul; Graves, Sarah J.; Petrovic, Milenko

    2015-01-01

    Hyperspectral images can be used to identify savannah tree species at the landscape scale, which is a key step in measuring biomass and carbon, and in tracking changes in species distributions, including invasive species, in these ecosystems. Before automated species mapping can be performed, image processing and atmospheric correction are often performed, which can potentially affect the performance of classification algorithms. We determine how three processing and correction techniques (atmospheric correction, Gaussian filters, and shade/green vegetation filters) affect the prediction accuracy of pixel-level classification of tree species from airborne visible/infrared imaging spectrometer imagery of longleaf pine savanna in Central Florida, United States. Species classification using fast line-of-sight atmospheric analysis of spectral hypercubes (FLAASH) atmospheric correction outperformed ATCOR in the majority of cases. Green vegetation (normalized difference vegetation index) and shade (near-infrared) filters did not increase classification accuracy when applied to large and continuous patches of specific species. Finally, applying a Gaussian filter reduces interband noise and increases species classification accuracy. Using the optimal preprocessing steps, our classification accuracy for six species classes is about 75%.
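The green-vegetation and shade filters mentioned above reduce, in essence, to thresholding an NDVI image and a near-infrared band; a minimal numpy sketch (the cut-off values are arbitrary assumptions, not the study's settings):

```python
import numpy as np

rng = np.random.default_rng(8)
# Toy red and near-infrared reflectance bands for a 4x4 pixel window.
red = rng.uniform(0.05, 0.3, size=(4, 4))
nir = rng.uniform(0.2, 0.6, size=(4, 4))

ndvi = (nir - red) / (nir + red)   # normalized difference vegetation index
green_mask = ndvi > 0.3            # hypothetical green-vegetation cut-off
shade_mask = nir < 0.25            # hypothetical shade cut-off on the NIR band
# Pixels kept for classification pass the green filter and are not shaded.
keep = green_mask & ~shade_mask
print(keep.sum(), "of", keep.size, "pixels kept")
```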

  6. Pleiotrophin levels are associated with improved coronary collateral circulation.

    PubMed

    Türker Duyuler, Pinar; Duyuler, Serkan; Gök, Murat; Kundi, Harun; Topçuoğlu, Canan; Güray, Ümit

    2018-01-01

    Elucidation of the underlying mechanisms of angiogenesis and arteriogenesis in coronary collateral formation is necessary for new therapies. Pleiotrophin is a secreted multifunctional cytokine and associated with the formation of functional cardiovascular neovascularization in a series of experimental animal models. We aimed to evaluate the serum levels of pleiotrophin in patients with chronic total coronary artery occlusion and poor or good collateral development. We included 88 consecutive patients (mean age of the entire population: 63.7±12.1 years, 68 male patients) with stable angina pectoris who underwent coronary angiography and had chronic total occlusion in at least one major coronary artery. Collateral grading was performed according to the Rentrop classification. After grading, patients were divided into poor collateral circulation (Rentrop grade 0 and 1) and good collateral circulation (Rentrop grades 2 and 3) groups. Serum pleiotrophin levels were measured using a commercial human ELISA kit. Fifty-eight patients had good and 30 patients had poor coronary collaterals. The good collateral group had higher serum pleiotrophin levels than the poor collateral group (690.1±187.9 vs. 415.3±165.9 ng/ml, P<0.001). Pleiotrophin levels were higher with higher Rentrop grade (P<0.001). In multivariate analysis, increased pleiotrophin was associated independently with good collateral development (odds ratio: 1.007; confidence interval: 1.003-1.012; P=0.002). This study showed that increased serum pleiotrophin levels are associated with better developed coronary collateral circulation. Further studies are needed to better understand the relationship.

  7. Time-reversal imaging for classification of submerged elastic targets via Gibbs sampling and the Relevance Vector Machine.

    PubMed

    Dasgupta, Nilanjan; Carin, Lawrence

    2005-04-01

    Time-reversal imaging (TRI) is analogous to matched-field processing, although TRI is typically very wideband and is appropriate for subsequent target classification (in addition to localization). Time-reversal techniques, as applied to acoustic target classification, are highly sensitive to channel mismatch. Hence, it is crucial to estimate the channel parameters before time-reversal imaging is performed. The channel-parameter statistics are estimated here by applying a geoacoustic inversion technique based on Gibbs sampling. The maximum a posteriori (MAP) estimate of the channel parameters are then used to perform time-reversal imaging. Time-reversal implementation requires a fast forward model, implemented here by a normal-mode framework. In addition to imaging, extraction of features from the time-reversed images is explored, with these applied to subsequent target classification. The classification of time-reversed signatures is performed by the relevance vector machine (RVM). The efficacy of the technique is analyzed on simulated in-channel data generated by a free-field finite element method (FEM) code, in conjunction with a channel propagation model, wherein the final classification performance is demonstrated to be relatively insensitive to the associated channel parameters. The underlying theory of Gibbs sampling and TRI are presented along with the feature extraction and target classification via the RVM.

  8. Dimensionality-varied deep convolutional neural network for spectral-spatial classification of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Qu, Haicheng; Liang, Xuejian; Liang, Shichao; Liu, Wanjun

    2018-01-01

    Many methods of hyperspectral image classification have been proposed recently, and the convolutional neural network (CNN) achieves outstanding performance. However, spectral-spatial classification with a CNN requires an excessively large model, tremendous computation, and a complex network, and a CNN is generally unable to use the noisy bands caused by water-vapor absorption. A dimensionality-varied CNN (DV-CNN) is proposed to address these issues. There are four stages in DV-CNN, and the dimensionalities of the spectral-spatial feature maps vary with the stages. DV-CNN can reduce the computation and simplify the structure of the network. All feature maps are processed by more kernels in higher stages to extract more precise features. DV-CNN also improves the classification accuracy and enhances the robustness to water-vapor absorption bands. The experiments are performed on the Indian Pines and Pavia University scene data sets. The classification performance of DV-CNN is compared with state-of-the-art methods, including variations of CNN as well as traditional and other deep learning methods. A performance analysis of DV-CNN itself is also carried out. The experimental results demonstrate that DV-CNN outperforms state-of-the-art methods for spectral-spatial classification and is also robust to water-vapor absorption bands. Moreover, reasonable parameter selection is effective in improving classification accuracy.

  9. Gender classification under extended operating conditions

    NASA Astrophysics Data System (ADS)

    Rude, Howard N.; Rizki, Mateen

    2014-06-01

    Gender classification is a critical component of a robust image security system. Many techniques exist to perform gender classification using facial features. In contrast, this paper explores gender classification using body features extracted from clothed subjects. Several of the most effective types of features for gender classification identified in literature were implemented and applied to the newly developed Seasonal Weather And Gender (SWAG) dataset. SWAG contains video clips of approximately 2000 samples of human subjects captured over a period of several months. The subjects are wearing casual business attire and outer garments appropriate for the specific weather conditions observed in the Midwest. The results from a series of experiments are presented that compare the classification accuracy of systems that incorporate various types and combinations of features applied to multiple looks at subjects at different image resolutions to determine a baseline performance for gender classification.

  10. Using Optimization to Improve Test Planning

    DTIC Science & Technology

    2017-09-01

    With modifications to make the input more user-friendly and to display the output differently, the test and evaluation schedule optimization model would be a good tool for test and evaluation schedulers.

  11. 14 CFR Sec. 19-4 - Service classes.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... a composite of first class, coach, and mixed passenger/cargo service. The following classifications... integral part of services performed pursuant to published flight schedules. The following classifications... Classifications Sec. 19-4 Service classes. The statistical classifications are designed to reflect the operating...

  12. 14 CFR Sec. 19-4 - Service classes.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... a composite of first class, coach, and mixed passenger/cargo service. The following classifications... integral part of services performed pursuant to published flight schedules. The following classifications... Classifications Sec. 19-4 Service classes. The statistical classifications are designed to reflect the operating...

  13. Toward Automated Cochlear Implant Fitting Procedures Based on Event-Related Potentials.

    PubMed

    Finke, Mareike; Billinger, Martin; Büchner, Andreas

    Cochlear implants (CIs) restore hearing to the profoundly deaf by direct electrical stimulation of the auditory nerve. To provide an optimal electrical stimulation pattern, the CI must be individually fitted to each CI user. To date, CI fitting is primarily based on subjective feedback from the user. However, not all CI users are able to provide such feedback, for example, small children. This study explores the possibility of using the electroencephalogram (EEG) to objectively determine whether CI users are able to hear differences in tones presented to them, which has potential applications in CI fitting or closed-loop systems. Deviant and standard stimuli were presented to 12 CI users in an active auditory oddball paradigm. The EEG was recorded in two sessions, and classification of the EEG data was performed with shrinkage linear discriminant analysis. Also, the impact of CI artifact removal on classification performance and the possibility of reusing a trained classifier in future sessions were evaluated. Overall, classification performance was above chance level for all participants, although performance varied considerably between participants. Also, artifacts were successfully removed from the EEG without impairing classification performance. Finally, reuse of the classifier causes only a small loss in classification performance. Our data provide first evidence that EEG can be automatically classified on a single-trial basis in CI users. Despite the slightly poorer classification performance over sessions, the classifier and CI artifact correction appear stable over successive sessions. Thus, classifier and artifact-correction weights can be reused without repeating the set-up procedure in every session, which makes the technique easier to apply. With our present data, we can show successful classification of event-related cortical potential patterns in CI users. In the future, this has the potential to objectify and automate parts of CI fitting procedures.
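Single-trial classification with shrinkage LDA, as used above, can be sketched with scikit-learn's regularized LDA (toy ERP-like feature vectors; the deviant effect size and the train/test split are arbitrary assumptions, not the study's pipeline):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(9)
# Toy single-trial feature vectors (e.g. flattened ERP time windows):
# 60 standard and 60 deviant trials with a small additive component.
X = rng.normal(size=(120, 40))
y = np.array([0] * 60 + [1] * 60)
X[y == 1, :10] += 0.8                     # hypothetical deviant ERP effect

# Shrinkage LDA: regularized covariance estimate, suited to few-trial EEG data.
lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
lda.fit(X[::2], y[::2])                   # train on every other trial
acc = lda.score(X[1::2], y[1::2])         # score on the held-out trials
print(acc)
```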

  14. Reliability testing of the Larsen and Sharp classifications for rheumatoid arthritis of the elbow.

    PubMed

    Jew, Nicholas B; Hollins, Anthony M; Mauck, Benjamin M; Smith, Richard A; Azar, Frederick M; Miller, Robert H; Throckmorton, Thomas W

    2017-01-01

    Two popular systems for classifying rheumatoid arthritis affecting the elbow are the Larsen and Sharp schemes. To our knowledge, no study has investigated the reliability of these 2 systems. We compared the intraobserver and interobserver agreement of the 2 systems to determine whether one is more reliable than the other. The radiographs of 45 patients diagnosed with rheumatoid arthritis affecting the elbow were evaluated. Anteroposterior and lateral radiographs were deidentified and distributed to 6 evaluators (4 fellowship-trained upper extremity surgeons and 2 orthopedic trainees). Each evaluator graded all 45 radiographs according to the Larsen and Sharp scoring methods on 2 occasions, at least 2 weeks apart. Overall intraobserver reliability was 0.93 (95% confidence interval [CI], 0.90-0.95) for the Larsen system and 0.92 (95% CI, 0.86-0.96) for the Sharp classification, both indicating substantial agreement. Overall interobserver reliability was 0.70 (95% CI, 0.60-0.80) for the Larsen classification and 0.68 (95% CI, 0.54-0.81) for the Sharp system, both indicating good agreement. There were no significant differences in the intraobserver or interobserver reliability of the systems overall and no significant differences in reliability between attending surgeons and trainees for either classification system. The Larsen and Sharp systems both show substantial intraobserver reliability and good interobserver agreement for the radiographic classification of rheumatoid arthritis affecting the elbow. Differences in training level did not result in substantial variances in reliability for either system. We conclude that both systems can be reliably used to evaluate rheumatoid arthritis of the elbow by observers of varying training levels. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
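    The intra- and interobserver agreement figures above are κ-type statistics. A minimal sketch of Cohen's κ for two raters is shown below; the gradings are hypothetical, not the study's data.

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    p_obs = (r1 == r2).mean()                   # observed agreement
    # Chance agreement from each rater's marginal category frequencies
    p_exp = sum((r1 == c).mean() * (r2 == c).mean()
                for c in np.union1d(r1, r2))
    return (p_obs - p_exp) / (1 - p_exp)

# Two raters grading ten radiographs into stages 0-2 (hypothetical data)
rater1 = [0, 0, 1, 1, 2, 2, 0, 1, 2, 1]
rater2 = [0, 0, 1, 2, 2, 2, 0, 1, 2, 0]
kappa = cohens_kappa(rater1, rater2)
```

    Here the raters agree on 8 of 10 radiographs (p_obs = 0.8) against a chance agreement of 0.32, giving κ ≈ 0.71, roughly the interobserver level reported above.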

  15. Reliability of the European Society of Human Reproduction and Embryology/European Society for Gynaecological Endoscopy and American Society for Reproductive Medicine classification systems for congenital uterine anomalies detected using three-dimensional ultrasonography.

    PubMed

    Ludwin, Artur; Ludwin, Inga; Kudla, Marek; Kottner, Jan

    2015-09-01

    To estimate the inter-rater/intrarater reliability of the European Society of Human Reproduction and Embryology/European Society for Gynaecological Endoscopy (ESHRE-ESGE) classification of congenital uterine malformations and to compare the results with the reliability of the American Society for Reproductive Medicine (ASRM) classification supplemented with additional morphometric criteria. Reliability/agreement study. Private clinic. Uterine malformations (n = 50 patients, consecutively included) and normal uterus (n = 62 women, randomly selected) constituted the study sample. These were classified based on real-time three-dimensional ultrasound single-volume transvaginal (or transrectal in the case of virgins, 4 cases) ultrasonography findings, which were assessed by an expert rater based on the ESHRE-ESGE criteria. The samples were obtained from women of reproductive age. Unprocessed three-dimensional datasets were independently evaluated offline by two experienced, blinded raters using both classification systems. The κ-values and proportions of agreement. Standardized interpretation indicated that the ESHRE-ESGE system has substantial/good or almost perfect/very good reliability (κ > 0.60 and κ > 0.80), but interpretation against clinically relevant κ cutoffs showed insufficient reliability for clinical use (κ < 0.90), especially in the diagnosis of septate uterus. The ASRM system had sufficient reliability (κ > 0.95). The low reliability of the ESHRE-ESGE system may lead to a lack of consensus about the management of common uterine malformations and to biased research interpretations. The use of the ASRM classification, supplemented with simple morphometric criteria, may be preferred if its sufficient reliability can be confirmed in real time in a large sample. Copyright © 2015 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  16. Inter-observer variance with the diagnosis of myelodysplastic syndromes (MDS) following the 2008 WHO classification.

    PubMed

    Font, P; Loscertales, J; Benavente, C; Bermejo, A; Callejas, M; Garcia-Alonso, L; Garcia-Marcilla, A; Gil, S; Lopez-Rubio, M; Martin, E; Muñoz, C; Ricard, P; Soto, C; Balsalobre, P; Villegas, A

    2013-01-01

    Morphology is the basis of the diagnosis of myelodysplastic syndromes (MDS). The WHO classification offers prognostic information and helps with treatment decisions. However, morphological changes are subject to potential inter-observer variance. The aim of our study was to explore the reliability of the 2008 WHO classification of MDS by reviewing 100 samples previously diagnosed with MDS using the 2001 WHO criteria. Specimens were collected from 10 hospitals and were evaluated by 10 morphologists, working in five pairs. Each observer evaluated 20 samples, and each sample was analyzed independently by two morphologists. The second observer was blinded to the clinical and laboratory data, except for the peripheral blood (PB) counts. Nineteen cases were considered unclassified MDS (MDS-U) by the 2001 WHO classification, but only three remained MDS-U under the 2008 WHO proposal. Discordance was observed in 26 of the 95 samples considered suitable (27%). Although a high number of observers took part, the rate of discordance was quite similar among the five pairs. The inter-observer concordance was very good for refractory anemia with excess blasts type 1 (RAEB-1) (10 of 12 cases, 84%) and RAEB-2 (nine of 10 cases, 90%), and also good for refractory cytopenia with multilineage dysplasia (37 of 50 cases, 74%). However, the categories with unilineage dysplasia were not reproducible in most of the cases. The rate of concordance was 40% for refractory cytopenia with unilineage dysplasia (two of five cases) and 25% for RA with ring sideroblasts (two of eight). Our results show that the 2008 WHO classification gives a more accurate stratification of MDS but also illustrate the difficulty of diagnosing MDS with unilineage dysplasia.

  17. Classification of debtor credit status and determination of the amount of credit risk using a linear discriminant function

    NASA Astrophysics Data System (ADS)

    Aidi, Muhammad Nur; Sari, Resty Indah

    2012-05-01

    A credit decision made by a bank or another creditor carries a risk, called credit risk. Credit risk is an investor's risk of loss arising from a borrower who does not make payments as promised. Substantial credit risk can lead to losses for both the bank and the debtor. Minimizing this problem requires further study to identify a potential new customer before the credit decision is made. Debtors can be identified using various analytical approaches, one of which is discriminant analysis. In this study, discriminant analysis is used to classify whether a debtor belongs to the good-credit or bad-credit group. Before the discriminant function is built, the explanatory variables must be selected; the purpose of this selection is to choose the variables that discriminate between the groups maximally. Variable selection in this study uses difference tests: a chi-square test of proportions for categorical variables and stepwise discriminant analysis for numeric variables. The result of this study is two discriminant functions that can identify new debtors. The selected variables that maximally discriminate the two groups of debtors are status of existing checking account, credit history, credit amount, installment rate as a percentage of disposable income, sex, age in years, other installment plans, and number of people liable to provide maintenance. This classification achieves a reasonably good accuracy rate of 74.70%. Debtor classification using discriminant analysis carries a fairly small risk level, ranging between 14.992% and 17.608%. Given this credit-risk rate, discriminant analysis can be used effectively to classify credit status.
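    A two-group Fisher discriminant of the kind described can be sketched as follows; the predictors and group means are invented stand-ins for the study's credit variables, not its data.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical standardized numeric predictors (e.g. credit amount, age,
# installment rate) for 150 good-credit and 150 bad-credit debtors
good = rng.normal([0.0, 0.0, 0.0], 1.0, (150, 3))
bad = rng.normal([1.0, -0.8, 0.6], 1.0, (150, 3))
X = np.vstack([good, bad])
y = np.array([0] * 150 + [1] * 150)          # 0 = good credit, 1 = bad credit

# Fisher's discriminant: weights maximizing between- to within-group spread
mu_g, mu_b = good.mean(axis=0), bad.mean(axis=0)
Sw = np.cov(good, rowvar=False) + np.cov(bad, rowvar=False)  # within-group scatter
w = np.linalg.solve(Sw, mu_b - mu_g)

# Classify by cutting the discriminant scores midway between group means
cutoff = ((good @ w).mean() + (bad @ w).mean()) / 2
pred = (X @ w > cutoff).astype(int)
accuracy = (pred == y).mean()                # apparent classification accuracy
misclassification = 1 - accuracy             # crude proxy for the risk rate
```

    The apparent accuracy here is optimistic because it is measured on the training data; a held-out sample or cross-validation would give an honest estimate.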

  18. Seasonal trends in separability of leaf reflectance spectra for Ailanthus altissima and four other tree species

    NASA Astrophysics Data System (ADS)

    Burkholder, Aaron

    This project investigated the spectral separability of the invasive species Ailanthus altissima, commonly called tree of heaven, from four native tree species. Leaves were collected from Ailanthus and the four native species from May 13 through August 24, 2008, and spectral reflectance factor measurements were gathered for each tree using an ASD (Boulder, Colorado) FieldSpec Pro full-range spectroradiometer. The original data covered the range from 350-2500 nm, with one reflectance measurement per 1-nm wavelength interval. To reduce dimensionality, the measurements were resampled to the actual resolution of the spectrometer's sensors, and regions of atmospheric absorption were removed. Continuum removal was performed on the reflectance data, resulting in a second dataset. For both the reflectance and continuum-removed datasets, least angle regression (LARS) and random forest classification were used to identify a single set of optimal wavelengths across all sampled dates, a set of optimal wavelengths for each date, and the dates for which Ailanthus is most separable from the other species. Classification accuracy was found to vary with both the dates and the bands used. Contrary to the expectation that early spring would provide the best separability, the lowest classification error was observed on July 22 for the reflectance data, and on May 13, July 11, and August 1 for the continuum-removed data. This suggests that July and August are also potentially good months for species differentiation. Applying continuum removal reduced classification error in many cases, although not consistently. Band selection appears to be more important for reflectance data, in that it yields a greater improvement in classification accuracy, and LARS appears to be an effective band-selection tool. The optimal spectral bands were selected from across the spectrum, often including bands from the blue (401-431 nm), NIR (1115 nm), and SWIR (1985-1995 nm) regions, suggesting that hyperspectral sensors with broad wavelength sensitivity are important for mapping and identification of Ailanthus.
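    As a rough illustration of selecting wavelength bands for separability, the sketch below ranks synthetic "bands" by a per-band Fisher ratio. This is a deliberately simpler criterion than the LARS and random forest methods used in the study, and the band positions and reflectance values are invented.

```python
import numpy as np

def fisher_ratio(x0, x1):
    """Per-band separability: squared mean gap over summed variances."""
    return (x0.mean() - x1.mean()) ** 2 / (x0.var() + x1.var() + 1e-12)

rng = np.random.default_rng(1)
n_bands = 50
# Synthetic leaf spectra for two species; genuine separation exists
# only in bands 10-12 and 40 (hypothetical positions)
ailanthus = rng.normal(0.3, 0.05, (30, n_bands))
native = rng.normal(0.3, 0.05, (30, n_bands))
for b in (10, 11, 12, 40):
    ailanthus[:, b] += 0.15

scores = np.array([fisher_ratio(ailanthus[:, b], native[:, b])
                   for b in range(n_bands)])
top_bands = np.argsort(scores)[::-1][:4]     # best-separating bands
```

    Ranking bands independently, as here, ignores correlations between neighboring bands, which is one reason methods such as LARS that account for redundancy tend to select more compact band sets.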

  19. A Novel Characteristic Frequency Bands Extraction Method for Automatic Bearing Fault Diagnosis Based on Hilbert Huang Transform

    PubMed Central

    Yu, Xiao; Ding, Enjie; Chen, Chunxu; Liu, Xiaoming; Li, Li

    2015-01-01

    Because roller element bearing (REB) failures cause unexpected machinery breakdowns, their fault diagnosis has attracted considerable research attention. Established fault feature extraction methods focus on statistical characteristics of the vibration signal, an approach that loses sight of the continuous waveform features. To address this weakness, this article proposes a novel frequency-band feature extraction method, named Window Marginal Spectrum Clustering (WMSC), to select salient features from the marginal spectrum of vibration signals obtained by the Hilbert–Huang Transform (HHT). In WMSC, a sliding window is used to divide an entire HHT marginal spectrum (HMS) into window spectra, after which the Rand Index (RI) clustering criterion is used to evaluate each window. The windows returning higher RI values are selected to construct characteristic frequency bands (CFBs). Next, a hybrid REB fault diagnosis scheme is constructed, termed HHT-WMSC-SVM after its elements (HHT, WMSC, and support vector machines). The effectiveness of HHT-WMSC-SVM is validated by running a series of experiments on REB defect datasets from the Bearing Data Center of Case Western Reserve University (CWRU). The test results demonstrate three major advantages of the novel method. First, the fault classification accuracy of the HHT-WMSC-SVM model is higher than that of HHT-SVM and of ST-SVM, a method that combines statistical characteristics with SVM. Second, with Gaussian white noise added to the original REB defect dataset, the HHT-WMSC-SVM model maintains high classification accuracy, while the classification accuracy of the ST-SVM and HHT-SVM models is significantly reduced. Third, the fault classification accuracy of HHT-WMSC-SVM can exceed 95% for a Pmin range of 500–800 and an m range of 50–300 on the REB defect dataset with Gaussian white noise added at a signal-to-noise ratio (SNR) of 5. Experimental results indicate that the proposed WMSC method yields high REB fault classification accuracy and good robustness to Gaussian white noise. PMID:26540059
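    The core WMSC idea (score each sliding window of the marginal spectrum by how well a grouping computed from that window alone matches the fault labels, then keep high-scoring windows) can be sketched on synthetic spectra. The Rand Index is computed as defined, but the within-window "clustering" here is a crude median split on window energy rather than a full clustering algorithm, and the spectra are invented.

```python
import numpy as np

def rand_index(a, b):
    """Rand Index: fraction of sample pairs on which two labelings agree."""
    a, b = np.asarray(a), np.asarray(b)
    same_a = a[:, None] == a[None, :]
    same_b = b[:, None] == b[None, :]
    iu = np.triu_indices(len(a), k=1)           # each unordered pair once
    return (same_a[iu] == same_b[iu]).mean()

rng = np.random.default_rng(2)
n_sig, n_bins, win = 40, 200, 20
labels = np.array([0] * 20 + [1] * 20)          # two fault classes
# Synthetic marginal spectra: class 1 carries extra energy in bins 60-79
spectra = rng.normal(1.0, 0.2, (n_sig, n_bins))
spectra[labels == 1, 60:80] += 1.0

ri_per_window = []
for start in range(0, n_bins, win):
    energy = spectra[:, start:start + win].sum(axis=1)
    # Crude two-group "clustering" of window energies: split at the median
    clusters = (energy > np.median(energy)).astype(int)
    ri_per_window.append(rand_index(clusters, labels))

best_start = int(np.argmax(ri_per_window)) * win  # start bin of best window
```

    Windows without class-related energy score near 0.5 (chance pair agreement), while the window covering the discriminative bins scores near 1.0 and would be kept as a characteristic frequency band.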

  20. A Novel Characteristic Frequency Bands Extraction Method for Automatic Bearing Fault Diagnosis Based on Hilbert Huang Transform.

    PubMed

    Yu, Xiao; Ding, Enjie; Chen, Chunxu; Liu, Xiaoming; Li, Li

    2015-11-03

    Because roller element bearing (REB) failures cause unexpected machinery breakdowns, their fault diagnosis has attracted considerable research attention. Established fault feature extraction methods focus on statistical characteristics of the vibration signal, an approach that loses sight of the continuous waveform features. To address this weakness, this article proposes a novel frequency-band feature extraction method, named Window Marginal Spectrum Clustering (WMSC), to select salient features from the marginal spectrum of vibration signals obtained by the Hilbert-Huang Transform (HHT). In WMSC, a sliding window is used to divide an entire HHT marginal spectrum (HMS) into window spectra, after which the Rand Index (RI) clustering criterion is used to evaluate each window. The windows returning higher RI values are selected to construct characteristic frequency bands (CFBs). Next, a hybrid REB fault diagnosis scheme is constructed, termed HHT-WMSC-SVM after its elements (HHT, WMSC, and support vector machines). The effectiveness of HHT-WMSC-SVM is validated by running a series of experiments on REB defect datasets from the Bearing Data Center of Case Western Reserve University (CWRU). The test results demonstrate three major advantages of the novel method. First, the fault classification accuracy of the HHT-WMSC-SVM model is higher than that of HHT-SVM and of ST-SVM, a method that combines statistical characteristics with SVM. Second, with Gaussian white noise added to the original REB defect dataset, the HHT-WMSC-SVM model maintains high classification accuracy, while the classification accuracy of the ST-SVM and HHT-SVM models is significantly reduced. Third, the fault classification accuracy of HHT-WMSC-SVM can exceed 95% for a Pmin range of 500-800 and an m range of 50-300 on the REB defect dataset with Gaussian white noise added at a signal-to-noise ratio (SNR) of 5. Experimental results indicate that the proposed WMSC method yields high REB fault classification accuracy and good robustness to Gaussian white noise.
