Sample records for robust event classification

  1. Continuous robust sound event classification using time-frequency features and deep learning

    PubMed Central

    McLoughlin, Ian; Zhang, Haomin; Xie, Zhipeng; Song, Yan; Xiao, Wei; Phan, Huy

    2017-01-01

    The automatic detection and recognition of sound events by computers is a requirement for a number of emerging sensing and human computer interaction technologies. Recent advances in this field have been achieved by machine learning classifiers working in conjunction with time-frequency feature representations. This combination has achieved excellent accuracy for classification of discrete sounds. The ability to recognise sounds under real-world noisy conditions, called robust sound event classification, is an especially challenging task that has attracted recent research attention. Another aspect of real-world conditions is the classification of continuous, occluded or overlapping sounds, rather than classification of short isolated sound recordings. This paper addresses the classification of noise-corrupted, occluded, overlapped, continuous sound recordings. It first proposes a standard evaluation task for such sounds based upon a common existing method for evaluating isolated sound classification. It then benchmarks several high-performing isolated sound classifiers to operate with continuous sound data by incorporating an energy-based event detection front end. Results are reported for each tested system using the new task, to provide the first analysis of their performance for continuous sound event detection. In addition, it proposes and evaluates a novel Bayesian-inspired front end for the segmentation and detection of continuous sound recordings prior to classification. PMID:28892478

  2. Continuous robust sound event classification using time-frequency features and deep learning.

    PubMed

    McLoughlin, Ian; Zhang, Haomin; Xie, Zhipeng; Song, Yan; Xiao, Wei; Phan, Huy

    2017-01-01

    The automatic detection and recognition of sound events by computers is a requirement for a number of emerging sensing and human computer interaction technologies. Recent advances in this field have been achieved by machine learning classifiers working in conjunction with time-frequency feature representations. This combination has achieved excellent accuracy for classification of discrete sounds. The ability to recognise sounds under real-world noisy conditions, called robust sound event classification, is an especially challenging task that has attracted recent research attention. Another aspect of real-world conditions is the classification of continuous, occluded or overlapping sounds, rather than classification of short isolated sound recordings. This paper addresses the classification of noise-corrupted, occluded, overlapped, continuous sound recordings. It first proposes a standard evaluation task for such sounds based upon a common existing method for evaluating isolated sound classification. It then benchmarks several high-performing isolated sound classifiers to operate with continuous sound data by incorporating an energy-based event detection front end. Results are reported for each tested system using the new task, to provide the first analysis of their performance for continuous sound event detection. In addition, it proposes and evaluates a novel Bayesian-inspired front end for the segmentation and detection of continuous sound recordings prior to classification.
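
    Both records above describe an energy-based event detection front end that segments continuous audio before classification. A minimal sketch of such a front end follows; the frame sizes, the percentile noise-floor estimate, and the 10 dB margin are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def detect_events(signal, sr, frame_ms=25, hop_ms=10, margin_db=10.0):
        """Flag frames whose short-time energy exceeds an adaptive noise floor.

        Returns one boolean per frame; runs of True frames form candidate
        events to hand to the isolated-sound classifier. The percentile
        noise-floor estimate and all defaults are illustrative choices.
        """
        frame = int(sr * frame_ms / 1000)
        hop = int(sr * hop_ms / 1000)
        n_frames = 1 + max(0, (len(signal) - frame) // hop)
        energy = np.array([np.sum(signal[i * hop:i * hop + frame] ** 2)
                           for i in range(n_frames)])
        energy_db = 10.0 * np.log10(energy + 1e-12)
        noise_floor_db = np.percentile(energy_db, 20)  # crude noise-floor estimate
        return energy_db > noise_floor_db + margin_db
    ```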

  3. Building a robust vehicle detection and classification module

    NASA Astrophysics Data System (ADS)

    Grigoryev, Anton; Khanipov, Timur; Koptelov, Ivan; Bocharov, Dmitry; Postnikov, Vassily; Nikolaev, Dmitry

    2015-12-01

    The growing adoption of intelligent transportation systems (ITS) and autonomous driving requires robust real-time solutions for various event and object detection problems. Most of real-world systems still cannot rely on computer vision algorithms and employ a wide range of costly additional hardware like LIDARs. In this paper we explore engineering challenges encountered in building a highly robust visual vehicle detection and classification module that works under broad range of environmental and road conditions. The resulting technology is competitive to traditional non-visual means of traffic monitoring. The main focus of the paper is on software and hardware architecture, algorithm selection and domain-specific heuristics that help the computer vision system avoid implausible answers.

  4. The LSST Data Mining Research Agenda

    NASA Astrophysics Data System (ADS)

    Borne, K.; Becla, J.; Davidson, I.; Szalay, A.; Tyson, J. A.

    2008-12-01

    We describe features of the LSST science database that are amenable to scientific data mining, object classification, outlier identification, anomaly detection, image quality assurance, and survey science validation. The data mining research agenda includes: scalability (at petabyte scales) of existing machine learning and data mining algorithms; development of grid-enabled parallel data mining algorithms; designing a robust system for brokering classifications from the LSST event pipeline (which may produce 10,000 or more event alerts per night); multi-resolution methods for exploration of petascale databases; indexing of multi-attribute multi-dimensional astronomical databases (beyond spatial indexing) for rapid querying of petabyte databases; and more.

  5. Multimodal Sparse Coding for Event Detection

    DTIC Science & Technology

    2015-10-13

    classification tasks based on single modality. We present multimodal sparse coding for learning feature representations shared across multiple modalities...The shared representations are applied to multimedia event detection (MED) and evaluated in comparison to unimodal counterparts, as well as other...and video tracks from the same multimedia clip, we can force the two modalities to share a similar sparse representation whose benefit includes robust
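
    The snippet above is fragmentary, but the core idea of shared multimodal sparse coding can be illustrated by learning one dictionary over concatenated audio and video features, so each clip receives a single sparse code. The sketch below uses scikit-learn's DictionaryLearning; all feature shapes and parameters are illustrative stand-ins, not the report's setup.

    ```python
    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    # Illustrative stand-ins: per-clip audio and video feature vectors.
    rng = np.random.default_rng(0)
    audio_feats = rng.standard_normal((200, 40))   # e.g. MFCC statistics per clip
    video_feats = rng.standard_normal((200, 60))   # e.g. pooled visual descriptors

    # Joint sparse coding: concatenating the modalities forces one shared
    # code per clip, which is the essence of the shared-representation idea.
    X = np.hstack([audio_feats, video_feats])
    coder = DictionaryLearning(n_components=32, max_iter=50,
                               transform_algorithm="lasso_lars",
                               transform_alpha=0.1, random_state=0)
    shared_codes = coder.fit_transform(X)          # (200, 32) sparse codes
    ```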

  6. Secure access control and large scale robust representation for online multimedia event detection.

    PubMed

    Liu, Changyu; Lu, Bin; Li, Huiling

    2014-01-01

    We developed an online multimedia event detection (MED) system. However, there are a secure access control issue and a large-scale robust representation issue when we want to integrate traditional event detection algorithms into the online environment. For the first issue, we proposed a tree proxy-based and service-oriented access control (TPSAC) model based on the traditional role-based access control model. Verification experiments were conducted on the CloudSim simulation platform, and the results showed that the TPSAC model is suitable for the access control of dynamic online environments. For the second issue, inspired by the object-bank scene descriptor, we proposed a 1000-object-bank (1000OBK) event descriptor. Feature vectors of the 1000OBK were extracted from response pyramids of 1000 generic object detectors which were trained on standard annotated image datasets, such as the ImageNet dataset. A spatial bag-of-words tiling approach was then adopted to encode these feature vectors for bridging the gap between the objects and events. Furthermore, we performed experiments in the context of event classification on the challenging TRECVID MED 2012 dataset, and the results showed that the robust 1000OBK event descriptor outperforms the state-of-the-art approaches.
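
    A spatial bag-of-words tiling, as named above, pools per-location detector responses over a fixed grid so objects contribute to position-aware feature bins. A generic sketch, assuming max pooling and a 2x2 grid (the paper's exact pyramid and pooling operator are not given in the abstract):

    ```python
    import numpy as np

    def spatial_bow_tiling(response_map, n_tiles=2):
        """Max-pool an (H, W, n_detectors) response map over an
        n_tiles x n_tiles grid and concatenate the tile vectors."""
        H, W, D = response_map.shape
        feats = []
        for i in range(n_tiles):
            for j in range(n_tiles):
                tile = response_map[i * H // n_tiles:(i + 1) * H // n_tiles,
                                    j * W // n_tiles:(j + 1) * W // n_tiles]
                feats.append(tile.max(axis=(0, 1)))   # one value per detector
        return np.concatenate(feats)                  # length n_tiles^2 * D

    # e.g. 1000 detectors -> a 4000-dim event vector with a 2x2 tiling
    vec = spatial_bow_tiling(np.random.rand(64, 64, 1000))
    print(vec.shape)
    ```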

  7. Quality assurance: The 10-Group Classification System (Robson classification), induction of labor, and cesarean delivery.

    PubMed

    Robson, Michael; Murphy, Martina; Byrne, Fionnuala

    2015-10-01

    Quality assurance in labor and delivery is needed. The method must be simple and consistent, and be of universal value. It needs to be clinically relevant, robust, and prospective, and must incorporate epidemiological variables. The 10-Group Classification System (TGCS) is a simple method providing a common starting point for further detailed analysis within which all perinatal events and outcomes can be measured and compared. The system is demonstrated in the present paper using data for 2013 from the National Maternity Hospital in Dublin, Ireland. Interpretation of the classification can be easily taught. The standard table can provide much insight into the philosophy of care in the population of women studied and also provide information on data quality. With standardization of audit of events and outcomes, any differences in either sizes of groups, events or outcomes can be explained only by poor data collection, significant epidemiological variables, or differences in practice. In April 2015, WHO proposed that the TGCS (also known as the Robson classification) is used as a global standard for assessing, monitoring, and comparing cesarean delivery rates within and between healthcare facilities. Copyright © 2015. Published by Elsevier Ireland Ltd.

  8. Automatic Classification of volcano-seismic events based on Deep Neural Networks.

    NASA Astrophysics Data System (ADS)

    Titos Luzón, M.; Bueno Rodriguez, A.; Garcia Martinez, L.; Benitez, C.; Ibáñez, J. M.

    2017-12-01

    Seismic monitoring of active volcanoes is a popular remote sensing technique to detect seismic activity, often associated to energy exchanges between the volcano and the environment. As a result, seismographs register a wide range of volcano-seismic signals that reflect the nature and underlying physics of volcanic processes. Machine learning and signal processing techniques provide an appropriate framework to analyze such data. In this research, we propose a new classification framework for seismic events based on deep neural networks. Deep neural networks are composed of multiple processing layers, and can discover intrinsic patterns from the data itself. Internal parameters can be initialized using a greedy unsupervised pre-training stage, leading to an efficient training of fully connected architectures. We aim to determine the robustness of these architectures as classifiers of seven different types of seismic events recorded at "Volcán de Fuego" (Colima, Mexico). Two deep neural networks with different pre-training strategies are studied: stacked denoising autoencoder and deep belief networks. Results are compared to existing machine learning algorithms (SVM, Random Forest, Multilayer Perceptron). We used 5 LPC coefficients over three non-overlapping segments as training features in order to characterize temporal evolution, avoid redundancy and encode the signal, regardless of its duration. Experimental results show that deep architectures can classify seismic events with higher accuracy than classical algorithms, attaining up to 92% recognition accuracy. Pre-training initialization helps these models to detect events that occur simultaneously in time (such as explosions and rockfalls), increase robustness against noisy inputs, and provide better generalization. These results demonstrate deep neural networks are robust classifiers, and can be deployed in real-environments to monitor the seismicity of restless volcanoes.
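
    The fixed-length feature described above (5 LPC coefficients per non-overlapping segment) might be computed as in the following sketch; librosa's LPC routine is used for convenience, and the synthetic input stands in for a segmented seismic event.

    ```python
    import numpy as np
    import librosa

    def lpc_segment_features(signal, order=5, n_segments=3):
        """Concatenate LPC coefficients from non-overlapping segments,
        giving a fixed-length vector regardless of event duration."""
        feats = []
        for seg in np.array_split(np.asarray(signal, dtype=float), n_segments):
            a = librosa.lpc(seg, order=order)  # returns [1, a_1, ..., a_order]
            feats.append(a[1:])                # drop the leading 1
        return np.concatenate(feats)           # length order * n_segments

    event = np.random.randn(6000)              # stand-in for one seismic event
    print(lpc_segment_features(event).shape)   # (15,)
    ```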

  9. Secure Access Control and Large Scale Robust Representation for Online Multimedia Event Detection

    PubMed Central

    Liu, Changyu; Lu, Bin; Li, Huiling

    2014-01-01

    We developed an online multimedia event detection (MED) system. However, there are a secure access control issue and a large-scale robust representation issue when we want to integrate traditional event detection algorithms into the online environment. For the first issue, we proposed a tree proxy-based and service-oriented access control (TPSAC) model based on the traditional role-based access control model. Verification experiments were conducted on the CloudSim simulation platform, and the results showed that the TPSAC model is suitable for the access control of dynamic online environments. For the second issue, inspired by the object-bank scene descriptor, we proposed a 1000-object-bank (1000OBK) event descriptor. Feature vectors of the 1000OBK were extracted from response pyramids of 1000 generic object detectors which were trained on standard annotated image datasets, such as the ImageNet dataset. A spatial bag-of-words tiling approach was then adopted to encode these feature vectors for bridging the gap between the objects and events. Furthermore, we performed experiments in the context of event classification on the challenging TRECVID MED 2012 dataset, and the results showed that the robust 1000OBK event descriptor outperforms the state-of-the-art approaches. PMID:25147840

  10. Acoustic Event Detection and Classification

    NASA Astrophysics Data System (ADS)

    Temko, Andrey; Nadeu, Climent; Macho, Dušan; Malkin, Robert; Zieger, Christian; Omologo, Maurizio

    The human activity that takes place in meeting rooms or classrooms is reflected in a rich variety of acoustic events (AE), produced either by the human body or by objects handled by humans, so the determination of both the identity of sounds and their position in time may help to detect and describe that human activity. Indeed, speech is usually the most informative sound, but other kinds of AEs may also carry useful information, for example, clapping or laughing inside a speech, a strong yawn in the middle of a lecture, a chair moving or a door slam when the meeting has just started. Additionally, detection and classification of sounds other than speech may be useful to enhance the robustness of speech technologies like automatic speech recognition.

  11. A Hierarchical Convolutional Neural Network for vesicle fusion event classification.

    PubMed

    Li, Haohan; Mao, Yunxiang; Yin, Zhaozheng; Xu, Yingke

    2017-09-01

    Quantitative analysis of vesicle exocytosis and classification of different modes of vesicle fusion from fluorescence microscopy are of primary importance for biomedical research. In this paper, we propose a novel Hierarchical Convolutional Neural Network (HCNN) method to automatically identify vesicle fusion events in time-lapse Total Internal Reflection Fluorescence Microscopy (TIRFM) image sequences. Firstly, a detection and tracking method is developed to extract image patch sequences containing potential fusion events. Then, a Gaussian Mixture Model (GMM) is applied on each image patch of the patch sequence with outliers rejected for robust Gaussian fitting. By utilizing the high-level time-series intensity change features introduced by GMM and the visual appearance features embedded in some key moments of the fusion process, the proposed HCNN architecture is able to classify each candidate patch sequence into three classes: full fusion event, partial fusion event and non-fusion event. Finally, we validate the performance of our method on 9 challenging datasets that have been annotated by cell biologists, and our method achieves better performance compared with three previous methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
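
    The per-patch Gaussian fitting step might look like the sketch below, which fits an isotropic 2-D Gaussian plus offset to a patch; tracing the fitted amplitude and width over the patch sequence gives the intensity-change time series. The paper's outlier-rejection loop and exact parameterization are omitted, and all names are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def fit_gaussian_patch(patch):
        """Fit an isotropic 2-D Gaussian plus offset to an image patch."""
        H, W = patch.shape
        yy, xx = np.mgrid[0:H, 0:W]

        def model(coords, A, x0, y0, sigma, offset):
            x, y = coords
            return (A * np.exp(-((x - x0) ** 2 + (y - y0) ** 2)
                               / (2 * sigma ** 2)) + offset).ravel()

        p0 = [patch.max() - patch.min(), W / 2, H / 2, W / 4, patch.min()]
        popt, _ = curve_fit(model, (xx, yy), patch.ravel(), p0=p0, maxfev=5000)
        return popt  # amplitude, x0, y0, sigma, offset

    yy, xx = np.mgrid[0:15, 0:15]
    demo = 3.0 * np.exp(-((xx - 7) ** 2 + (yy - 8) ** 2) / 8.0) + 0.1
    print(fit_gaussian_patch(demo).round(2))   # ~ [3.0, 7.0, 8.0, 2.0, 0.1]
    ```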

  12. A Robust Real Time Direction-of-Arrival Estimation Method for Sequential Movement Events of Vehicles.

    PubMed

    Liu, Huawei; Li, Baoqing; Yuan, Xiaobing; Zhou, Qianwei; Huang, Jingchang

    2018-03-27

    Parameter estimation of sequential movement events of vehicles faces the challenges of noise interference and the demands of portable implementation. In this paper, we propose a robust direction-of-arrival (DOA) estimation method for the sequential movement events of vehicles based on a small Micro-Electro-Mechanical System (MEMS) microphone array system. Inspired by the incoherent signal-subspace method (ISM), the method proposed in this work employs multiple sub-bands, which are selected from the wideband signals with high magnitude-squared coherence, to track moving vehicles in the presence of wind noise. The field test results demonstrate that the proposed method performs better in estimating the DOA of a moving vehicle, even in the case of severe wind interference, than the narrowband multiple signal classification (MUSIC) method, the sub-band DOA estimation method, and the classical two-sided correlation transformation (TCT) method.
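
    For contrast with the proposed wideband method, the narrowband MUSIC baseline mentioned above can be sketched in a few lines for a uniform linear array; this is the textbook formulation, not the paper's microphone-array code.

    ```python
    import numpy as np

    def music_doa(X, n_sources, d=0.5, n_grid=361):
        """Narrowband MUSIC spectrum for a uniform linear array.

        X: (n_mics, n_snapshots) complex baseband snapshots.
        d: element spacing in wavelengths. Peaks in the returned
        spectrum mark the DOAs.
        """
        M = X.shape[0]
        R = X @ X.conj().T / X.shape[1]          # sample covariance
        eigvals, eigvecs = np.linalg.eigh(R)     # ascending eigenvalues
        En = eigvecs[:, : M - n_sources]         # noise subspace
        thetas = np.linspace(-90, 90, n_grid)
        p = np.empty(n_grid)
        for i, th in enumerate(thetas):
            a = np.exp(-2j * np.pi * d * np.arange(M) * np.sin(np.deg2rad(th)))
            p[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
        return thetas, p

    rng = np.random.default_rng(0)
    M, T, theta_true = 8, 200, 20.0              # mics, snapshots, true DOA
    a = np.exp(-2j * np.pi * 0.5 * np.arange(M) * np.sin(np.deg2rad(theta_true)))
    noise = rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T))
    X = np.outer(a, rng.standard_normal(T)) + 0.1 * noise
    th, p = music_doa(X, n_sources=1)
    print(th[np.argmax(p)])                      # close to 20 degrees
    ```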

  13. DARHT Multi-intelligence Seismic and Acoustic Data Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Garrison Nicole; Van Buren, Kendra Lu; Hemez, Francois M.

    The purpose of this report is to document the analysis of seismic and acoustic data collected at the Dual-Axis Radiographic Hydrodynamic Test (DARHT) facility at Los Alamos National Laboratory for robust, multi-intelligence decision making. The data utilized herein is obtained from two tri-axial seismic sensors and three acoustic sensors, resulting in a total of nine data channels. The goal of this analysis is to develop a generalized, automated framework to determine internal operations at DARHT using informative features extracted from measurements collected external to the facility. Our framework involves four components: (1) feature extraction, (2) data fusion, (3) classification, and finally (4) robustness analysis. Two approaches are taken for extracting features from the data. The first of these, generic feature extraction, involves extraction of statistical features from the nine data channels. The second approach, event detection, identifies specific events relevant to traffic entering and leaving the facility as well as explosive activities at DARHT and nearby explosive testing sites. Event detection is completed using a two stage method, first utilizing signatures in the frequency domain to identify outliers and second extracting short duration events of interest among these outliers by evaluating residuals of an autoregressive exogenous time series model. Features extracted from each data set are then fused to perform analysis with a multi-intelligence paradigm, where information from multiple data sets is combined to generate more information than is available through analysis of each independently. The fused feature set is used to train a statistical classifier and predict the state of operations to inform a decision maker. We demonstrate this classification using both generic statistical features and event detection and provide a comparison of the two methods. Finally, the concept of decision robustness is presented through a preliminary analysis where uncertainty is added to the system through noise in the measurements.
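
    The residual-based event extraction described in the second approach can be sketched with a plain autoregressive model: samples whose one-step residual exceeds a robust threshold are flagged. The lag order, the MAD-based scale, and the 5-sigma gate are illustrative; the report's ARX variant additionally uses exogenous inputs, which AutoReg supports via its exog argument.

    ```python
    import numpy as np
    from statsmodels.tsa.ar_model import AutoReg

    def ar_residual_events(x, lags=10, k=5.0):
        """Flag samples whose AR one-step residual exceeds k robust sigmas."""
        res = AutoReg(x, lags=lags).fit()
        r = res.resid
        sigma = 1.4826 * np.median(np.abs(r - np.median(r)))  # MAD scale
        return np.flatnonzero(np.abs(r) > k * sigma) + lags   # original indices

    x = np.random.randn(5000)
    x[3000:3010] += 8.0                      # short transient "event"
    print(ar_residual_events(x))             # indices near 3000
    ```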

  14. Acoustic signature recognition technique for Human-Object Interactions (HOI) in persistent surveillance systems

    NASA Astrophysics Data System (ADS)

    Alkilani, Amjad; Shirkhodaie, Amir

    2013-05-01

    Handling, manipulation, and placement of objects, hereon called Human-Object Interaction (HOI), in the environment generate sounds. Such sounds are readily identifiable by human hearing. However, in the presence of background environment noises, recognition of minute HOI sounds is challenging, though vital for improvement of multi-modality sensor data fusion in Persistent Surveillance Systems (PSS). Identification of HOI sound signatures can be used as a precursor to detection of pertinent threats that other sensor modalities may otherwise fail to detect. In this paper, we present a robust method for detection and classification of HOI events via clustering of features extracted from training HOI acoustic sound waves. In this approach, salient sound events are preliminarily identified and segmented from background via a sound energy tracking method. Upon this segmentation, the frequency spectral pattern of each sound event is modeled and its features are extracted to form a feature vector for training. To reduce the dimensionality of the training feature space, a Principal Component Analysis (PCA) technique is employed. To expedite classification of test feature vectors, kd-tree and Random Forest classifiers are trained for rapid classification of training sound waves. Each classifier employs a different similarity-distance matching technique for classification. Performance evaluations of the classifiers are compared for classification of a batch of training HOI acoustic signatures. Furthermore, to facilitate semantic annotation of acoustic sound events, a scheme based on Transducer Markup Language (TML) is proposed. The results demonstrate that the proposed approach is both reliable and effective, and can be extended to future PSS applications.
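
    The PCA-plus-kd-tree stage described above maps naturally onto a scikit-learn pipeline, sketched below with random stand-ins for the spectral HOI features; dimensions, class count, and neighbor count are illustrative.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(1)
    X_train = rng.standard_normal((300, 128))   # stand-in spectral features
    y_train = rng.integers(0, 4, 300)           # four stand-in HOI classes

    # PCA shrinks the feature space so the kd-tree lookup stays fast.
    clf = make_pipeline(PCA(n_components=16),
                        KNeighborsClassifier(n_neighbors=5, algorithm="kd_tree"))
    clf.fit(X_train, y_train)
    print(clf.predict(rng.standard_normal((2, 128))))
    ```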

  15. Real-time distributed fiber optic sensor for security systems: Performance, event classification and nuisance mitigation

    NASA Astrophysics Data System (ADS)

    Mahmoud, Seedahmed S.; Visagathilagar, Yuvaraja; Katsifolis, Jim

    2012-09-01

    The success of any perimeter intrusion detection system depends on three important performance parameters: the probability of detection (POD), the nuisance alarm rate (NAR), and the false alarm rate (FAR). The most fundamental parameter, POD, is normally related to a number of factors such as the event of interest, the sensitivity of the sensor, the installation quality of the system, and the reliability of the sensing equipment. The suppression of nuisance alarms without degrading sensitivity in fiber optic intrusion detection systems is key to maintaining acceptable performance. Signal processing algorithms that maintain the POD and eliminate nuisance alarms are crucial for achieving this. In this paper, a robust event classification system using supervised neural networks together with a level crossings (LCs) based feature extraction algorithm is presented for the detection and recognition of intrusion and non-intrusion events in a fence-based fiber-optic intrusion detection system. A level crossings algorithm is also used with a dynamic threshold to suppress torrential rain-induced nuisance alarms in a fence system. Results show that rain-induced nuisance alarms can be suppressed for rainfall rates in excess of 100 mm/hr with the simultaneous detection of intrusion events. The use of a level crossing based detection and novel classification algorithm is also presented for a buried pipeline fiber optic intrusion detection system for the suppression of nuisance events and discrimination of intrusion events. The sensor employed for both types of systems is a distributed bidirectional fiber-optic Mach-Zehnder (MZ) interferometer.
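
    The level-crossings feature named above reduces a sensor window to per-level crossing counts. A minimal sketch with a static level grid (the dynamic thresholding used for rain suppression is not shown):

    ```python
    import numpy as np

    def level_crossing_counts(x, levels):
        """Count crossings of each threshold level in a signal window;
        the per-level counts form the LC feature vector."""
        return np.array([
            np.count_nonzero(np.diff(np.sign(x - lvl)) != 0) for lvl in levels
        ])

    x = np.random.randn(4096)                 # stand-in sensor window
    levels = np.linspace(-3, 3, 13)           # illustrative level grid
    print(level_crossing_counts(x, levels))
    ```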

  16. Modeling time-to-event (survival) data using classification tree analysis.

    PubMed

    Linden, Ariel; Yarnold, Paul R

    2017-12-01

    Time to the occurrence of an event is often studied in health research. Survival analysis differs from other designs in that follow-up times for individuals who do not experience the event by the end of the study (called censored) are accounted for in the analysis. Cox regression is the standard method for analysing censored data, but the assumptions required of these models are easily violated. In this paper, we introduce classification tree analysis (CTA) as a flexible alternative for modelling censored data. Classification tree analysis is a "decision-tree"-like classification model that provides parsimonious, transparent (ie, easy to visually display and interpret) decision rules that maximize predictive accuracy, derives exact P values via permutation tests, and evaluates model cross-generalizability. Using empirical data, we identify all statistically valid, reproducible, longitudinally consistent, and cross-generalizable CTA survival models and then compare their predictive accuracy to estimates derived via Cox regression and an unadjusted naïve model. Model performance is assessed using integrated Brier scores and a comparison between estimated survival curves. The Cox regression model best predicts average incidence of the outcome over time, whereas CTA survival models best predict either relatively high, or low, incidence of the outcome over time. Classification tree analysis survival models offer many advantages over Cox regression, such as explicit maximization of predictive accuracy, parsimony, statistical robustness, and transparency. Therefore, researchers interested in accurate prognoses and clear decision rules should consider developing models using the CTA-survival framework. © 2017 John Wiley & Sons, Ltd.

  17. Graph-based sensor fusion for classification of transient acoustic signals.

    PubMed

    Srinivas, Umamahesh; Nasrabadi, Nasser M; Monga, Vishal

    2015-03-01

    Advances in acoustic sensing have enabled the simultaneous acquisition of multiple measurements of the same physical event via co-located acoustic sensors. We exploit the inherent correlation among such multiple measurements for acoustic signal classification, to identify the launch/impact of munitions (i.e., rockets, mortars). Specifically, we propose a probabilistic graphical model framework that can explicitly learn the class conditional correlations between the cepstral features extracted from these different measurements. Additionally, we employ symbolic dynamic filtering-based features, which offer improvements over the traditional cepstral features in terms of robustness to signal distortions. Experiments on real acoustic data sets show that our proposed algorithm outperforms conventional classifiers as well as the recently proposed joint sparsity models for multisensor acoustic classification. Additionally, our proposed algorithm is less sensitive to insufficiency in training samples compared to competing approaches.

  18. Abnormal global and local event detection in compressive sensing domain

    NASA Astrophysics Data System (ADS)

    Wang, Tian; Qiao, Meina; Chen, Jie; Wang, Chuanyun; Zhang, Wenjia; Snoussi, Hichem

    2018-05-01

    Abnormal event detection, also known as anomaly detection, is a challenging task in security video surveillance. It is important to develop effective and robust movement representation models for global and local abnormal event detection to fight against factors such as occlusion and illumination change. In this paper, a new algorithm is proposed. It can locate abnormal events within a frame and detect globally abnormal frames. The proposed algorithm employs a sparse measurement matrix designed to represent the movement feature based on optical flow efficiently. Then, the abnormal detection mission is constructed as a one-class classification task, learning only from normal training samples. Experiments demonstrate that our algorithm performs well on benchmark abnormal detection datasets against state-of-the-art methods.
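
    The one-class formulation described above trains only on normal samples and flags departures at test time. A minimal sketch using a one-class SVM on stand-in movement features (the paper's sparse measurement matrix and optical-flow extraction are not shown):

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM

    # Training uses normal frames only, matching the one-class setting.
    rng = np.random.default_rng(2)
    normal_feats = rng.standard_normal((500, 64))

    ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal_feats)

    test_feats = np.vstack([rng.standard_normal((5, 64)),          # normal-like
                            rng.standard_normal((5, 64)) + 4.0])   # shifted = abnormal
    print(ocsvm.predict(test_feats))   # +1 = normal, -1 = flagged abnormal
    ```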

  19. Video Segmentation Descriptors for Event Recognition

    DTIC Science & Technology

    2014-12-08

    Velastin, 3D Extended Histogram of Oriented Gradients (3DHOG) for Classification of Road Users in Urban Scenes, BMVC, 2009. [3] M.-Y. Chen and A. Hauptmann...computed on 3D volume outputted by the hierarchical segmentation. Each video is described as follows. Each supertube is temporally divided in n-frame...strength of these descriptors is their adaptability to the scene variations since they are grounded on a video segmentation. This makes them naturally robust

  20. Robust spike classification based on frequency domain neural waveform features.

    PubMed

    Yang, Chenhui; Yuan, Yuan; Si, Jennie

    2013-12-01

    We introduce a new spike classification algorithm based on frequency domain features of the spike snippets. The goal for the algorithm is to provide high classification accuracy, low false misclassification, ease of implementation, robustness to signal degradation, and objectivity in classification outcomes. In this paper, we propose a spike classification algorithm based on frequency domain features (CFDF). It makes use of frequency domain contents of the recorded neural waveforms for spike classification. The self-organizing map (SOM) is used as a tool to determine the cluster number intuitively and directly by viewing the SOM output map. After that, spike classification can be easily performed using clustering algorithms such as the k-Means. In conjunction with our previously developed multiscale correlation of wavelet coefficient (MCWC) spike detection algorithm, we show that the MCWC and CFDF detection and classification system is robust when tested on several sets of artificial and real neural waveforms. The CFDF is comparable to or outperforms some popular automatic spike classification algorithms with artificial and real neural data. The detection and classification of neural action potentials or neural spikes is an important step in single-unit-based neuroscientific studies and applications. After the detection of neural snippets potentially containing neural spikes, a robust classification algorithm is applied for the analysis of the snippets to (1) extract similar waveforms into one class for them to be considered coming from one unit, and to (2) remove noise snippets if they do not contain any features of an action potential. Usually, a snippet is a small 2 or 3 ms segment of the recorded waveform, and differences in neural action potentials can be subtle from one unit to another. Therefore, a robust, high performance classification system like the CFDF is necessary. In addition, the proposed algorithm does not require any assumptions on statistical properties of the noise and proves to be robust under noise contamination.
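
    The CFDF pipeline above reduces each snippet to a frequency-domain feature vector and then clusters. The sketch below uses normalized magnitude spectra and k-means; the SOM used in the paper to choose the cluster count is replaced here by a fixed k for brevity, and all sizes are illustrative.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def spike_fft_features(snippets, n_bins=16):
        """Normalized magnitude-spectrum features for spike snippets (rows)."""
        spec = np.abs(np.fft.rfft(snippets, axis=1))[:, :n_bins]
        return spec / (np.linalg.norm(spec, axis=1, keepdims=True) + 1e-12)

    snippets = np.random.randn(200, 64)        # stand-in 2-3 ms waveforms
    labels = KMeans(n_clusters=3, n_init=10).fit_predict(spike_fft_features(snippets))
    print(labels[:10])
    ```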

  21. What are the most fire-dangerous atmospheric circulations in the Eastern-Mediterranean? Analysis of the synoptic wildfire climatology.

    PubMed

    Paschalidou, A K; Kassomenos, P A

    2016-01-01

    Wildfire management is closely linked to robust forecasts of changes in wildfire risk related to meteorological conditions. This link can be bridged either through fire weather indices or through statistical techniques that directly relate atmospheric patterns to wildfire activity. In the present work the COST-733 classification schemes are applied in order to link wildfires in Greece with synoptic circulation patterns. The analysis reveals that the majority of wildfire events can be explained by a small number of specific synoptic circulations, hence reflecting the synoptic climatology of wildfires. All 8 classification schemes used prove that the most fire-dangerous conditions in Greece are characterized by a combination of high atmospheric pressure systems located N to NW of Greece, coupled with lower pressures located over the very eastern part of the Mediterranean, an atmospheric pressure pattern closely linked to the local Etesian winds over the Aegean Sea. During these events, the atmospheric pressure has been reported to be anomalously high, while anomalously low 500 hPa geopotential heights and negative total water column anomalies were also observed. Among the various classification schemes used, the 2 Principal Component Analysis-based classifications, namely the PCT and the PXE, as well as the Leader Algorithm classification LND, proved to be the best options, in terms of being capable of isolating the vast majority of fire events in a small number of classes with increased frequency of occurrence. It is estimated that these 3 schemes, in combination with medium-range to seasonal climate forecasts, could be used by wildfire risk managers to provide increased wildfire prediction accuracy. Copyright © 2015 Elsevier B.V. All rights reserved.

  22. Analysis tools for discovering strong parity violation at hadron colliders

    NASA Astrophysics Data System (ADS)

    Backović, Mihailo; Ralston, John P.

    2011-07-01

    Several arguments suggest parity violation may be observable in high energy strong interactions. We introduce new analysis tools to describe the azimuthal dependence of multiparticle distributions, or “azimuthal flow.” Analysis uses the representations of the orthogonal group O(2) and dihedral groups DN necessary to define parity completely in two dimensions. Classification finds that collective angles used in event-by-event statistics represent inequivalent tensor observables that cannot generally be represented by a single “reaction plane.” Many new parity-violating observables exist that have never been measured, while many parity-conserving observables formerly lumped together are now distinguished. We use the concept of “event-shape sorting” to suggest separating right- and left-handed events, and we discuss the effects of transverse and longitudinal spin. The analysis tools are statistically robust, and can be applied equally to low or high multiplicity events at the Tevatron, RHIC or RHIC Spin, and the LHC.

  23. Virtual Sensor of Surface Electromyography in a New Extensive Fault-Tolerant Classification System.

    PubMed

    de Moura, Karina de O A; Balbinot, Alexandre

    2018-05-01

    A few prosthetic control systems in the scientific literature obtain pattern recognition algorithms adapted to changes that occur in the myoelectric signal over time and, frequently, such systems are not natural and intuitive. These are some of the several challenges for myoelectric prostheses for everyday use. The concept of the virtual sensor, which has as its fundamental objective to estimate unavailable measures based on other available measures, is being used in other fields of research. The virtual sensor technique applied to surface electromyography can help to minimize these problems, typically related to the degradation of the myoelectric signal that usually leads to a decrease in the classification accuracy of the movements characterized by computational intelligent systems. This paper presents a virtual sensor in a new extensive fault-tolerant classification system to maintain the classification accuracy after the occurrence of the following contaminants: ECG interference, electrode displacement, movement artifacts, power line interference, and saturation. The Time-Varying Autoregressive Moving Average (TVARMA) and Time-Varying Kalman filter (TVK) models are compared to define the most robust model for the virtual sensor. Results of movement classification are presented, comparing the usual classification techniques with the method of degraded-signal replacement and classifier retraining. The experimental results were evaluated for these five noise types in 16 surface electromyography (sEMG) channel degradation case studies. Without classifier retraining techniques, the proposed system recovered mean classification accuracy by 4% to 38% for electrode displacement, movement artifacts, and saturation noise. The best mean classification, considering all signal contaminants and channel combinations evaluated, was obtained using the retraining method, replacing the degraded channel by the virtual sensor TVARMA model. This method recovered the classification accuracy after the degradations, reaching an average of 5.7% below the classification of the clean signal, that is, the signal without the contaminants. Moreover, the proposed intelligent technique minimizes the impact on motion classification caused by signal contamination related to degrading events over time. The virtual sensor model and the algorithm optimization need further development to increase the clinical applicability of myoelectric prostheses, but the system already presents robust results that enable research with virtual sensors on biological signals with stochastic behavior.

  24. Virtual Sensor of Surface Electromyography in a New Extensive Fault-Tolerant Classification System

    PubMed Central

    de Moura, Karina de O A; Balbinot, Alexandre

    2018-01-01

    A few prosthetic control systems in the scientific literature obtain pattern recognition algorithms adapted to changes that occur in the myoelectric signal over time and, frequently, such systems are not natural and intuitive. These are some of the several challenges for myoelectric prostheses for everyday use. The concept of the virtual sensor, which has as its fundamental objective to estimate unavailable measures based on other available measures, is being used in other fields of research. The virtual sensor technique applied to surface electromyography can help to minimize these problems, typically related to the degradation of the myoelectric signal that usually leads to a decrease in the classification accuracy of the movements characterized by computational intelligent systems. This paper presents a virtual sensor in a new extensive fault-tolerant classification system to maintain the classification accuracy after the occurrence of the following contaminants: ECG interference, electrode displacement, movement artifacts, power line interference, and saturation. The Time-Varying Autoregressive Moving Average (TVARMA) and Time-Varying Kalman filter (TVK) models are compared to define the most robust model for the virtual sensor. Results of movement classification are presented, comparing the usual classification techniques with the method of degraded-signal replacement and classifier retraining. The experimental results were evaluated for these five noise types in 16 surface electromyography (sEMG) channel degradation case studies. Without classifier retraining techniques, the proposed system recovered mean classification accuracy by 4% to 38% for electrode displacement, movement artifacts, and saturation noise. The best mean classification, considering all signal contaminants and channel combinations evaluated, was obtained using the retraining method, replacing the degraded channel by the virtual sensor TVARMA model. This method recovered the classification accuracy after the degradations, reaching an average of 5.7% below the classification of the clean signal, that is, the signal without the contaminants. Moreover, the proposed intelligent technique minimizes the impact on motion classification caused by signal contamination related to degrading events over time. The virtual sensor model and the algorithm optimization need further development to increase the clinical applicability of myoelectric prostheses, but the system already presents robust results that enable research with virtual sensors on biological signals with stochastic behavior. PMID:29723994

  25. CNN universal machine as classification platform: an ART-like clustering algorithm.

    PubMed

    Bálya, David

    2003-12-01

    Fast and robust classification of feature vectors is a crucial task in a number of real-time systems. A cellular neural/nonlinear network universal machine (CNN-UM) can be very efficient as a feature detector. The next step is to post-process the results for object recognition. This paper shows how a robust classification scheme based on adaptive resonance theory (ART) can be mapped to the CNN-UM. Moreover, this mapping is general enough to include different types of feed-forward neural networks. The designed analogic CNN algorithm is capable of classifying the extracted feature vectors while keeping the advantages of the ART networks, such as robust, plastic and fault-tolerant behaviors. An analogic algorithm is presented for unsupervised classification with tunable sensitivity and automatic new class creation. The algorithm is extended for supervised classification. The presented binary feature vector classification is implemented on the existing standard CNN-UM chips for fast classification. The experimental evaluation shows promising performance, with 100% accuracy on the training set.
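
    The ART-style behavior described above, vigilance-gated matching with automatic creation of new classes, can be sketched for binary feature vectors as follows; this is a didactic simplification, not the analogic CNN-UM algorithm.

    ```python
    import numpy as np

    def art1_cluster(patterns, vigilance=0.7, beta=1.0):
        """Minimal ART-1-style clustering of binary feature vectors.
        A new category is created when no stored prototype passes the
        vigilance test, mirroring 'automatic new class creation'."""
        prototypes, labels = [], []
        for p in patterns.astype(bool):
            assigned = -1
            # Rank categories by choice score |p AND w| / (beta + |w|).
            order = sorted(range(len(prototypes)), key=lambda j: -(
                np.sum(p & prototypes[j]) / (beta + np.sum(prototypes[j]))))
            for j in order:
                match = np.sum(p & prototypes[j]) / max(np.sum(p), 1)
                if match >= vigilance:                  # resonance test
                    prototypes[j] = p & prototypes[j]   # fast learning update
                    assigned = j
                    break
            if assigned < 0:                            # no resonance: new class
                prototypes.append(p.copy())
                assigned = len(prototypes) - 1
            labels.append(assigned)
        return np.array(labels), prototypes

    rng = np.random.default_rng(5)
    X = rng.integers(0, 2, size=(20, 16))
    labels, protos = art1_cluster(X, vigilance=0.6)
    print(labels, len(protos))
    ```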

  26. Identification and interpretation of patterns in rocket engine data: Artificial intelligence and neural network approaches

    NASA Technical Reports Server (NTRS)

    Ali, Moonis; Whitehead, Bruce; Gupta, Uday K.; Ferber, Harry

    1989-01-01

    This paper describes an expert system which is designed to perform automatic data analysis, identify anomalous events, and determine the characteristic features of these events. We have employed both artificial intelligence and neural net approaches in the design of this expert system. The artificial intelligence approach is useful because it provides (1) the use of human experts' knowledge of sensor behavior and faulty engine conditions in interpreting data; (2) the use of engine design knowledge and physical sensor locations in establishing relationships among the events of multiple sensors; (3) the use of stored analysis of past data of faulty engine conditions; and (4) the use of knowledge-based reasoning in distinguishing sensor failure from actual faults. The neural network approach appears promising because neural nets (1) can be trained on extremely noisy data and produce classifications which are more robust under noisy conditions than other classification techniques; (2) avoid the necessity of noise removal by digital filtering and therefore avoid the need to make assumptions about frequency bands or other signal characteristics of anomalous behavior; (3) can, in effect, generate their own feature detectors based on the characteristics of the sensor data used in training; and (4) are inherently parallel and therefore are potentially implementable in special-purpose parallel hardware.

  27. Fuzzy support vector machine: an efficient rule-based classification technique for microarrays.

    PubMed

    Hajiloo, Mohsen; Rabiee, Hamid R; Anooshahpour, Mahdi

    2013-01-01

    The abundance of gene expression microarray data has led to the development of machine learning algorithms applicable for tackling disease diagnosis, disease prognosis, and treatment selection problems. However, these algorithms often produce classifiers with weaknesses in terms of accuracy, robustness, and interpretability. This paper introduces fuzzy support vector machine which is a learning algorithm based on combination of fuzzy classifiers and kernel machines for microarray classification. Experimental results on public leukemia, prostate, and colon cancer datasets show that fuzzy support vector machine applied in combination with filter or wrapper feature selection methods develops a robust model with higher accuracy than the conventional microarray classification models such as support vector machine, artificial neural network, decision trees, k nearest neighbors, and diagonal linear discriminant analysis. Furthermore, the interpretable rule-base inferred from fuzzy support vector machine helps extracting biological knowledge from microarray data. Fuzzy support vector machine as a new classification model with high generalization power, robustness, and good interpretability seems to be a promising tool for gene expression microarray classification.

  28. Classification of Partial Discharge Signals by Combining Adaptive Local Iterative Filtering and Entropy Features

    PubMed Central

    Morison, Gordon; Boreham, Philip

    2018-01-01

    Electromagnetic Interference (EMI) is a technique for capturing Partial Discharge (PD) signals in High-Voltage (HV) power plant apparatus. EMI signals can be non-stationary, which makes their analysis difficult, particularly for pattern recognition applications. This paper elaborates upon a previously developed software condition-monitoring model for improved EMI event classification based on time-frequency signal decomposition and entropy features. The idea of the proposed method is to map multiple discharge source signals captured by EMI and labelled by experts, including PD, from the time domain to a feature space, which aids in the interpretation of subsequent fault information. Here, instead of using only one permutation entropy measure, a more robust measure, called Dispersion Entropy (DE), is added to the feature vector. Multi-Class Support Vector Machine (MCSVM) methods are utilized for classification of the different discharge sources. Results show an improved classification accuracy compared to previously proposed methods. This leads to the successful development of an expert-knowledge-based intelligent system. Since this method is demonstrated to be successful with real field data, it brings the benefit of possible real-world application for EMI condition monitoring. PMID:29385030

  29. Event-based analysis of free-living behaviour.

    PubMed

    Granat, Malcolm H

    2012-11-01

    The quantification of free-living physical activities is important in understanding how physical activity and sedentary behaviour impact on health and also on how interventions might modify free-living behaviour to enhance health. Quantification, and the terminology used, has in many ways been determined by the choice of measurement technique. The inter-related issues around measurement devices and terminology used are explored. This paper proposes a terminology and a systematic approach for the analysis of free-living activity information using event-based activity data. The event-based approach uses a flexible hierarchical classification of events and, dependent on the research question, analysis can then be undertaken on a selection of these events. The quantification of free-living behaviour is therefore the result of the analysis on the patterns of these chosen events. The application of this approach is illustrated with results from a range of published studies by our group showing how event-based analysis provides a flexible yet robust method of addressing the research question(s) and provides a deeper insight into free-living behaviour. It is proposed that it is through event-based analysis we can more clearly understand how behaviour is related to health and also how we can produce more relevant outcome measures.

  30. Making historic loss data comparable over time and place

    NASA Astrophysics Data System (ADS)

    Eichner, Jan; Steuer, Markus; Löw, Petra

    2017-04-01

    When utilizing historic loss data for present day risk assessment, it is necessary to make the data comparable over time and place. To achieve this, the assessment of costs from natural hazard events requires consistent and homogeneous methodologies for loss estimation as well as a robust treatment of loss data to estimate and/or reduce distorting effects due to a temporal bias in the reporting of small-scale loss events. Here we introduce Munich Re's NatCatSERVICE loss database and present a novel methodology of peril-specific normalization of the historic losses (to account for socio-economic growth of assets over time), and we introduce a metric of severity classification (called CatClass) that allows for a global comparison of impact severity across countries of different stages of economic development.
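
    The normalization idea, scaling each historic loss by the growth of destructible assets between the event year and the reference year, reduces to a simple ratio. A sketch with a hypothetical wealth index as the proxy (the peril-specific factors used in NatCatSERVICE are not public):

    ```python
    def normalize_loss(loss, year_event, year_ref, wealth_index):
        """Scale a historic loss to reference-year values via a proxy
        index of destructible wealth (hypothetical numbers below)."""
        return loss * wealth_index[year_ref] / wealth_index[year_event]

    wealth_index = {1980: 1.0, 2016: 3.4}        # illustrative index only
    print(normalize_loss(250e6, 1980, 2016, wealth_index))  # 850000000.0
    ```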

  31. A Robust Geometric Model for Argument Classification

    NASA Astrophysics Data System (ADS)

    Giannone, Cristina; Croce, Danilo; Basili, Roberto; de Cao, Diego

    Argument classification is the task of assigning semantic roles to syntactic structures in natural language sentences. Supervised learning techniques for frame semantics have been recently shown to benefit from rich sets of syntactic features. However argument classification is also highly dependent on the semantics of the involved lexicals. Empirical studies have shown that domain dependence of lexical information causes large performance drops in outside domain tests. In this paper a distributional approach is proposed to improve the robustness of the learning model against out-of-domain lexical phenomena.

  32. Robust tissue classification for reproducible wound assessment in telemedicine environments

    NASA Astrophysics Data System (ADS)

    Wannous, Hazem; Treuillet, Sylvie; Lucas, Yves

    2010-04-01

    In telemedicine environments, a standardized and reproducible assessment of wounds, using a simple hand-held digital camera, is an essential requirement. However, to ensure robust tissue classification, particular attention must be paid to the complete design of the color processing chain. We introduce the key steps including color correction, merging of expert labeling, and segmentation-driven classification based on support vector machines. The tool thus developed ensures stability under lighting, viewpoint, and camera changes, to achieve accurate and robust classification of skin tissues. Clinical tests demonstrate that such an advanced tool, which forms part of a complete 3-D and color wound assessment system, significantly improves the monitoring of the healing process. It achieves an overlap score of 79.3% against 69.1% for a single expert, after mapping on the medical reference developed from the image labeling by a college of experts.

  33. Robust multi-site MR data processing: iterative optimization of bias correction, tissue classification, and registration.

    PubMed

    Young Kim, Eun; Johnson, Hans J

    2013-01-01

    A robust multi-modal tool for automated registration, bias correction, and tissue classification has been implemented for large-scale heterogeneous multi-site longitudinal MR data analysis. This work focused on improving an iterative optimization framework between bias correction, registration, and tissue classification, inspired by previous work. The primary contributions are robustness improvements from the incorporation of the following four elements: (1) utilize multi-modal and repeated scans, (2) incorporate highly deformable registration, (3) use an extended set of tissue definitions, and (4) use multi-modal aware intensity-context priors. The benefits of these enhancements were investigated by a series of experiments with both a simulated brain data set (BrainWeb) and highly heterogeneous data from a 32-site imaging study, with quality assessed through expert visual inspection. The implementation of this tool is tailored for, but not limited to, large-scale data processing with great data variation, with a flexible interface. In this paper, we describe enhancements to a joint registration, bias correction, and tissue classification that improve the generalizability and robustness for processing multi-modal longitudinal MR scans collected at multiple sites. The tool was evaluated using both simulated and human subject MRI images. With these enhancements, the results showed improved robustness for large-scale heterogeneous MRI processing.

  34. The application of compressed sensing to long-term acoustic emission-based structural health monitoring

    NASA Astrophysics Data System (ADS)

    Cattaneo, Alessandro; Park, Gyuhae; Farrar, Charles; Mascareñas, David

    2012-04-01

    The acoustic emission (AE) phenomena generated by a rapid release in the internal stress of a material represent a promising technique for structural health monitoring (SHM) applications. AE events typically result in a discrete number of short-time, transient signals. The challenge associated with capturing these events using classical techniques is that very high sampling rates must be used over extended periods of time. The result is that a very large amount of data is collected to capture a phenomenon that rarely occurs. Furthermore, the high energy consumption associated with the required high sampling rates makes the implementation of high-endurance, low-power, embedded AE sensor nodes difficult to achieve. The relatively rare occurrence of AE events over long time scales implies that these measurements are inherently sparse in the spike domain. The sparse nature of AE measurements makes them an attractive candidate for the application of compressed sampling techniques. Collecting compressed measurements of sparse AE signals will relax the requirements on the sampling rate and memory demands. The focus of this work is to investigate the suitability of compressed sensing techniques for AE-based SHM. The work explores estimating AE signal statistics in the compressed domain for low-power classification applications. In the event that compressed classification finds an event of interest, ℓ1 norm minimization will be used to reconstruct the measurement for further analysis. The impact of structured noise on compressive measurements is specifically addressed. The suitability of a particular algorithm, called Justice Pursuit, to increase robustness to a small amount of arbitrary measurement corruption is investigated.
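
    Reconstruction by ℓ1 norm minimization, as mentioned above, can be sketched with plain iterative soft thresholding (ISTA) on a random measurement matrix; Justice Pursuit's additional corruption-estimating columns are not shown, and all sizes are illustrative.

    ```python
    import numpy as np

    def ista(Phi, y, lam=0.1, n_iter=500):
        """Recover a spike-sparse x from compressed samples y = Phi @ x
        by iterative soft thresholding (a simple l1 solver)."""
        L = np.linalg.norm(Phi, 2) ** 2       # Lipschitz constant of the gradient
        x = np.zeros(Phi.shape[1])
        for _ in range(n_iter):
            x = x - Phi.T @ (Phi @ x - y) / L
            x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
        return x

    rng = np.random.default_rng(3)
    n, m, k = 512, 128, 5                     # signal length, samples, spikes
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k) * 5
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
    x_hat = ista(Phi, Phi @ x_true)
    print(np.sort(np.abs(x_hat))[-k:])        # recovered spike magnitudes
    ```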

  35. Design and development of a prototype platform for gait analysis

    NASA Astrophysics Data System (ADS)

    Diffenbaugh, T. E.; Marti, M. A.; Jagani, J.; Garcia, V.; Iliff, G. J.; Phoenix, A.; Woolard, A. G.; Malladi, V. V. N. S.; Bales, D. B.; Tarazaga, P. A.

    2017-04-01

    The field of event classification and localization in building environments using accelerometers has grown significantly due to its implications for energy, security, and emergency protocols. Virginia Tech's Goodwin Hall (VT-GH) provides a robust testbed for such work, but a reduced scale testbed could provide significant benefits by allowing algorithm development to occur in a simplified environment. Environments such as VT-GH have high human traffic that contributes external noise disrupting test signals. This paper presents a design solution through the development of an isolated platform for data collection, portable demonstrations, and the development of localization and classification algorithms. The platform's success was quantified by the resulting transmissibility of external excitation sources, demonstrating the capabilities of the platform to isolate external disturbances while preserving gait information. This platform demonstrates the collection of high-quality gait information in otherwise noisy environments for data collection or demonstration purposes.

  36. Classification of Exacerbation Frequency in the COPDGene Cohort Using Deep Learning with Deep Belief Networks.

    PubMed

    Ying, Jun; Dutta, Joyita; Guo, Ning; Hu, Chenhui; Zhou, Dan; Sitek, Arkadiusz; Li, Quanzheng

    2016-12-21

    This study aims to develop an automatic classifier based on deep learning for exacerbation frequency in patients with chronic obstructive pulmonary disease (COPD). A three-layer deep belief network (DBN) with two hidden layers and one visible layer was employed to develop classification models and the models' robustness to exacerbation was analyzed. Subjects from the COPDGene cohort were labeled with exacerbation frequency, defined as the number of exacerbation events per year. 10,300 subjects with 361 features each were included in the analysis. After feature selection and parameter optimization, the proposed classification method achieved an accuracy of 91.99%, using a 10-fold cross validation experiment. The analysis of DBN weights showed that there was a good visual spatial relationship between the underlying critical features of different layers. Our findings show that the most sensitive features obtained from the DBN weights are consistent with the consensus showed by clinical rules and standards for COPD diagnostics. We thus demonstrate that DBN is a competitive tool for exacerbation risk assessment for patients suffering from COPD.

  37. Artillery/mortar type classification based on detected acoustic transients

    NASA Astrophysics Data System (ADS)

    Morcos, Amir; Grasing, David; Desai, Sachi

    2008-04-01

    Feature extraction methods based on the statistical analysis of the change in event pressure levels over a period and the level of ambient pressure excitation facilitate the development of a robust classification algorithm. The features reliably discriminate mortar and artillery variants via acoustic signals produced during the launch events. Acoustic sensors are utilized to exploit the sound waveform generated from the blast for the identification of mortar and artillery variants as type A, etcetera, through analysis of the waveform. Distinct characteristics arise within the different mortar/artillery variants because varying HE mortar payloads and related charges emphasize varying size events at launch. The waveform holds various harmonic properties distinct to a given mortar/artillery variant that, through advanced signal processing and data mining techniques, can be employed to classify a given type. Skewness and other statistical processing techniques are used to extract the predominant components from the acoustic signatures at ranges exceeding 3000 m. Exploiting these techniques will help develop a feature set highly independent of range, providing discrimination based on acoustic elements of the blast wave. Highly reliable discrimination will be achieved with a feed-forward neural network classifier trained on a feature space derived from the distribution of statistical coefficients, frequency spectrum, and higher frequency details found within different energy bands. The processes described herein extend current technologies, which emphasize acoustic sensor systems, to provide such situational awareness.
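
    The feature-plus-classifier chain described above (band-wise statistics including skewness, fed to a feed-forward network) might look like the following sketch; the band count, the chosen statistics, and the network size are illustrative guesses, not the authors' configuration.

    ```python
    import numpy as np
    from scipy.stats import skew, kurtosis
    from sklearn.neural_network import MLPClassifier

    def blast_stat_features(x, n_bands=8):
        """Statistical features per frequency band of a launch transient."""
        spec = np.abs(np.fft.rfft(x))
        bands = np.array_split(spec, n_bands)
        return np.concatenate([[skew(b), kurtosis(b),
                                np.log(np.sum(b ** 2) + 1e-12)] for b in bands])

    rng = np.random.default_rng(4)
    X = np.array([blast_stat_features(rng.standard_normal(8192))
                  for _ in range(100)])       # stand-in launch recordings
    y = rng.integers(0, 3, 100)               # stand-in variant labels
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000).fit(X, y)
    print(clf.predict(X[:5]))
    ```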

  18. Artillery/mortar round type classification to increase system situational awareness

    NASA Astrophysics Data System (ADS)

    Desai, Sachi; Grasing, David; Morcos, Amir; Hohil, Myron

    2008-04-01

    Feature extraction methods based on the statistical analysis of the change in event pressure levels over a period and the level of ambient pressure excitation facilitate the development of a robust classification algorithm. The features reliably discriminate mortar and artillery variants via acoustic signals produced during launch events. Acoustic sensors are used to exploit the sound waveform generated by the blast and identify mortar and artillery variants (as type A, et cetera) through analysis of the waveform. Distinct characteristics arise within the different mortar/artillery variants because varying HE mortar payloads and related charges produce events of varying size at launch. The waveform holds various harmonic properties distinct to a given mortar/artillery variant that, through advanced signal processing and data mining techniques, can be employed to classify a given type. Skewness and other statistical processing techniques are used to extract the predominant components from the acoustic signatures at ranges exceeding 3000 m. Exploiting these techniques helps develop a feature set highly independent of range, providing discrimination based on acoustic elements of the blast wave. Highly reliable discrimination is achieved with a feedforward neural network classifier trained on a feature space derived from the distribution of statistical coefficients, the frequency spectrum, and higher-frequency details found within different energy bands. The processes described herein extend current technologies that emphasize acoustic sensor systems to provide such situational awareness.

  19. Mortar and artillery variants classification by exploiting characteristics of the acoustic signature

    NASA Astrophysics Data System (ADS)

    Hohil, Myron E.; Grasing, David; Desai, Sachi; Morcos, Amir

    2007-10-01

    Feature extraction methods based on the discrete wavelet transform and multiresolution analysis facilitate the development of a robust classification algorithm that reliably discriminates mortar and artillery variants via acoustic signals produced during the launch/impact events. Acoustic sensors are used to exploit the sound waveform generated by the blast to identify mortar and artillery variants. Distinct characteristics arise within the different mortar variants because varying HE mortar payloads and related charges emphasize concussive and shrapnel effects upon impact, employing explosions of varying magnitude. The different mortar variants are characterized by variations in the resulting waveform of the event. The waveform holds various harmonic properties distinct to a given mortar/artillery variant that, through advanced signal processing techniques, can be employed to classify a given set. The DWT and other readily available signal processing techniques are used to extract the predominant components of these characteristics from the acoustic signatures at ranges exceeding 2 km. Exploiting these techniques helps develop a feature set highly independent of range, providing discrimination based on acoustic elements of the blast wave. Highly reliable discrimination is achieved with a feed-forward neural network classifier trained on a feature space derived from the distribution of wavelet coefficients, the frequency spectrum, and higher-frequency details found within different levels of the multiresolution decomposition. The process described herein extends current technologies, which emphasize multi-modal sensor fusion suites to provide such situational awareness. A twofold problem of energy consumption and line of sight arises with multi-modal sensor suites. The process described here exploits the acoustic properties of the event to provide variant classification as added situational awareness to the soldier.
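
    A sketch of DWT-based feature extraction under stated assumptions (wavelet family, decomposition depth, and data are illustrative): per-level coefficient statistics from a multiresolution decomposition form the feature vector for a feed-forward network.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def dwt_features(x, wavelet="db4", level=5):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    feats = []
    for c in coeffs:                  # one approximation + `level` detail bands
        feats += [np.mean(np.abs(c)), np.std(c), np.sum(c ** 2)]
    return np.array(feats)

rng = np.random.default_rng(3)
signals = rng.standard_normal((150, 8192))   # stand-in acoustic signatures
labels = rng.integers(0, 4, size=150)        # stand-in variant labels

X = np.array([dwt_features(s) for s in signals])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=3).fit(X, labels)
```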

  20. Development and Assessment of Memorial Sloan Kettering Cancer Center’s Surgical Secondary Events Grading System

    PubMed Central

    Strong, Vivian E.; Selby, Luke V.; Sovel, Mindy; Disa, Joseph J.; Hoskins, William; DeMatteo, Ronald; Scardino, Peter; Jaques, David P.

    2015-01-01

    Background: Studying surgical secondary events is an evolving effort with no current established system for database design, standard reporting, or definitions. Using the Clavien-Dindo classification as a guide, in 2001 we developed a Surgical Secondary Events database based on grade of event and required intervention to begin prospectively recording and analyzing all surgical secondary events (SSE). Study Design: Events are prospectively entered into the database by attending surgeons, house staff, and research staff. In 2008 we performed a blinded external audit of 1,498 operations that were randomly selected to examine the quality and reliability of the data. Results: 1,498 of 4,284 operations during the 3rd quarter of 2008 were audited. 79% (N=1,180) of the operations did not have a secondary event while 21% (N=318) of operations had an identified event. 91% (1,365) of operations were correctly entered into the SSE database. 97% (129/133) of missed secondary events were Grades I and II. Three Grade III (2%) and one Grade IV (1%) secondary events were missed. There were no missed Grade V secondary events. Conclusion: Grade III-IV events are more accurately collected than Grade I-II events. Robust and accurate secondary events data can be collected by clinicians and research staff, and these data can safely be used for quality improvement projects and research. PMID:25319579

  1. Development and assessment of Memorial Sloan Kettering Cancer Center's Surgical Secondary Events grading system.

    PubMed

    Strong, Vivian E; Selby, Luke V; Sovel, Mindy; Disa, Joseph J; Hoskins, William; Dematteo, Ronald; Scardino, Peter; Jaques, David P

    2015-04-01

    Studying surgical secondary events is an evolving effort with no current established system for database design, standard reporting, or definitions. Using the Clavien-Dindo classification as a guide, in 2001 we developed a Surgical Secondary Events database based on grade of event and required intervention to begin prospectively recording and analyzing all surgical secondary events (SSE). Events are prospectively entered into the database by attending surgeons, house staff, and research staff. In 2008 we performed a blinded external audit of 1,498 operations that were randomly selected to examine the quality and reliability of the data. Of 4,284 operations, 1,498 were audited during the third quarter of 2008. Of these operations, 79 % (N = 1,180) did not have a secondary event while 21 % (N = 318) had an identified event; 91 % of operations (1,365) were correctly entered into the SSE database. Also 97 % (129 of 133) of missed secondary events were grades I and II. There were 3 grade III (2 %) and 1 grade IV (1 %) secondary event that were missed. There were no missed grade 5 secondary events. Grade III-IV events are more accurately collected than grade I-II events. Robust and accurate secondary events data can be collected by clinicians and research staff, and these data can safely be used for quality improvement projects and research.

  2. Automatic speech recognition using a predictive echo state network classifier.

    PubMed

    Skowronski, Mark D; Harris, John G

    2007-04-01

    We have combined an echo state network (ESN) with a competitive state machine framework to create a classification engine called the predictive ESN classifier. We derive the expressions for training the predictive ESN classifier and show that the model was significantly more noise-robust than a hidden Markov model in noisy speech classification experiments, by 8±1 dB signal-to-noise ratio. The simple training algorithm and noise robustness of the predictive ESN classifier make it an attractive classification engine for automatic speech recognition.
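
    A generic echo state network sketch, not the paper's exact predictive variant (which trains one competing predictor per class): a fixed random reservoir with spectral radius below one drives a ridge-regression readout. All sizes and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_res, n_in = 200, 13                 # reservoir size, input dim (assumed)
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

def reservoir_states(U):
    # run the input sequence through the fixed, untrained reservoir
    h = np.zeros(n_res)
    states = []
    for u in U:                        # U: (T, n_in) feature frames
        h = np.tanh(W_in @ u + W @ h)
        states.append(h.copy())
    return np.array(states)

U = rng.standard_normal((500, n_in))   # stand-in input sequence
Y = rng.standard_normal((500, 3))      # stand-in per-frame targets
H = reservoir_states(U)
ridge = 1e-2                           # only the linear readout is trained
W_out = np.linalg.solve(H.T @ H + ridge * np.eye(n_res), H.T @ Y)
```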

  3. Molecular cancer classification using a meta-sample-based regularized robust coding method.

    PubMed

    Wang, Shu-Lin; Sun, Liuchao; Fang, Jianwen

    2014-01-01

    Previous studies have demonstrated that machine learning based molecular cancer classification using gene expression profiling (GEP) data is promising for the clinical diagnosis and treatment of cancer. Novel classification methods with high efficiency and prediction accuracy are still needed to deal with the high dimensionality and small sample size of typical GEP data. Recently the sparse representation (SR) method has been successfully applied to cancer classification. Nevertheless, its efficiency needs to be improved when analyzing large-scale GEP data. In this paper we present meta-sample-based regularized robust coding classification (MRRCC), a novel effective cancer classification technique that combines the idea of the meta-sample-based clustering method with the regularized robust coding (RRC) method. It assumes that the coding residual and the coding coefficient are respectively independent and identically distributed. Similar to meta-sample-based SR classification (MSRC), MRRCC extracts a set of meta-samples from the training samples and then encodes a testing sample as a sparse linear combination of these meta-samples. The representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Extensive experiments on publicly available GEP datasets demonstrate that the proposed method is more efficient, while its prediction accuracy is equivalent to existing MSRC-based methods and better than other state-of-the-art dimension reduction based methods.
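
    A sketch in the spirit of meta-sample-based coding, with the RRC reweighting omitted for brevity: meta-samples are extracted per class by truncated SVD, a test sample is coded sparsely over all meta-samples, and the class with the smallest reconstruction residual wins. Data and dimensions are synthetic assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def meta_samples(X_class, k=5):
    # columns of U span the class subspace; they serve as meta-samples
    U, _, _ = np.linalg.svd(X_class.T, full_matrices=False)
    return U[:, :k]

rng = np.random.default_rng(5)
classes = [rng.standard_normal((40, 100)) + c for c in range(3)]  # stand-in GEP data
D_parts = [meta_samples(Xc) for Xc in classes]
D = np.hstack(D_parts)                       # dictionary of all meta-samples

x_test = classes[1][0] + 0.1 * rng.standard_normal(100)
coder = Lasso(alpha=0.01, max_iter=10000).fit(D, x_test)
alpha = coder.coef_                          # sparse coding coefficients

residuals, start = [], 0
for Dc in D_parts:                           # class-wise reconstruction error
    k = Dc.shape[1]
    residuals.append(np.linalg.norm(x_test - Dc @ alpha[start:start + k]))
    start += k
print("predicted class:", int(np.argmin(residuals)))
```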

  4. Pathological Bases for a Robust Application of Cancer Molecular Classification

    PubMed Central

    Diaz-Cano, Salvador J.

    2015-01-01

    Any robust classification system depends on its purpose and must refer to accepted standards, its strength relying on predictive values and a careful consideration of known factors that can affect its reliability. In this context, a molecular classification of human cancer must refer to the current gold standard (histological classification) and try to improve it with key prognosticators for metastatic potential, staging, and grading. Although organ-specific examples have been published based on proteomics, transcriptomics, and genomics evaluations, the most popular approach uses gene expression analysis as a direct correlate of cellular differentiation, which represents the key feature of the histological classification. RNA is a labile molecule that varies significantly according to the preservation protocol; its transcription reflects the adaptation of the tumor cells to the microenvironment; it can be passed on through mechanisms of intercellular transference of genetic information (exosomes); and it is exposed to epigenetic modifications. More robust classifications should be based on stable molecules, at the genetic level represented by DNA, to improve reliability, and their analysis must deal with the concept of intratumoral heterogeneity, which is at the origin of tumor progression and is the byproduct of the selection process during the clonal expansion and progression of neoplasms. The simultaneous analysis of multiple DNA targets and next generation sequencing offer the best practical approach for an analytical genomic classification of tumors. PMID:25898411

  5. Median Robust Extended Local Binary Pattern for Texture Classification.

    PubMed

    Liu, Li; Lao, Songyang; Fieguth, Paul W; Guo, Yulan; Wang, Xiaogang; Pietikäinen, Matti

    2016-03-01

    Local binary patterns (LBP) are considered among the most computationally efficient high-performance texture features. However, the LBP method is very sensitive to image noise and is unable to capture macrostructure information. To best address these disadvantages, in this paper we introduce a novel descriptor for texture classification, the median robust extended LBP (MRELBP). Different from the traditional LBP and many LBP variants, MRELBP compares regional image medians rather than raw image intensities. A multiscale LBP-type descriptor is computed by efficiently comparing image medians over a novel sampling scheme, which can capture both microstructure and macrostructure texture information. A comprehensive evaluation on benchmark data sets reveals MRELBP's high performance (robust to grayscale variations, rotation changes, and noise) at a low computational cost. MRELBP produces the best classification scores of 99.82%, 99.38%, and 99.77% on three popular Outex test suites. More importantly, MRELBP is shown to be highly robust to image noise, including Gaussian noise, Gaussian blur, salt-and-pepper noise, and random pixel corruption.
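
    A single-scale toy version of the median-comparison idea (the real MRELBP is multiscale with a specific sampling scheme): each pixel's binary code compares median-filtered neighbors against the local median rather than raw intensities, which suppresses point noise.

```python
import numpy as np
from scipy.ndimage import median_filter

def median_lbp(image, radius=1):
    smoothed = median_filter(image, size=3)        # regional medians
    h, w = smoothed.shape
    codes = np.zeros((h - 2 * radius, w - 2 * radius), dtype=np.uint8)
    offsets = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    center = smoothed[radius:h - radius, radius:w - radius]
    for bit, (dy, dx) in enumerate(offsets):
        # compare each median-filtered neighbor against the local median
        neigh = smoothed[radius + dy:h - radius + dy, radius + dx:w - radius + dx]
        codes |= ((neigh >= center).astype(np.uint8) << bit)
    return codes

texture = np.random.default_rng(6).integers(0, 256, (64, 64)).astype(float)
hist = np.bincount(median_lbp(texture).ravel(), minlength=256)  # texture feature
```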

  6. An efficient robust sound classification algorithm for hearing aids.

    PubMed

    Nordqvist, Peter; Leijon, Arne

    2004-06-01

    An efficient robust sound classification algorithm based on hidden Markov models is presented. The system would enable a hearing aid to automatically change its behavior for differing listening environments according to the user's preferences. This work attempts to distinguish between three listening environment categories: speech in traffic noise, speech in babble, and clean speech, regardless of the signal-to-noise ratio. The classifier uses only the modulation characteristics of the signal. It ignores the absolute sound pressure level and the absolute spectrum shape, resulting in an algorithm that is robust against irrelevant acoustic variations. The measured classification hit rate was 96.7%-99.5% when the classifier was tested with sounds representing one of the three environment categories included in the classifier. False-alarm rates were 0.2%-1.7% in these tests. The algorithm is robust and efficient, requiring few instructions and little memory, and it is fully possible to implement the classifier in a DSP-based hearing instrument.
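
    A sketch of the per-environment HMM idea using the hmmlearn package, assuming stand-in feature frames: one Gaussian HMM is trained per listening environment, and a test segment is assigned to the model with the highest log-likelihood. The original uses level-insensitive modulation features, which are not reproduced here.

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(7)
envs = ["speech_in_traffic", "speech_in_babble", "clean_speech"]
models = {}
for i, name in enumerate(envs):
    X_train = rng.standard_normal((400, 6)) + i   # stand-in feature frames
    m = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
    m.fit(X_train)                                # one HMM per environment
    models[name] = m

X_test = rng.standard_normal((100, 6)) + 1        # frames from one segment
best = max(envs, key=lambda n: models[n].score(X_test))
print("detected environment:", best)
```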

  7. PSGMiner: A modular software for polysomnographic analysis.

    PubMed

    Umut, İlhan

    2016-06-01

    Sleep disorders affect a great percentage of the population. The diagnosis of these disorders is usually made by polysomnography. This paper details the development of new software to carry out feature extraction in order to perform robust analysis and classification of sleep events using polysomnographic data. The software, called PSGMiner, is a tool that visualizes, processes, and classifies bioelectrical data. The purpose of this program is to provide researchers with a platform on which to test new hypotheses by creating tests to check for correlations that are not available in commercially available software. The software is freely available under the GPL3 License. PSGMiner is composed of a number of diverse modules, such as feature extraction, annotation, and machine learning modules, all of which are accessible from the main module. Using the software, it is possible to extract features of polysomnography using digital signal processing and statistical methods and to perform different analyses. The features can be classified through the use of five classification algorithms. PSGMiner offers an architecture designed for integrating new methods. Automatic scoring, which is available in almost all commercial PSG software, is not inherently available in this program, though it can be implemented by two different methodologies (machine learning and algorithms). While similar software focuses on a certain signal or event and is composed of a small number of modules with no expansion possibility, the software introduced here can handle all polysomnographic signals and events. The software simplifies the processing of polysomnographic signals for researchers and physicians who are not experts in computer programming. It can find correlations between different events, which could help predict an oncoming event such as sleep apnea. The software could also be used for educational purposes.

  8. Vertically Integrated Seismological Analysis I : Modeling

    NASA Astrophysics Data System (ADS)

    Russell, S.; Arora, N. S.; Jordan, M. I.; Sudderth, E.

    2009-12-01

    As part of its CTBT verification efforts, the International Data Centre (IDC) analyzes seismic and other signals collected from hundreds of stations around the world. Current processing at the IDC proceeds in a series of pipelined stages. From station processing to network processing, each decision is made on the basis of local information. This has the advantage of efficiency, and simplifies the structure of software implementations. However, this approach may reduce accuracy in the detection and phase classification of arrivals, association of detections to hypothesized events, and localization of small-magnitude events. In our work, we approach such detection and association problems as ones of probabilistic inference. In simple terms, let X be a random variable ranging over all possible collections of events, with each event defined by time, location, magnitude, and type (natural or man-made). Let Y range over all possible waveform signal recordings at all detection stations. Then Pθ(X) describes a parameterized generative prior over events, and Pφ(Y | X) describes how the signal is propagated and measured (including travel time, selective absorption and scattering, noise, artifacts, sensor bias, sensor failures, etc.). Given observed recordings Y = y, we are interested in the posterior P(X | Y = y), and perhaps in the value of X that maximizes it, i.e., the most likely explanation for all the sensor readings. As detailed below, an additional focus of our work is to robustly learn appropriate model parameters θ and φ from historical data. The primary advantage we expect is that decisions about arrivals, phase classifications, and associations are made with the benefit of all available evidence, not just the local signal or predefined recipes. Important phenomena, such as the successful detection of sub-threshold signals, correction of phase classifications using arrival information at other stations, and removal of false events based on the absence of signals, should all fall out of our probabilistic framework without the need for special processing rules. In our baseline model, natural events occur according to a spatially inhomogeneous Poisson process. Complex events (swarms and aftershocks) may then be captured via temporally inhomogeneous extensions. Man-made events have a uniform probability of occurring anywhere on the earth, with a tendency to occur closer to the surface. Phases are modelled via their amplitude, frequency distribution, and origin. In the simplest case, transmission times are characterized via the one-dimensional IASPEI-91 model, accounting for model errors with Gaussian uncertainty. Such homogeneous, approximate physical models can be further refined via historical data and previously developed corrections. Signal measurements are captured by station-specific models, based on sensor types and geometries, local frequency absorption characteristics, and time-varying noise models. At the conference, we expect to be able to quantitatively demonstrate the advantages of our approach, at least for simulated data. When reporting their findings, such systems can easily flag low-confidence events, unexplained arrivals, and ambiguous classifications to focus the efforts of expert analysts.

  9. Detection of epileptic seizure in EEG signals using linear least squares preprocessing.

    PubMed

    Roshan Zamir, Z

    2016-09-01

    An epileptic seizure is a transient event of abnormal excessive neuronal discharge in the brain. This unwanted event can be obstructed by detection of electrical changes in the brain that happen before the seizure takes place. The automatic detection of seizures is necessary since the visual screening of EEG recordings is a time consuming task and requires experts to improve the diagnosis. Much of the prior research in seizure detection has been based on artificial neural networks, genetic programming, and wavelet transforms. Although the highest achieved classification accuracy is 100%, there are drawbacks, such as the existence of unbalanced datasets and the lack of investigation of performance consistency. To address these, four linear least squares-based preprocessing models are proposed to extract key features of an EEG signal in order to detect seizures. The first two models are newly developed. The original signal (EEG) is approximated by a sinusoidal curve. Its amplitude is formed by a polynomial function and compared with the predeveloped spline function. Different statistical measures, namely classification accuracy, true positive and negative rates, false positive and negative rates, and precision, are utilised to assess the performance of the proposed models. These metrics are derived from confusion matrices obtained from classifiers. Different classifiers are used over the original dataset and the set of extracted features. The proposed models significantly reduce the dimension of the classification problem and the computational time while the classification accuracy is improved in most cases. The first and third models are promising feature extraction methods with a classification accuracy of 100%. Logistic, LazyIB1, LazyIB5, and J48 are the best classifiers. Their true positive and negative rates are 1 while false positive and negative rates are 0 and the corresponding precision values are 1. Numerical results suggest that these models are robust and efficient for detecting epileptic seizures.
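
    A minimal sketch of the linear least squares preprocessing idea: approximating a signal window by a sinusoid of fixed frequency is linear in the basis [sin, cos, 1], so the fit is a single lstsq solve whose amplitude and residual can serve as classifier features. The sampling rate and target frequency are assumptions, not from the paper.

```python
import numpy as np

fs, f0 = 173.61, 10.0   # assumed EEG sampling rate and target frequency
t = np.arange(512) / fs
rng = np.random.default_rng(8)
x = 2.0 * np.sin(2 * np.pi * f0 * t + 0.3) + 0.5 * rng.standard_normal(t.size)

# linear basis [sin, cos, constant]: the sinusoid fit becomes one lstsq solve
A = np.column_stack([np.sin(2 * np.pi * f0 * t),
                     np.cos(2 * np.pi * f0 * t),
                     np.ones_like(t)])
coef, *_ = np.linalg.lstsq(A, x, rcond=None)

amplitude = np.hypot(coef[0], coef[1])   # recovered sinusoid amplitude
residual = np.linalg.norm(x - A @ coef)  # misfit, another candidate feature
```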

  10. Hybrid approach for robust diagnostics of cutting tools

    NASA Astrophysics Data System (ADS)

    Ramamurthi, K.; Hough, C. L., Jr.

    1994-03-01

    A new multisensor-based hybrid technique has been developed for robust diagnosis of cutting tools. The technique combines the concepts of pattern classification and real-time knowledge based systems (RTKBS) and draws upon their strengths: the learning facility of pattern classification and the higher level of reasoning of RTKBS. It eliminates some of their major drawbacks: false alarms and delayed or missed diagnoses in the case of pattern classification, and tedious knowledge-base generation in the case of RTKBS. It utilizes a dynamic distance classifier, developed upon a new separability criterion and a new definition of robust diagnosis, to achieve these benefits. The promise of this technique has been proven concretely through on-line diagnosis of drill wear. Its suitability for practical implementation is substantiated by the use of practical, inexpensive, machine-mounted sensors and low-cost delivery systems.

  11. Fast and Robust Segmentation and Classification for Change Detection in Urban Point Clouds

    NASA Astrophysics Data System (ADS)

    Roynard, X.; Deschaud, J.-E.; Goulette, F.

    2016-06-01

    Change detection is an important issue in city monitoring to analyse street furniture, road works, car parking, etc. For example, parking surveys are needed but are currently a laborious task involving sending operators into the streets to identify changes in car locations. In this paper, we propose a method that performs fast and robust segmentation and classification of urban point clouds and can be used for change detection. We apply this method to detect cars, as a particular object class, in order to perform parking surveys automatically. A recently proposed method already addresses the need for fast segmentation and classification of urban point clouds using elevation images. The appeal of working with images is that processing is much faster, proven, and robust. However, there may be a loss of information in complex 3D cases: for example, when objects are one above the other, typically a car under a tree or a pedestrian under a balcony. In this paper we propose a method that retains the three-dimensional information while preserving fast computation times and improving segmentation and classification accuracy. It is based on fast octree-based region growing for segmentation and on specific descriptors with Random Forests for classification. Experiments have been performed on large urban point clouds acquired by Mobile Laser Scanning. They show that the method is as fast as the state of the art and gives more robust results in complex 3D cases.

  12. Detecting event-related changes in organizational networks using optimized neural network models.

    PubMed

    Li, Ze; Sun, Duoyong; Zhu, Renqi; Lin, Zihan

    2017-01-01

    Organizational external behavior changes are caused by the internal structure and interactions. External behaviors are also known as the behavioral events of an organization. Detecting event-related changes in organizational networks could efficiently be used to monitor the dynamics of organizational behaviors. Although many different methods have been used to detect changes in organizational networks, these methods usually ignore the correlation between the internal structure and external events. Event-related change detection considers this correlation and can be used for event recognition based on social network modeling and supervised classification. Detecting event-related changes can be useful in providing early warnings and faster responses to both positive and negative organizational activities. In this study, event-related change in an organizational network was defined, and artificial neural network models were used to quantitatively determine whether and when a change occurred. To achieve a higher accuracy, Back Propagation Neural Networks (BPNNs) were optimized using Genetic Algorithms (GAs) and Particle Swarm Optimization (PSO). We showed the feasibility of the proposed method by comparing its performance with that of other methods in two cases. The results suggested that the proposed method could identify organizational events based on a correlation between the organizational networks and events. The results also suggested that the proposed method not only has higher precision but also better robustness than previously used techniques.
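
    A toy sketch of swarm-optimized network training, as one interpretation of the PSO variant above: particles are flattened weight vectors of a small one-hidden-layer network, and the standard PSO update minimizes training error. All coefficients and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(17)
X = rng.standard_normal((200, 10))            # stand-in network-state features
y = (X[:, 0] + X[:, 1] > 0).astype(float)     # stand-in "event" labels

n_in, n_hid = 10, 8
dim = n_in * n_hid + n_hid                    # hidden weights + output layer

def loss(w):
    # tiny one-hidden-layer network evaluated on flattened weights
    W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
    w2 = w[n_in * n_hid:]
    p = 1 / (1 + np.exp(-np.tanh(X @ W1) @ w2))
    return np.mean((p - y) ** 2)

n_particles = 30
pos = rng.standard_normal((n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)]

for _ in range(100):                          # standard PSO velocity update
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]
print(f"best training MSE: {pbest_val.min():.4f}")
```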

  13. Detecting event-related changes in organizational networks using optimized neural network models

    PubMed Central

    Sun, Duoyong; Zhu, Renqi; Lin, Zihan

    2017-01-01

    Organizational external behavior changes are caused by the internal structure and interactions. External behaviors are also known as the behavioral events of an organization. Detecting event-related changes in organizational networks could efficiently be used to monitor the dynamics of organizational behaviors. Although many different methods have been used to detect changes in organizational networks, these methods usually ignore the correlation between the internal structure and external events. Event-related change detection considers this correlation and can be used for event recognition based on social network modeling and supervised classification. Detecting event-related changes can be useful in providing early warnings and faster responses to both positive and negative organizational activities. In this study, event-related change in an organizational network was defined, and artificial neural network models were used to quantitatively determine whether and when a change occurred. To achieve a higher accuracy, Back Propagation Neural Networks (BPNNs) were optimized using Genetic Algorithms (GAs) and Particle Swarm Optimization (PSO). We showed the feasibility of the proposed method by comparing its performance with that of other methods in two cases. The results suggested that the proposed method could identify organizational events based on a correlation between the organizational networks and events. The results also suggested that the proposed method not only has higher precision but also better robustness than previously used techniques. PMID:29190799

  14. 77 FR 37879 - Cooperative Patent Classification External User Day

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-25

    ... Classification External User Day AGENCY: United States Patent and Trademark Office, Commerce. ACTION: Notice... Classification (CPC) External User Day event at its Alexandria Campus. CPC is a partnership between the USPTO and... classification system that will incorporate the best classification practices of the two Offices. This CPC event...

  15. A statistical framework for evaluating neural networks to predict recurrent events in breast cancer

    NASA Astrophysics Data System (ADS)

    Gorunescu, Florin; Gorunescu, Marina; El-Darzi, Elia; Gorunescu, Smaranda

    2010-07-01

    Breast cancer is the second leading cause of cancer deaths in women today. Sometimes, breast cancer can return after primary treatment. A medical diagnosis of recurrent cancer is often a more challenging task than the initial one. In this paper, we investigate the potential contribution of neural networks (NNs) to support health professionals in diagnosing such events. The NN algorithms are tested and applied to two different datasets. An extensive statistical analysis has been performed to verify our experiments. The results show that a simple network structure for both the multi-layer perceptron and radial basis function can produce equally good results, not all attributes are needed to train these algorithms and, finally, the classification performances of all algorithms are statistically robust. Moreover, we have shown that the best performing algorithm will strongly depend on the features of the datasets, and hence, there is not necessarily a single best classifier.

  16. Semi-supervised anomaly detection - towards model-independent searches of new physics

    NASA Astrophysics Data System (ADS)

    Kuusela, Mikael; Vatanen, Tommi; Malmi, Eric; Raiko, Tapani; Aaltonen, Timo; Nagai, Yoshikazu

    2012-06-01

    Most classification algorithms used in high energy physics fall under the category of supervised machine learning. Such methods require a training set containing both signal and background events and are prone to classification errors should this training data be systematically inaccurate, for example due to the assumed MC model. To complement such model-dependent searches, we propose an algorithm based on semi-supervised anomaly detection techniques, which does not require a MC training sample for the signal data. We first model the background using a multivariate Gaussian mixture model. We then search for deviations from this model by fitting to the observations a mixture of the background model and a number of additional Gaussians. This allows us to perform pattern recognition of any anomalous excess over the background. We show by a comparison to neural network classifiers that such an approach is far more robust against misspecification of the signal MC than supervised classification. In cases where there is an unexpected signal, a neural network might fail to correctly identify it, while anomaly detection does not suffer from such a limitation. On the other hand, when there are no systematic errors in the training data, both methods perform comparably.
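
    A toy version of the approach, simplified to a likelihood threshold rather than the paper's refit of the background model plus additional signal Gaussians: a Gaussian mixture is fit to background-only data, and observations in its lowest-likelihood tail are flagged as candidate anomalies. All data and component counts are synthetic assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(9)
background = rng.standard_normal((5000, 4))            # stand-in MC background
observed = np.vstack([rng.standard_normal((950, 4)),
                      rng.standard_normal((50, 4)) + 4.0])  # hidden signal excess

bkg_model = GaussianMixture(n_components=5, random_state=9).fit(background)
loglik = bkg_model.score_samples(observed)
threshold = np.quantile(loglik, 0.05)                  # lowest-likelihood tail
anomalies = observed[loglik < threshold]
print(f"flagged {len(anomalies)} candidate anomalous events")
```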

  17. Classification of robust heteroclinic cycles for vector fields in R^3 with symmetry

    NASA Astrophysics Data System (ADS)

    Hawker, David; Ashwin, Peter

    2005-09-01

    We consider a classification of robust heteroclinic cycles in the positive octant of R^3 under the action of the symmetry group (Z_2)^3. We introduce a coding system to represent different classes up to a topological equivalence, and produce a characterization of all types of robust heteroclinic cycle that can arise in this situation. These cycles may or may not contain the origin within the cycle. We proceed to find a connection between our problem and meandric numbers. We find a direct correlation between the number of classes of robust heteroclinic cycle that do not include the origin and the 'Mercedes-Benz' sequence of integers characterizing meanders through a 'Y-shaped' configuration. We investigate upper and lower bounds for the number of classes possible for robust cycles between n equilibria, one of which may be the origin.

  18. Robust BMPM training based on second-order cone programming and its application in medical diagnosis.

    PubMed

    Peng, Xiang; King, Irwin

    2008-01-01

    The Biased Minimax Probability Machine (BMPM) constructs a classifier for imbalanced learning tasks. It provides a worst-case bound on the probability of misclassification of future data points based on reliable estimates of the means and covariance matrices of the classes from the training data samples, and achieves promising performance. In this paper, we develop a novel yet critical extension of the training algorithm for BMPM that is based on Second-Order Cone Programming (SOCP). Moreover, we apply the biased classification model to medical diagnosis problems to demonstrate its usefulness. By removing some crucial assumptions in the original solution to this model, we make the new method more accurate and robust. We outline the theoretical derivations of the biased classification model and reformulate it into an SOCP problem that can be efficiently solved with a global optimum guarantee. We evaluate our proposed SOCP-based BMPM (BMPMSOCP) scheme in comparison with traditional solutions on medical diagnosis tasks where the objective is to improve the sensitivity (the accuracy of the more important class, say "ill" samples) rather than the overall classification accuracy. Empirical results show that our method is more effective and robust in handling imbalanced classification problems than traditional classification approaches and than the original Fractional Programming-based BMPM (BMPMFP).
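
    A sketch of the unbiased minimax probability machine as an SOCP in cvxpy, using the standard reformulation (minimize the sum of the two class-covariance norms subject to a unit mean separation); the biased treatment that defines BMPM is omitted, so this illustrates the cone-programming machinery only. Data are synthetic assumptions.

```python
import numpy as np
import cvxpy as cp
from scipy.linalg import sqrtm

rng = np.random.default_rng(10)
X1 = rng.standard_normal((100, 5)) + 1.0     # "ill" class samples (stand-in)
X2 = rng.standard_normal((100, 5)) - 1.0     # "healthy" class samples

mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
S1 = np.real(sqrtm(np.cov(X1.T)))            # covariance square roots
S2 = np.real(sqrtm(np.cov(X2.T)))

a = cp.Variable(5)
# standard MPM reformulation: min ||S1 a|| + ||S2 a||  s.t.  a'(mu1 - mu2) = 1
prob = cp.Problem(cp.Minimize(cp.norm(S1 @ a) + cp.norm(S2 @ a)),
                  [(mu1 - mu2) @ a == 1])
prob.solve()

kappa = 1.0 / prob.value                     # worst-case margin parameter
b = float(mu1 @ a.value) - kappa * np.linalg.norm(S1 @ a.value)
print(f"worst-case misclassification bound: {1 / (1 + kappa ** 2):.3f}")
```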

  19. On the evaluation of the fidelity of supervised classifiers in the prediction of chimeric RNAs.

    PubMed

    Beaumeunier, Sacha; Audoux, Jérôme; Boureux, Anthony; Ruffle, Florence; Commes, Thérèse; Philippe, Nicolas; Alves, Ronnie

    2016-01-01

    High-throughput sequencing technology and bioinformatics have identified chimeric RNAs (chRNAs), raising the possibility that chRNAs expressed specifically in disease could be used as potential biomarkers for both diagnosis and prognosis. The task of discriminating true chRNAs from false ones poses an interesting Machine Learning (ML) challenge. First of all, the sequencing data may contain false reads due to technical artifacts, and during the analysis process bioinformatics tools may generate false positives due to methodological biases. Moreover, even if we succeed in obtaining a proper set of observations (enough sequencing data) about true chRNAs, chances are that the devised model will not be able to generalize beyond it. Like any other machine learning problem, the first big issue is finding good data to build models. To our knowledge, no common benchmark data are available for chRNA detection, and a classification baseline is lacking in the related literature as well. In this work we move towards benchmark data and an evaluation of the fidelity of supervised classifiers in the prediction of chRNAs. We propose a modelization strategy that can be used to increase tool performance in the context of chRNA classification, based on a simulated data generator that permits the continuous integration of new complex chimeric events. The pipeline incorporated a genome mutation process and simulated RNA-seq data. The reads within distinct depths were aligned and analysed by CRAC, which integrates genomic location and local coverage, allowing biological predictions at the read scale. Additionally, these reads were functionally annotated and aggregated to form chRNA events, making it possible to evaluate the performance of ML methods (classifiers) at both the read and event levels. Ensemble learning strategies proved more robust for this classification problem, providing an average AUC performance of 95% (ACC = 94%, Kappa = 0.87). The resulting classification models were also tested on real RNA-seq data from a set of twenty-seven patients with acute myeloid leukemia (AML).
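
    A sketch of the evaluation setup on stand-in features (the simulator's read/event features are not reproduced here): an ensemble classifier scored by cross-validated AUC.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)
X = rng.standard_normal((1000, 20))   # stand-in chRNA candidate features
y = rng.integers(0, 2, size=1000)     # true vs. false chimera labels

clf = RandomForestClassifier(n_estimators=300, random_state=11)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"mean AUC: {auc.mean():.3f}")
```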

  20. Algorithms exploiting ultrasonic sensors for subject classification

    NASA Astrophysics Data System (ADS)

    Desai, Sachi; Quoraishee, Shafik

    2009-09-01

    Proposed here is a series of techniques exploiting micro-Doppler ultrasonic sensors capable of characterizing detected mammalian targets based on their physiological movements, captured as a series of robust features. A combination of unique and conventional digital signal processing techniques is arranged in such a manner that it becomes capable of classifying a series of walkers. These feature extraction processes develop a robust feature space capable of discriminating movements generated by bipeds and quadrupeds, further subdivided into large and small. These movements can be exploited to provide specific information about a given signature, dividing it into a series of subset signatures, with wavelets used to generate start/stop times. After viewing a series of spectrograms of the signature, distinct differences are visible, and using kurtosis we generate an envelope detector capable of isolating each of the corresponding step cycles generated during a walk. The walk cycle is defined as one complete sequence of walking/running from the foot pushing off the ground and concluding when it returns to the ground. This time information segments the events, readily seen in the spectrogram but obstructed in the temporal domain, into individual walk sequences. Each walking sequence is then translated into a three-dimensional waterfall plot defining the expected energy value associated with the motion at a particular instance of time and frequency. This value is repeatable for each particular class and can be employed to discriminate the events. Highly reliable classification is realized by a classifier trained on a candidate sample space derived from the gyrations created by the motion of actors of interest. The classifier developed herein provides a capability to classify events as adult humans, children, horses, or dogs at potentially high rates based on the tested sample space. The algorithm developed and described here brings utility to an underused sensor modality for human intrusion detection, where high false-alarm rates are currently a problem. The active ultrasonic sensor, coupled in a multi-modal sensor suite with binary, less descriptive sensors such as seismic devices, can realize a greater accuracy rate for detecting persons of interest for homeland security purposes.
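
    A hedged sketch of the kurtosis-based envelope idea: sliding-window kurtosis over a synthetic micro-Doppler-like signal, with high-kurtosis windows marking candidate step events. Window length and threshold are illustrative assumptions.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(12)
fs = 4000
signal = 0.05 * rng.standard_normal(8 * fs)
for start in range(0, signal.size - 400, fs // 2):   # synthetic "steps"
    signal[start:start + 400] += np.hanning(400) * rng.standard_normal(400)

win, hop = 512, 128
centers, k = [], []
for s in range(0, signal.size - win, hop):
    centers.append(s + win // 2)
    k.append(kurtosis(signal[s:s + win]))   # impulsiveness per window
k = np.array(k)

threshold = k.mean() + k.std()
events = np.array(centers)[k > threshold] / fs   # candidate step times [s]
```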

  1. Using Unidimensional IRT Models for Dichotomous Classification via Computerized Classification Testing with Multidimensional Data.

    ERIC Educational Resources Information Center

    Lau, Che-Ming Allen; And Others

    This study focused on the robustness of unidimensional item response theory (UIRT) models in computerized classification testing against violation of the unidimensionality assumption. The study addressed whether UIRT models remain acceptable under various testing conditions and dimensionality strengths. Monte Carlo simulation techniques were used…

  2. Dimensionality-varied convolutional neural network for spectral-spatial classification of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Liu, Wanjun; Liang, Xuejian; Qu, Haicheng

    2017-11-01

    Hyperspectral image (HSI) classification is one of the most popular topics in the remote sensing community. Traditional and deep learning-based classification methods have been proposed constantly in recent years. In order to improve classification accuracy and robustness, a dimensionality-varied convolutional neural network (DVCNN) is proposed in this paper. DVCNN is a novel deep architecture based on the convolutional neural network (CNN). The input of DVCNN is a set of 3D patches selected from the HSI which contain spectral-spatial joint information. In the following feature extraction process, each patch is transformed into several different 1D vectors by 3D convolution kernels, which are able to extract features from spectral-spatial data. The rest of DVCNN is much the same as a general CNN and processes the 2D matrix constituted by all the 1D data. In this way, DVCNN not only extracts more accurate and richer features than CNN but also fuses spectral-spatial information to improve classification accuracy. Moreover, the robustness of the network on water-absorption bands is enhanced in the process of spectral-spatial fusion by 3D convolution, and the computation is simplified by the dimensionality-varied convolution. Experiments were performed on both the Indian Pines and Pavia University scene datasets, and the results show that the classification accuracy of DVCNN improved by 32.87% on Indian Pines and 19.63% on Pavia University scene over a spectral-only CNN. The maximum accuracy improvement of DVCNN was 13.72% compared with other state-of-the-art HSI classification methods, and the robustness of DVCNN to water-absorption band noise was demonstrated.
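
    A shape-level PyTorch sketch of the dimensionality-varied idea, with kernel sizes and channel counts as illustrative assumptions: a 3D convolution collapses each patch's spatial extent into 1D spectral vectors, which are then stacked into a 2D matrix for ordinary 2D convolution.

```python
import torch
import torch.nn as nn

patch = torch.randn(8, 1, 200, 7, 7)        # (batch, 1, bands, height, width)

conv3d = nn.Conv3d(1, 16, kernel_size=(7, 7, 7))   # full spatial extent -> 1D vectors
out3d = conv3d(patch)                       # (8, 16, 194, 1, 1)
spectral_vectors = out3d.squeeze(-1).squeeze(-1)   # (8, 16, 194): 16 1D vectors each

conv2d = nn.Conv2d(1, 32, kernel_size=3)    # treat stacked vectors as a 2D matrix
out2d = conv2d(spectral_vectors.unsqueeze(1))      # (8, 32, 14, 192)
print(out2d.shape)
```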

  3. Applications of Location Similarity Measures and Conceptual Spaces to Event Coreference and Classification

    ERIC Educational Resources Information Center

    McConky, Katie Theresa

    2013-01-01

    This work covers topics in event coreference and event classification from spoken conversation. Event coreference is the process of identifying descriptions of the same event across sentences, documents, or structured databases. Existing event coreference work focuses on sentence similarity models or feature based similarity models requiring slot…

  4. Automated classification of seismic sources in a large database: a comparison of Random Forests and Deep Neural Networks.

    NASA Astrophysics Data System (ADS)

    Hibert, Clement; Stumpf, André; Provost, Floriane; Malet, Jean-Philippe

    2017-04-01

    In the past decades, the increasing quality of seismic sensors and the capability to transfer large quantities of data remotely have led to a fast densification of local, regional, and global seismic networks for near-real-time monitoring of crustal and surface processes. This technological advance permits the use of seismology to document geological and natural/anthropogenic processes (volcanoes, ice calving, landslides, snow and rock avalanches, geothermal fields), but has also led to an ever-growing quantity of seismic data. This wealth of seismic data makes the construction of complete seismicity catalogs, which include earthquakes but also other sources of seismic waves, more challenging and very time-consuming, as this critical pre-processing stage is classically done by human operators and hundreds of thousands of seismic signals have to be processed. To overcome this issue, the development of automatic methods for processing continuous seismic data appears to be a necessity. The classification algorithm should be robust, precise, and versatile enough to be deployed to monitor seismicity in very different contexts. In this study, we evaluate the ability of machine learning algorithms, namely Random Forest and Deep Neural Network classifiers, to analyze seismic sources at the Piton de la Fournaise volcano. We gather a catalog of more than 20,000 events belonging to 8 classes of seismic sources. We define 60 attributes, based on the waveform, the frequency content, and the polarization of the seismic waves, to parameterize the recorded seismic signals. We show that both algorithms provide similar positive classification rates, with values exceeding 90% of the events. When trained with a sufficient number of events, the rate of positive identification can reach 99%. These very high rates of positive identification open the perspective of an operational implementation of these algorithms for near-real-time monitoring of mass movements and other environmental sources at the local, regional, and even global scale.

  5. Subsurface event detection and classification using Wireless Signal Networks.

    PubMed

    Yoon, Suk-Un; Ghazanfari, Ehsan; Cheng, Liang; Pamukcu, Sibel; Suleiman, Muhannad T

    2012-11-05

    Subsurface environment sensing and monitoring applications, such as detection of water intrusion or a landslide, which could significantly change the physical properties of the host soil, can be accomplished using a novel concept, Wireless Signal Networks (WSiNs). Wireless signal networks take advantage of the variations of radio signal strength on the distributed underground sensor nodes of WSiNs to monitor and characterize the sensed area. To characterize subsurface environments for event detection and classification, this paper provides a detailed list of soil properties and experimental data on how they affect radio propagation in subsurface communication environments. Experiments demonstrated that calibrated wireless signal strength variations can be used as indicators to sense changes in the subsurface environment. The WSiN concept for subsurface event detection is evaluated with applications such as detection of water intrusion, relative density change, and relative motion using actual underground sensor nodes. To classify geo-events using the measured signal strength as the main indicator, we propose a window-based minimum distance classifier based on Bayesian decision theory. The window-based classifier has two steps: event detection and event classification. Upon event detection, the classifier labels geo-events within the event-occurrence regions, called classification windows. The proposed window-based classification method is evaluated with a water leakage experiment in which the data were measured in laboratory experiments. In these experiments, the proposed detection and classification method based on wireless signal networks can detect and classify subsurface events.
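
    A sketch of a two-step window-based minimum distance classifier under stated assumptions (class names, thresholds, and RSS vectors are synthetic): a deviation from the calibrated baseline triggers detection, and the windowed measurement is assigned to the nearest class mean.

```python
import numpy as np

rng = np.random.default_rng(13)
classes = {"water_intrusion": rng.standard_normal((50, 8)) - 2.0,
           "density_change": rng.standard_normal((50, 8)) + 0.0,
           "relative_motion": rng.standard_normal((50, 8)) + 2.0}
means = {name: X.mean(axis=0) for name, X in classes.items()}

baseline = np.zeros(8)                       # calibrated quiescent RSS levels
window = rng.standard_normal(8) - 2.1        # measured RSS in one time window

if np.linalg.norm(window - baseline) > 3.0:  # step 1: event detection
    label = min(means, key=lambda n: np.linalg.norm(window - means[n]))
    print("detected geo-event:", label)      # step 2: classification
```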

  6. Subsurface Event Detection and Classification Using Wireless Signal Networks

    PubMed Central

    Yoon, Suk-Un; Ghazanfari, Ehsan; Cheng, Liang; Pamukcu, Sibel; Suleiman, Muhannad T.

    2012-01-01

    Subsurface environment sensing and monitoring applications, such as detection of water intrusion or a landslide, which could significantly change the physical properties of the host soil, can be accomplished using a novel concept, Wireless Signal Networks (WSiNs). Wireless signal networks take advantage of the variations of radio signal strength on the distributed underground sensor nodes of WSiNs to monitor and characterize the sensed area. To characterize subsurface environments for event detection and classification, this paper provides a detailed list of soil properties and experimental data on how they affect radio propagation in subsurface communication environments. Experiments demonstrated that calibrated wireless signal strength variations can be used as indicators to sense changes in the subsurface environment. The WSiN concept for subsurface event detection is evaluated with applications such as detection of water intrusion, relative density change, and relative motion using actual underground sensor nodes. To classify geo-events using the measured signal strength as the main indicator, we propose a window-based minimum distance classifier based on Bayesian decision theory. The window-based classifier has two steps: event detection and event classification. Upon event detection, the classifier labels geo-events within the event-occurrence regions, called classification windows. The proposed window-based classification method is evaluated with a water leakage experiment in which the data were measured in laboratory experiments. In these experiments, the proposed detection and classification method based on wireless signal networks can detect and classify subsurface events. PMID:23202191

  7. Hierarchical structure for audio-video based semantic classification of sports video sequences

    NASA Astrophysics Data System (ADS)

    Kolekar, M. H.; Sengupta, S.

    2005-07-01

    A hierarchical structure for sports event classification based on audio and video content analysis is proposed in this paper. Compared to event classification in other games, classification in cricket is very challenging and as yet unexplored. We have successfully solved the cricket video classification problem using a six-level hierarchical structure. The first level performs event detection based on audio energy and the Zero Crossing Rate (ZCR) of the short-time audio signal. In the subsequent levels, we classify the events based on video features using a Hidden Markov Model implemented through Dynamic Programming (HMM-DP), with color or motion as the likelihood function. For some of the game-specific decisions, a rule-based classification is also performed. Our proposed hierarchical structure can easily be applied to any other sport. Our results are very promising, and we have moved a step forward towards addressing semantic classification problems in general.
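
    A sketch of the first hierarchy level, with frame size and thresholds as illustrative assumptions: short-time energy and zero-crossing rate are computed per audio frame, and frames exceeding both thresholds are marked as candidate events.

```python
import numpy as np

rng = np.random.default_rng(14)
fs = 16000
audio = rng.standard_normal(10 * fs) * 0.1
audio[3 * fs:4 * fs] *= 8.0                   # a loud "event" second

frame = 512
n_frames = audio.size // frame
frames = audio[:n_frames * frame].reshape(n_frames, frame)

energy = np.mean(frames ** 2, axis=1)                        # short-time energy
zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)  # ZCR

is_event = (energy > 4 * np.median(energy)) & (zcr > 0.05)
event_times = np.flatnonzero(is_event) * frame / fs          # seconds
```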

  8. Peak Detection Method Evaluation for Ion Mobility Spectrometry by Using Machine Learning Approaches

    PubMed Central

    Hauschild, Anne-Christin; Kopczynski, Dominik; D’Addario, Marianna; Baumbach, Jörg Ingo; Rahmann, Sven; Baumbach, Jan

    2013-01-01

    Ion mobility spectrometry with pre-separation by multi-capillary columns (MCC/IMS) has become an established inexpensive, non-invasive bioanalytics technology for detecting volatile organic compounds (VOCs) with various metabolomics applications in medical research. To pave the way for this technology towards daily usage in medical practice, different steps still have to be taken. With respect to modern biomarker research, one of the most important tasks is the automatic classification of patient-specific data sets into different groups, healthy or not, for instance. Although sophisticated machine learning methods exist, an inevitable preprocessing step is reliable and robust peak detection without manual intervention. In this work we evaluate four state-of-the-art approaches for automated IMS-based peak detection: local maxima search, watershed transformation with IPHEx, region-merging with VisualNow, and peak model estimation (PME). We manually generated a gold standard with the aid of a domain expert (manual) and compare the performance of the four peak calling methods with respect to two distinct criteria. We first utilize established machine learning methods and systematically study their classification performance based on the four peak detectors’ results. Second, we investigate the classification variance and robustness regarding perturbation and overfitting. Our main finding is that the power of the classification accuracy is almost equally good for all methods, the manually created gold standard as well as the four automatic peak finding methods. In addition, we note that all tools, manual and automatic, are similarly robust against perturbations. However, the classification performance is more robust against overfitting when using the PME as peak calling preprocessor. In summary, we conclude that all methods, though small differences exist, are largely reliable and enable a wide spectrum of real-world biomedical applications. PMID:24957992
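
    As an illustration of the simplest of the four compared detectors, local maxima search, the sketch below applies scipy.signal.find_peaks to a synthetic one-dimensional spectrum; the prominence and spacing settings are assumptions, not the tools' actual parameters.

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(15)
drift = np.linspace(0, 1, 2500)
spectrum = (np.exp(-((drift - 0.3) / 0.01) ** 2) +
            0.6 * np.exp(-((drift - 0.55) / 0.008) ** 2) +
            0.05 * rng.standard_normal(drift.size))   # two synthetic peaks

peaks, props = find_peaks(spectrum, prominence=0.2, distance=50)
print("peak drift positions:", drift[peaks])
```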

  9. Peak detection method evaluation for ion mobility spectrometry by using machine learning approaches.

    PubMed

    Hauschild, Anne-Christin; Kopczynski, Dominik; D'Addario, Marianna; Baumbach, Jörg Ingo; Rahmann, Sven; Baumbach, Jan

    2013-04-16

    Ion mobility spectrometry with pre-separation by multi-capillary columns (MCC/IMS) has become an established inexpensive, non-invasive bioanalytics technology for detecting volatile organic compounds (VOCs) with various metabolomics applications in medical research. To pave the way for this technology towards daily usage in medical practice, different steps still have to be taken. With respect to modern biomarker research, one of the most important tasks is the automatic classification of patient-specific data sets into different groups, healthy or not, for instance. Although sophisticated machine learning methods exist, an inevitable preprocessing step is reliable and robust peak detection without manual intervention. In this work we evaluate four state-of-the-art approaches for automated IMS-based peak detection: local maxima search, watershed transformation with IPHEx, region-merging with VisualNow, and peak model estimation (PME). We manually generated a gold standard with the aid of a domain expert (manual) and compare the performance of the four peak calling methods with respect to two distinct criteria. We first utilize established machine learning methods and systematically study their classification performance based on the four peak detectors' results. Second, we investigate the classification variance and robustness regarding perturbation and overfitting. Our main finding is that the power of the classification accuracy is almost equally good for all methods, the manually created gold standard as well as the four automatic peak finding methods. In addition, we note that all tools, manual and automatic, are similarly robust against perturbations. However, the classification performance is more robust against overfitting when using the PME as peak calling preprocessor. In summary, we conclude that all methods, though small differences exist, are largely reliable and enable a wide spectrum of real-world biomedical applications.

  10. Integrated Low-Rank-Based Discriminative Feature Learning for Recognition.

    PubMed

    Zhou, Pan; Lin, Zhouchen; Zhang, Chao

    2016-05-01

    Feature learning plays a central role in pattern recognition. In recent years, many representation-based feature learning methods have been proposed and have achieved great success in many applications. However, these methods perform feature learning and subsequent classification in two separate steps, which may not be optimal for recognition tasks. In this paper, we present a supervised low-rank-based approach for learning discriminative features. By integrating latent low-rank representation (LatLRR) with a ridge regression-based classifier, our approach combines feature learning with classification so that the regularized classification error is minimized. In this way, the extracted features are more discriminative for the recognition tasks. Our approach benefits from a recent discovery on the closed-form solutions to noiseless LatLRR. When there is noise, a robust Principal Component Analysis (PCA)-based denoising step can be added as preprocessing. When the scale of a problem is large, we utilize a fast randomized algorithm to speed up the computation of robust PCA. Extensive experimental results demonstrate the effectiveness and robustness of our method.

  11. Designing and Implementing a Retrospective Earthquake Detection Framework at the U.S. Geological Survey National Earthquake Information Center

    NASA Astrophysics Data System (ADS)

    Patton, J.; Yeck, W.; Benz, H.

    2017-12-01

    The U.S. Geological Survey National Earthquake Information Center (USGS NEIC) is implementing and integrating new signal detection methods, such as subspace correlation, continuous beamforming, multi-band picking, and automatic phase identification, into near-real-time monitoring operations. Leveraging the additional information from these techniques helps the NEIC utilize a large and varied network on local to global scales. The NEIC is developing an ordered, rapid, robust, and decentralized framework for distributing seismic detection data, as well as a set of formalized formatting standards. These frameworks and standards enable the NEIC to implement a seismic event detection framework that supports basic tasks, including automatic arrival-time picking, social-media-based event detection, and automatic association of different seismic detection data into seismic earthquake events. In addition, this framework enables retrospective detection processing such as automated S-wave arrival-time picking given a detected event, discrimination and classification of detected events by type, back-azimuth and slowness calculations, and ensuring aftershock and induced-sequence detection completeness. These processes and infrastructure improve the NEIC's capabilities, accuracy, and speed of response. The same infrastructure also provides an improved and convenient structure for access to automatic detection data for both research and algorithmic development.

  12. Evolutionary hierarchy of vertebrate-like heterotrimeric G protein families.

    PubMed

    Krishnan, Arunkumar; Mustafa, Arshi; Almén, Markus Sällman; Fredriksson, Robert; Williams, Michael J; Schiöth, Helgi B

    2015-10-01

    Heterotrimeric G proteins perform a crucial role as molecular switches controlling various cellular responses mediated by the G protein-coupled receptor (GPCR) signaling pathway. Recent data have shown that the vertebrate-like G protein families are found across metazoans and their closest unicellular relatives. However, an overall evolutionary hierarchy of vertebrate-like G proteins, including gene family annotations and in particular mapping individual gene gain/loss events across diverse holozoan lineages, is still incomplete. Here, with more expanded invertebrate taxon sampling, we have reconstructed phylogenetic trees for each of the G protein classes/families and provide a robust classification and hierarchy of vertebrate-like heterotrimeric G proteins. Our results further extend the evidence that the common ancestor (CA) of holozoans had at least five ancestral Gα genes corresponding to all major vertebrate Gα classes and contained a total of eight genes, including two Gβ and one Gγ. Our results also indicate that the GNAI/O-like gene likely duplicated in the last CA of metazoans to give rise to GNAI- and GNAO-like genes, which are conserved across invertebrates. Moreover, homologs of GNB1-4 paralogon- and GNB5 family-like genes are found in most metazoans, while the unicellular holozoans encode two ancestral Gβ genes. Similarly, most bilaterian invertebrates encode two Gγ genes, which include a representative of the GNG gene cluster and a putative homolog of GNG13. Interestingly, our results also revealed key evolutionary events such as the Drosophila melanogaster eye-specific Gβ subunit, which is conserved in most arthropods, and several previously unidentified species-specific expansions within the Gαi/o, Gαs, Gαq, Gα12/13 classes and the GNB1-4 paralogon. We also propose an overall evolutionary scenario for the expansions of all G protein families in the vertebrate tetraploidizations. Our robust classification/hierarchy is essential to further understand the differential roles of the GPCR/G protein mediated intracellular signaling system across various metazoan lineages. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Automated Classification of Medical Percussion Signals for the Diagnosis of Pulmonary Injuries

    NASA Astrophysics Data System (ADS)

    Bhuiyan, Md Moinuddin

    Used for centuries in clinical practice, audible percussion is a method of eliciting sounds from areas of the human body by striking them with either the fingertips or a percussion hammer. Despite its advantages, pulmonary diagnostics by percussion is still highly subjective, depends on the physician's skills, and requires quiet surroundings. Automation of this well-established technique could help amplify its existing merits while removing the above drawbacks. In this study, an attempt is made to automatically decompose clinical percussion signals into a sum of Exponentially Damped Sinusoids (EDS) using the Matrix Pencil Method, which in this case form a more natural basis than Fourier harmonics and thus allow for a more robust representation of the signal in the parametric space. It is found that some EDS represent transient oscillation modes of the thorax/abdomen excited by the percussion event, while others are associated with noise. It is demonstrated that relatively few EDS are usually enough to accurately reconstruct the original signal. It is shown that combining the frequency and damping parameters of these most significant EDS allows for efficient classification of percussion signals into the two main types historically known as "resonant" and "tympanic". This classification ability can provide a basis for the automated objective diagnostics of various pulmonary pathologies including pneumothorax.
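
    A hedged sketch of the matrix pencil estimation of EDS parameters follows; it is the idealized noiseless variant (the SVD-based noise filtering a practical implementation would need is omitted), and the pencil parameter and sampling settings are illustrative.

    ```python
    # Matrix pencil estimation of exponentially damped sinusoids:
    # x[n] = sum_k a_k * exp((alpha_k + 1j*2*pi*f_k) * n * dt).
    import numpy as np

    def matrix_pencil(x, M, dt):
        """Return (frequencies in Hz, damping factors in 1/s) for M components."""
        N = len(x)
        # Hankel data matrices shifted by one sample.
        Y0 = np.array([x[i:i + M] for i in range(N - M)])
        Y1 = np.array([x[i + 1:i + M + 1] for i in range(N - M)])
        # Signal poles are the eigenvalues of pinv(Y0) @ Y1.
        z = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
        return np.angle(z) / (2 * np.pi * dt), np.log(np.abs(z)) / dt

    # Toy usage: one real damped sinusoid at 100 Hz, damping -20 1/s
    # (a conjugate pole pair, hence M=2).
    dt = 1e-3
    n = np.arange(300)
    x = np.exp((-20 + 2j * np.pi * 100) * n * dt).real
    print(matrix_pencil(x, M=2, dt=dt))  # ~(+/-100 Hz, -20 1/s)
    ```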

  14. Music-Elicited Emotion Identification Using Optical Flow Analysis of Human Face

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.; Smirnova, Z. N.

    2015-05-01

    Human emotion identification from image sequences is highly demanded nowadays. The range of possible applications can vary from an automatic smile shutter function of consumer grade digital cameras to Biofied Building technologies, which enables communication between building space and residents. The highly perceptual nature of human emotions leads to the complexity of their classification and identification. The main question arises from the subjective quality of emotional classification of events that elicit human emotions. A variety of methods for formal classification of emotions were developed in musical psychology. This work is focused on identification of human emotions evoked by musical pieces using human face tracking and optical flow analysis. Facial feature tracking algorithm used for facial feature speed and position estimation is presented. Facial features were extracted from each image sequence using human face tracking with local binary patterns (LBP) features. Accurate relative speeds of facial features were estimated using optical flow analysis. Obtained relative positions and speeds were used as the output facial emotion vector. The algorithm was tested using original software and recorded image sequences. The proposed technique proves to give a robust identification of human emotions elicited by musical pieces. The estimated models could be used for human emotion identification from image sequences in such fields as emotion based musical background or mood dependent radio.

  15. Real-time detection and classification of anomalous events in streaming data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferragut, Erik M.; Goodall, John R.; Iannacone, Michael D.

    2016-04-19

    A system is described for receiving a stream of events and scoring the events based on anomalousness and maliciousness (or other classification). The events can be displayed to a user in user-defined groupings in an animated fashion. The system can include a plurality of anomaly detectors that together implement an algorithm to identify low probability events and detect atypical traffic patterns. The atypical traffic patterns can then be classified as being of interest or not. In one particular example, in a network environment, the classification can be whether the network traffic is malicious or not.

  16. Seismic event classification system

    DOEpatents

    Dowla, F.U.; Jarpe, S.P.; Maurer, W.

    1994-12-13

    In the computer interpretation of seismic data, the critical first step is to identify the general class of an unknown event. For example, the classification might be: teleseismic, regional, local, vehicular, or noise. Self-organizing neural networks (SONNs) can be used for classifying such events. Both Kohonen and Adaptive Resonance Theory (ART) SONNs are useful for this purpose. Given the detection of a seismic event and the corresponding signal, computation is made of: the time-frequency distribution, its binary representation, and finally a shift-invariant representation, which is the magnitude of the two-dimensional Fourier transform (2-D FFT) of the binary time-frequency distribution. This pre-processed input is fed into the SONNs. These neural networks are able to group events that look similar. The ART SONN has an advantage in classifying the event because the types of cluster groups do not need to be pre-defined. The results from the SONNs together with an expert seismologist's classification are then used to derive event classification probabilities. 21 figures.
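
    The pre-processing chain described in this record (time-frequency distribution, binarization, magnitude of the 2-D FFT) is easy to sketch; the spectrogram settings and binarization threshold below are illustrative assumptions, and the SONN clustering stage is not shown.

    ```python
    # Shift-invariant representation: spectrogram -> binary image -> |2-D FFT|.
    # The FFT magnitude is unchanged by circular shifts in time and frequency.
    import numpy as np
    from scipy.signal import spectrogram

    def shift_invariant_representation(x, fs, threshold_db=-40.0):
        f, t, S = spectrogram(x, fs=fs, nperseg=128)
        log_s = 10 * np.log10(S + 1e-12)
        # Binarize: keep cells within threshold_db of the peak log-power.
        binary = (log_s > log_s.max() + threshold_db).astype(float)
        return np.abs(np.fft.fft2(binary))

    # Toy usage: a chirp-like test signal.
    fs = 1000.0
    tt = np.arange(0, 2, 1 / fs)
    x = np.sin(2 * np.pi * (50 + 40 * tt) * tt)
    print(shift_invariant_representation(x, fs).shape)
    ```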

  17. Seismic event classification system

    DOEpatents

    Dowla, Farid U.; Jarpe, Stephen P.; Maurer, William

    1994-01-01

    In the computer interpretation of seismic data, the critical first step is to identify the general class of an unknown event. For example, the classification might be: teleseismic, regional, local, vehicular, or noise. Self-organizing neural networks (SONNs) can be used for classifying such events. Both Kohonen and Adaptive Resonance Theory (ART) SONNs are useful for this purpose. Given the detection of a seismic event and the corresponding signal, computation is made of: the time-frequency distribution, its binary representation, and finally a shift-invariant representation, which is the magnitude of the two-dimensional Fourier transform (2-D FFT) of the binary time-frequency distribution. This pre-processed input is fed into the SONNs. These neural networks are able to group events that look similar. The ART SONN has an advantage in classifying the event because the types of cluster groups do not need to be pre-defined. The results from the SONNs together with an expert seismologist's classification are then used to derive event classification probabilities.

  18. Neural robust stabilization via event-triggering mechanism and adaptive learning technique.

    PubMed

    Wang, Ding; Liu, Derong

    2018-06-01

    The robust control synthesis of continuous-time nonlinear systems with an uncertain term is investigated via an event-triggering mechanism and adaptive critic learning technique. We mainly focus on combining the event-triggering mechanism with adaptive critic designs, so as to solve the nonlinear robust control problem. This can not only make better use of computation and communication resources, but also conduct controller design from the viewpoint of intelligent optimization. Through theoretical analysis, the nonlinear robust stabilization can be achieved by obtaining an event-triggered optimal control law of the nominal system with a newly defined cost function and a certain triggering condition. The adaptive critic technique is employed to facilitate the event-triggered control design, where a neural network is introduced as an approximator of the learning phase. The performance of the event-triggered robust control scheme is validated via simulation studies and comparisons. The present method extends the application domain of both event-triggered control and adaptive critic control to nonlinear systems possessing dynamical uncertainties. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Advanced eddy current test signal analysis for steam generator tube defect classification and characterization

    NASA Astrophysics Data System (ADS)

    McClanahan, James Patrick

    Eddy Current Testing (ECT) is a Non-Destructive Examination (NDE) technique that is widely used in power generating plants (both nuclear and fossil) to test the integrity of heat exchanger (HX) and steam generator (SG) tubing. Specifically for this research, laboratory-generated, flawed tubing data were examined. The purpose of this dissertation is to develop and implement an automated method for the classification and an advanced characterization of defects in HX and SG tubing. These two improvements enhanced the robustness of characterization as compared to traditional bobbin-coil ECT data analysis methods. A more robust classification and characterization of the tube flaw in-situ (while the SG is on-line but not when the plant is operating) should provide valuable information to the power industry. The following are the conclusions reached from this research. A feature extraction program acquiring relevant information from both the mixed, absolute and differential data was successfully implemented. The CWT was utilized to extract more information from the mixed, complex differential data. Image processing techniques used to extract the information contained in the generated CWT classified the data with a high success rate. The data were accurately classified, utilizing the compressed feature vector and using a Bayes classification system. An estimation of the upper bound for the probability of error, using the Bhattacharyya distance, was successfully applied to the Bayesian classification. The classified data were separated according to flaw-type (classification) to enhance characterization. The characterization routine used dedicated, flaw-type specific ANNs that made the characterization of the tube flaw more robust. The inclusion of outliers may help complete the feature space so that classification accuracy is increased. Given that the eddy current test signals appear very similar, there may not be sufficient information to make an extremely accurate (>95%) classification or an advanced characterization using this system. It is necessary to have a larger database for more accurate system learning.
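
    As an illustration of the error-bound step mentioned above, the sketch below computes the Bhattacharyya distance between two Gaussian class models and the corresponding upper bound on the Bayes error; the class parameters are toy assumptions.

    ```python
    # Bhattacharyya upper bound on Bayes error for two Gaussian classes:
    # P_e <= sqrt(P1 * P2) * exp(-D_B).
    import numpy as np

    def bhattacharyya_distance(mu1, cov1, mu2, cov2):
        cov = 0.5 * (cov1 + cov2)
        diff = mu2 - mu1
        term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
        term2 = 0.5 * np.log(np.linalg.det(cov) /
                             np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
        return term1 + term2

    def bayes_error_bound(mu1, cov1, mu2, cov2, p1=0.5):
        db = bhattacharyya_distance(mu1, cov1, mu2, cov2)
        return np.sqrt(p1 * (1 - p1)) * np.exp(-db)

    # Toy usage: two 2-D Gaussian classes two units apart.
    print(bayes_error_bound(np.zeros(2), np.eye(2),
                            np.array([2.0, 0.0]), np.eye(2)))  # ~0.30
    ```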

  20. Formulation and Optimization of Robust Sensor Placement Problems for Drinking Water Contamination Warning Systems

    DOE PAGES

    Watson, Jean-Paul; Murray, Regan; Hart, William E.

    2009-11-13

    We report that the sensor placement problem in contamination warning system design for municipal water distribution networks involves maximizing the protection level afforded by limited numbers of sensors, typically quantified as the expected impact of a contamination event; the issue of how to mitigate against high-consequence events is either handled implicitly or ignored entirely. Consequently, expected-case sensor placements run the risk of failing to protect against high-consequence 9/11-style attacks. In contrast, robust sensor placements address this concern by focusing strictly on high-consequence events and placing sensors to minimize the impact of these events. We introduce several robust variations of the sensor placement problem, distinguished by how they quantify the potential damage due to high-consequence events. We explore the nature of robust versus expected-case sensor placements on three real-world large-scale distribution networks. We find that robust sensor placements can yield large reductions in the number and magnitude of high-consequence events, with only modest increases in expected impact. Finally, the ability to trade-off between robust and expected-case impacts is a key unexplored dimension in contamination warning system design.
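
    A toy sketch of the worst-case placement idea follows: given an impact matrix with one row per contamination scenario and one column per candidate sensor location, greedily pick locations that most reduce the maximum scenario impact. This is an illustrative heuristic on assumed data, not one of the paper's formulations.

    ```python
    # Greedy minimax sensor placement on a scenario-by-location impact matrix.
    import numpy as np

    def greedy_robust_placement(impact, budget):
        n_scen, n_loc = impact.shape
        chosen = []
        current = np.full(n_scen, np.inf)   # impact per scenario so far
        for _ in range(budget):
            # Pick the location that most reduces the worst-case impact.
            worst = [np.minimum(current, impact[:, j]).max() for j in range(n_loc)]
            j_best = int(np.argmin(worst))
            chosen.append(j_best)
            current = np.minimum(current, impact[:, j_best])
        return chosen, current.max()

    rng = np.random.default_rng(2)
    impact = rng.uniform(0, 100, (50, 10))  # 50 scenarios, 10 candidate locations
    print(greedy_robust_placement(impact, budget=3))
    ```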

  1. A Real-Time Infrared Ultra-Spectral Signature Classification Method via Spatial Pyramid Matching

    PubMed Central

    Mei, Xiaoguang; Ma, Yong; Li, Chang; Fan, Fan; Huang, Jun; Ma, Jiayi

    2015-01-01

    The state-of-the-art ultra-spectral sensor technology brings new hope for high precision applications due to its high spectral resolution. However, it also comes with new challenges, such as the high data dimension and noise problems. In this paper, we propose a real-time method for infrared ultra-spectral signature classification via spatial pyramid matching (SPM), which includes two aspects. First, we introduce an infrared ultra-spectral signature similarity measure method via SPM, which is the foundation of the matching-based classification method. Second, we propose the classification method with reference spectral libraries, which utilizes the SPM-based similarity for the real-time infrared ultra-spectral signature classification with robust performance. Specifically, instead of matching with each spectrum in the spectral library, our method is based on feature matching, which includes a feature library-generating phase. We calculate the SPM-based similarity between the feature of the spectrum and that of each spectrum of the reference feature library, then take the class index of the corresponding spectrum having the maximum similarity as the final result. Experimental comparisons on two publicly-available datasets demonstrate that the proposed method effectively improves the real-time classification performance and robustness to noise. PMID:26205263
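
    The matching step can be illustrated with a hedged 1-D adaptation of spatial pyramid matching: split a spectrum into progressively finer segments, histogram each segment, and sum weighted histogram intersections (coarser levels down-weighted, a common SPM convention). The paper's feature library construction is not reproduced here.

    ```python
    # 1-D spatial-pyramid-style similarity between two spectra.
    import numpy as np

    def spm_similarity(x, y, levels=3, bins=16):
        lo, hi = min(x.min(), y.min()), max(x.max(), y.max())
        total = 0.0
        for l in range(levels):
            w = 1.0 / 2 ** (levels - 1 - l)            # finer levels weigh more
            for seg_x, seg_y in zip(np.array_split(x, 2 ** l),
                                    np.array_split(y, 2 ** l)):
                hx, _ = np.histogram(seg_x, bins=bins, range=(lo, hi))
                hy, _ = np.histogram(seg_y, bins=bins, range=(lo, hi))
                total += w * np.minimum(hx, hy).sum()  # histogram intersection
        return total

    # Classification assigns the class of the library spectrum with the
    # maximum similarity to the query.
    rng = np.random.default_rng(3)
    a, b = rng.normal(0, 1, 256), rng.normal(0, 1, 256)
    print(spm_similarity(a, a) >= spm_similarity(a, b))  # True: self-similarity is maximal
    ```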

  2. Analysis of signals under compositional noise with applications to SONAR data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tucker, J. Derek; Wu, Wei; Srivastava, Anuj

    2013-07-09

    In this paper, we consider the problem of denoising and classification of SONAR signals observed under compositional noise, i.e., they have been warped randomly along the x-axis. The traditional techniques do not account for such noise and, consequently, cannot provide a robust classification of signals. We apply a recent framework that: 1) uses a distance-based objective function for data alignment and noise reduction; and 2) leads to warping-invariant distances between signals for robust clustering and classification. We use this framework to introduce two distances that can be used for signal classification: a) a y-distance, which is the distance between the aligned signals; and b) an x-distance that measures the amount of warping needed to align the signals. We focus on the task of clustering and classifying objects, using acoustic spectrum (acoustic color), which is complicated by the uncertainties in aspect angles at data collections. Small changes in the aspect angles corrupt signals in a way that amounts to compositional noise. As a result, we demonstrate the use of the developed metrics in classification of acoustic color data and highlight improvements in signal classification over current methods.

  3. Dimensionality-varied deep convolutional neural network for spectral-spatial classification of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Qu, Haicheng; Liang, Xuejian; Liang, Shichao; Liu, Wanjun

    2018-01-01

    Many methods of hyperspectral image classification have been proposed recently, and the convolutional neural network (CNN) achieves outstanding performance. However, spectral-spatial classification with CNNs requires an excessively large model, tremendous computation, and a complex network, and CNNs are generally unable to use the noisy bands caused by water-vapor absorption. A dimensionality-varied CNN (DV-CNN) is proposed to address these issues. DV-CNN comprises four stages, and the dimensionalities of the spectral-spatial feature maps vary across stages. DV-CNN can reduce the computation and simplify the structure of the network. All feature maps are processed by more kernels in higher stages to extract more precise features. DV-CNN also improves the classification accuracy and enhances the robustness to water-vapor absorption bands. The experiments are performed on the Indian Pines and Pavia University scenes. The classification performance of DV-CNN is compared with that of state-of-the-art methods, including CNN variants as well as traditional and other deep learning methods. A performance analysis of DV-CNN itself is also carried out. The experimental results demonstrate that DV-CNN outperforms state-of-the-art methods for spectral-spatial classification and is also robust to water-vapor absorption bands. Moreover, reasonable parameter selection is effective in improving classification accuracy.

  4. Robust artifactual independent component classification for BCI practitioners.

    PubMed

    Winkler, Irene; Brandl, Stephanie; Horn, Franziska; Waldburger, Eric; Allefeld, Carsten; Tangermann, Michael

    2014-06-01

    EEG artifacts of non-neural origin can be separated from neural signals by independent component analysis (ICA). It is unclear (1) how robustly recently proposed artifact classifiers transfer to novel users, novel paradigms or changed electrode setups, and (2) how artifact cleaning by a machine learning classifier impacts the performance of brain-computer interfaces (BCIs). Addressing (1), the robustness of different strategies with respect to the transfer between paradigms and electrode setups of a recently proposed classifier is investigated on offline data from 35 users and 3 EEG paradigms, which contain 6303 expert-labeled components from two ICA and preprocessing variants. Addressing (2), the effect of artifact removal on single-trial BCI classification is estimated on BCI trials from 101 users and 3 paradigms. We show that (1) the proposed artifact classifier generalizes to completely different EEG paradigms. To obtain similar results under massively reduced electrode setups, a proposed novel strategy improves artifact classification. Addressing (2), ICA artifact cleaning has little influence on average BCI performance when analyzed by state-of-the-art BCI methods. When slow motor-related features are exploited, performance varies strongly between individuals, as artifacts may obstruct relevant neural activity or are inadvertently used for BCI control. Robustness of the proposed strategies can be reproduced by EEG practitioners as the method is made available as an EEGLAB plug-in.

  5. Robust diagnosis of non-Hodgkin lymphoma phenotypes validated on gene expression data from different laboratories.

    PubMed

    Bhanot, Gyan; Alexe, Gabriela; Levine, Arnold J; Stolovitzky, Gustavo

    2005-01-01

    A major challenge in cancer diagnosis from microarray data is the need for robust, accurate classification models which are independent of the analysis techniques used and can combine data from different laboratories. We propose such a classification scheme originally developed for phenotype identification from mass spectrometry data. The method uses a robust multivariate gene selection procedure and combines the results of several machine learning tools trained on raw and pattern data to produce an accurate meta-classifier. We illustrate and validate our method by applying it to gene expression datasets: the oligonucleotide HuGeneFL microarray dataset of Shipp et al. (www.genome.wi.mit.du/MPR/lymphoma) and the Hu95Av2 Affymetrix dataset (DallaFavera's laboratory, Columbia University). Our pattern-based meta-classification technique achieves higher predictive accuracies than each of the individual classifiers, is robust against data perturbations and provides subsets of related predictive genes. Our techniques predict that combinations of some genes in the p53 pathway are highly predictive of phenotype. In particular, we find that in 80% of DLBCL cases the mRNA level of at least one of the three genes p53, PLK1 and CDK2 is elevated, while in 80% of FL cases, the mRNA level of at most one of them is elevated.

  6. Bin Ratio-Based Histogram Distances and Their Application to Image Classification.

    PubMed

    Hu, Weiming; Xie, Nianhua; Hu, Ruiguang; Ling, Haibin; Chen, Qiang; Yan, Shuicheng; Maybank, Stephen

    2014-12-01

    Large variations in image background may cause partial matching and normalization problems for histogram-based representations, i.e., the histograms of the same category may have bins which are significantly different, and normalization may produce large changes in the differences between corresponding bins. In this paper, we deal with this problem by using the ratios between bin values of histograms, rather than bin values' differences which are used in the traditional histogram distances. We propose a bin ratio-based histogram distance (BRD), which is an intra-cross-bin distance, in contrast with previous bin-to-bin distances and cross-bin distances. The BRD is robust to partial matching and histogram normalization, and captures correlations between bins with only a linear computational complexity. We combine the BRD with the ℓ1 histogram distance and the χ² histogram distance to generate the ℓ1 BRD and the χ² BRD, respectively. These combinations exploit and benefit from the robustness of the BRD under partial matching and the robustness of the ℓ1 and χ² distances to small noise. We propose a method for assessing the robustness of histogram distances to partial matching. The BRDs and logistic regression-based histogram fusion are applied to image classification. The experimental results on synthetic data sets show the robustness of the BRDs to partial matching, and the experiments on seven benchmark data sets demonstrate promising results of the BRDs for image classification.
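
    The following toy sketch conveys only the underlying bin-ratio intuition (comparing pairwise bin ratios rather than per-bin differences, which makes a global rescaling of one histogram irrelevant); it is not the paper's BRD formula or its ℓ1/χ² combinations.

    ```python
    # Compare histograms through matrices of pairwise bin ratios.
    import numpy as np

    def ratio_matrix(h, eps=1e-9):
        h = np.asarray(h, dtype=float) + eps
        return h[:, None] / h[None, :]            # R[i, j] = h_i / h_j

    def bin_ratio_dissimilarity(h1, h2):
        return np.abs(ratio_matrix(h1) - ratio_matrix(h2)).sum()

    h = np.array([4.0, 2.0, 1.0, 3.0])
    print(bin_ratio_dissimilarity(h, 2.5 * h))  # ~0: ratios ignore global rescaling
    print(bin_ratio_dissimilarity(h, h[::-1]))  # > 0 for a genuinely different histogram
    ```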

  7. Neutral face classification using personalized appearance models for fast and robust emotion detection.

    PubMed

    Chiranjeevi, Pojala; Gopalakrishnan, Viswanath; Moogi, Pratibha

    2015-09-01

    Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for various supervised learning-based facial expression recognition methods. This is due to the fact that supervised methods cannot accommodate all appearance variability across the faces with respect to race, pose, lighting, facial biases, and so on, in the limited amount of training data. Moreover, processing each and every frame to classify emotions is not required, as the user stays neutral for the majority of the time in typical applications like video chat or photo album/web browsing. Detecting the neutral state at an early stage and thereby excluding those frames from emotion classification saves computational power. In this paper, we propose a light-weight neutral versus emotion classification engine, which acts as a preprocessor to the traditional supervised emotion classification approaches. It dynamically learns neutral appearance at key emotion (KE) points using a statistical texture model, constructed by a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motions by accounting for affine distortions based on a statistical texture model. Robustness to dynamic shift of KE points is achieved by evaluating the similarities on a subset of neighborhood patches around each KE point using the prior information regarding the directionality of specific facial action units acting on the respective KE point. The proposed method, as a result, improves emotion recognition (ER) accuracy and simultaneously reduces computational complexity of the ER system, as validated on multiple databases.

  8. Robustness of NMR-based metabolomics to generate comparable data sets for olive oil cultivar classification. An inter-laboratory study on Apulian olive oils.

    PubMed

    Piccinonna, Sara; Ragone, Rosa; Stocchero, Matteo; Del Coco, Laura; De Pascali, Sandra Angelica; Schena, Francesco Paolo; Fanizzi, Francesco Paolo

    2016-05-15

    Nuclear Magnetic Resonance (NMR) spectroscopy is emerging as a powerful technique in olive oil fingerprinting, but its analytical robustness has to be proved. Here, we report a comparative study between two laboratories on olive oil ¹H NMR fingerprinting, aiming to demonstrate the robustness of NMR-based metabolomics in generating comparable data sets for cultivar classification. Sample preparation and data acquisition were performed independently in two laboratories, equipped with different resolution spectrometers (400 and 500 MHz), using two identical sets of mono-varietal olive oils. Partial Least Squares (PLS)-based techniques were applied to compare the data sets produced by the two laboratories. Despite differences in spectrum baseline, and in intensity and shape of peaks, the amount of shared information was significant (almost 70%) and related to cultivar (same metabolites discriminated between cultivars). In conclusion, regardless of the variability due to operator and machine, the data sets from the two participating units were comparable for the purpose of classification. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Using robust principal component analysis to alleviate day-to-day variability in EEG based emotion classification.

    PubMed

    Ping-Keng Jao; Yuan-Pin Lin; Yi-Hsuan Yang; Tzyy-Ping Jung

    2015-08-01

    An emerging challenge for emotion classification using electroencephalography (EEG) is how to effectively alleviate day-to-day variability in raw data. This study employed robust principal component analysis (RPCA) to address the problem, with the hypothesis that background or emotion-irrelevant EEG perturbations lead to certain variability across days and somehow submerge emotion-related EEG dynamics. The empirical results of this study clearly validated our hypothesis and demonstrated the RPCA's feasibility through the analysis of a five-day dataset of 12 subjects. The RPCA allowed separating the sparse emotion-relevant EEG dynamics from the accompanying background perturbations across days. Subsequently, leveraging the RPCA-purified EEG trials from more days appeared to improve the emotion-classification performance steadily, which was not found in the case using the raw EEG features. Therefore, incorporating the RPCA with existing emotion-aware machine-learning frameworks on a longitudinal dataset of each individual may shed light on the development of a robust affective brain-computer interface (ABCI) that can alleviate ecological inter-day variability.
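
    A minimal sketch of the RPCA decomposition (principal component pursuit via an inexact augmented Lagrangian scheme) is given below; the parameter choices follow common defaults and are assumptions, not the settings of this study.

    ```python
    # Robust PCA: decompose M into low-rank L (background) + sparse S
    # (here standing in for the emotion-relevant dynamics).
    import numpy as np

    def rpca(M, max_iter=500, tol=1e-7):
        m, n = M.shape
        lam = 1.0 / np.sqrt(max(m, n))
        mu = m * n / (4.0 * np.abs(M).sum())
        S = np.zeros_like(M); Y = np.zeros_like(M)
        shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
        for _ in range(max_iter):
            # Singular value thresholding for the low-rank part.
            U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
            L = U @ np.diag(shrink(s, 1.0 / mu)) @ Vt
            # Entry-wise soft thresholding for the sparse part.
            S = shrink(M - L + Y / mu, lam / mu)
            R = M - L - S
            Y += mu * R
            if np.linalg.norm(R) <= tol * np.linalg.norm(M):
                break
        return L, S

    rng = np.random.default_rng(4)
    low_rank = rng.normal(size=(60, 5)) @ rng.normal(size=(5, 40))
    sparse = (rng.random((60, 40)) < 0.05) * rng.normal(0, 10, (60, 40))
    L, S = rpca(low_rank + sparse)
    print(np.linalg.norm(L - low_rank) / np.linalg.norm(low_rank))  # small
    ```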

  10. Morbidity Assessment in Surgery: Refinement Proposal Based on a Concept of Perioperative Adverse Events

    PubMed Central

    Kazaryan, Airazat M.; Røsok, Bård I.; Edwin, Bjørn

    2013-01-01

    Background. Morbidity is a cornerstone of assessing surgical treatment; nevertheless, surgeons have not reached extensive consensus on this problem. Methods and Findings. Clavien, Dindo, and Strasberg with coauthors (1992, 2004, 2009, and 2010) made significant efforts toward the standardization of surgical morbidity (the Clavien-Dindo-Strasberg classification; last revision, the Accordion classification). However, this classification includes only postoperative complications and has two principal shortcomings: disregard of intraoperative events and confusing terminology. Postoperative events have a major impact on patient well-being. However, intraoperative events should also be recorded and reported even if they do not evidently affect the patient's postoperative well-being. The term surgical complication as applied in the Clavien-Dindo-Strasberg classification may be regarded as an incident resulting in a complication caused by technical failure of surgery, in contrast to the so-called medical complications. Therefore, the term surgical complication contributes to misinterpretation of perioperative morbidity. The term perioperative adverse events, comprising both intraoperative unfavourable incidents and postoperative complications, could be regarded as a better alternative. In 2005, Satava suggested a simple grading to evaluate intraoperative surgical errors. Based on that approach, we have elaborated a 3-grade classification of intraoperative incidents so that it can be used to grade intraoperative events of any type of surgery. Refinements have been made to the Accordion classification of postoperative complications. Interpretation. The proposed systematization of perioperative adverse events, utilizing the combined application of two appraisal tools, that is, the elaborated classification of intraoperative incidents on the basis of the Satava approach to surgical error evaluation together with the modified Accordion classification of postoperative complications, appears to be an effective tool for comprehensive assessment of surgical outcomes. This concept was validated in regard to various surgical procedures. Broad implementation of this approach will promote the development of surgical science and practice. PMID:23762627

  11. On decomposing stimulus and response waveforms in event-related potentials recordings.

    PubMed

    Yin, Gang; Zhang, Jun

    2011-06-01

    Event-related potentials (ERPs) reflect the brain activities related to specific behavioral events, and are obtained by averaging across many trial repetitions with individual trials aligned to the onset of a specific event, e.g., the onset of the stimulus (s-aligned) or the onset of the behavioral response (r-aligned). However, the s-aligned and r-aligned ERP waveforms do not purely reflect, respectively, the underlying stimulus (S-) or response (R-) component waveform, due to their cross-contamination in the recorded ERP waveforms. Zhang [J. Neurosci. Methods, 80, pp. 49-63, 1998] proposed an algorithm to recover the pure S-component waveform and the pure R-component waveform from the s-aligned and r-aligned ERP average waveforms; however, due to the nature of this inverse problem, a direct solution is sensitive to noise that disproportionately affects low-frequency components, hindering the practical implementation of this algorithm. Here, we apply the Wiener deconvolution technique to deal with noise in the input data, and investigate a Tikhonov regularization approach to obtain a stable solution that is robust against variance in the sampling of the reaction-time distribution (when the number of trials is low). Our method is demonstrated using data from a Go/NoGo experiment about image classification and recognition.
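
    The Wiener deconvolution step lends itself to a short sketch: recover a component waveform from an observed average and a known smearing kernel (here a stand-in for the reaction-time distribution), with a noise-to-signal ratio acting as the regularizer. The kernel and NSR value are illustrative assumptions.

    ```python
    # FFT-based Wiener deconvolution with a scalar noise-to-signal ratio.
    import numpy as np

    def wiener_deconvolve(observed, kernel, nsr=1e-3):
        n = len(observed)
        H = np.fft.fft(kernel, n)
        # Wiener filter: conj(H) / (|H|^2 + NSR), damping bands where H is weak.
        G = np.conj(H) / (np.abs(H) ** 2 + nsr)
        return np.real(np.fft.ifft(np.fft.fft(observed) * G))

    # Toy usage: smear a pulse with a "reaction-time" kernel, then deconvolve.
    t = np.arange(256)
    component = np.exp(-0.5 * ((t - 60) / 5.0) ** 2)            # pure component
    rt = np.exp(-0.5 * ((t - 20) / 8.0) ** 2); rt /= rt.sum()   # RT distribution
    observed = np.real(np.fft.ifft(np.fft.fft(component) * np.fft.fft(rt)))
    recovered = wiener_deconvolve(observed, rt)
    print(np.abs(recovered - component).max())  # small reconstruction error
    ```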

  12. Machine learning algorithms for meteorological event classification in the coastal area using in-situ data

    NASA Astrophysics Data System (ADS)

    Sokolov, Anton; Gengembre, Cyril; Dmitriev, Egor; Delbarre, Hervé

    2017-04-01

    We consider the problem of classifying local atmospheric meteorological events in coastal areas, such as sea breezes, fogs, and storms. In-situ meteorological data such as wind speed and direction, temperature, humidity, and turbulence are used as predictors. Local atmospheric events of 2013-2014 were analysed manually to train classification algorithms for the coastal area of the English Channel at Dunkirk (France). Ultrasonic anemometer data and LIDAR wind profiler data were then used as predictors. Several algorithms were applied to determine meteorological events from local data: a decision tree, a nearest-neighbour classifier, and a support vector machine. A comparison of the classification algorithms was carried out, and the most important predictors for each event type were determined. It was shown that in more than 80 percent of the cases machine learning algorithms detect the meteorological class correctly. We expect that this methodology could also be applied to classify events using climatological in-situ data or modelling data. It allows estimating the frequency of each event type in the perspective of climate change.
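
    A sketch of such a comparison with generic scikit-learn models is shown below on a stand-in feature table (the real study used manually labelled 2013-2014 coastal events); all data here are synthetic assumptions.

    ```python
    # Compare a decision tree, k-NN and an SVM by cross-validated accuracy.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(5)
    X = rng.normal(size=(300, 5))                  # 5 in-situ predictors (stand-in)
    y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # stand-in event labels

    for name, clf in [("decision tree", DecisionTreeClassifier(max_depth=5)),
                      ("k-NN", KNeighborsClassifier(n_neighbors=7)),
                      ("SVM", SVC(kernel="rbf", C=1.0))]:
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"{name}: mean accuracy {scores.mean():.2f}")
    ```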

  13. A Classification Scheme for Smart Manufacturing Systems’ Performance Metrics

    PubMed Central

    Lee, Y. Tina; Kumaraguru, Senthilkumaran; Jain, Sanjay; Robinson, Stefanie; Helu, Moneer; Hatim, Qais Y.; Rachuri, Sudarsan; Dornfeld, David; Saldana, Christopher J.; Kumara, Soundar

    2017-01-01

    This paper proposes a classification scheme for performance metrics for smart manufacturing systems. The discussion focuses on three such metrics: agility, asset utilization, and sustainability. For each of these metrics, we discuss classification themes, which we then use to develop a generalized classification scheme. In addition to the themes, we discuss a conceptual model that may form the basis for the information necessary for performance evaluations. Finally, we present future challenges in developing robust, performance-measurement systems for real-time, data-intensive enterprises. PMID:28785744

  14. Orientation selectivity based structure for texture classification

    NASA Astrophysics Data System (ADS)

    Wu, Jinjian; Lin, Weisi; Shi, Guangming; Zhang, Yazhong; Lu, Liu

    2014-10-01

    Local structure, e.g., the local binary pattern (LBP), is widely used in texture classification. However, LBP is too sensitive to disturbance. In this paper, we introduce a novel structure for texture classification. Research in cognitive neuroscience indicates that the primary visual cortex presents remarkable orientation selectivity for visual information extraction. Inspired by this, we investigate the orientation similarities among neighbor pixels, and propose an orientation selectivity based pattern for local structure description. Experimental results on texture classification demonstrate that the proposed structure descriptor is quite robust to disturbance.

  15. Shift-invariant discrete wavelet transform analysis for retinal image classification.

    PubMed

    Khademi, April; Krishnan, Sridhar

    2007-12-01

    This work addresses retinal image classification, for which a novel analysis system was developed. From the compressed domain, the proposed scheme extracts textural features from wavelet coefficients, which describe the relative homogeneity of localized areas of the retinal images. Since the discrete wavelet transform (DWT) is shift-variant, a shift-invariant DWT was explored to ensure that a robust feature set was extracted. To combat the small database size, linear discriminant analysis classification was used with the leave-one-out method. 38 normal and 48 abnormal images (exudates, large drusens, fine drusens, choroidal neovascularization, central vein and artery occlusion, histoplasmosis, arteriosclerotic retinopathy, hemi-central retinal vein occlusion and more) were used, and a specificity of 79% and sensitivity of 85.4% were achieved (the average classification rate is 82.2%). The success of the system can be attributed to the highly robust feature set, which included translation-, scale- and semi-rotation-invariant features. Additionally, this technique is database independent since the features were specifically tuned to the pathologies of the human eye.
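
    A hedged sketch of shift-invariant feature extraction with the stationary (undecimated) wavelet transform follows; the choice of subband-energy features and wavelet is an illustrative assumption, not the paper's exact feature set.

    ```python
    # Subband energies from the stationary wavelet transform (PyWavelets).
    import numpy as np
    import pywt

    def swt_energy_features(image, wavelet="db2", level=2):
        feats = []
        for _, (cH, cV, cD) in pywt.swt2(image, wavelet, level=level):
            for band in (cH, cV, cD):
                feats.append(np.mean(band ** 2))   # subband energy (homogeneity cue)
        return np.array(feats)

    img = np.random.default_rng(6).normal(size=(64, 64))
    f1 = swt_energy_features(img)
    f2 = swt_energy_features(np.roll(img, 3, axis=1))  # translated copy
    # Should be ~0: undecimated subband energies are insensitive to translation.
    print(np.abs(f1 - f2).max())
    ```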

  16. Rare earth elements minimal harvest year variation facilitates robust geographical origin discrimination: The case of PDO "Fava Santorinis".

    PubMed

    Drivelos, Spiros A; Danezis, Georgios P; Haroutounian, Serkos A; Georgiou, Constantinos A

    2016-12-15

    This study examines the trace and rare earth elemental (REE) fingerprint variations of PDO (Protected Designation of Origin) "Fava Santorinis" over three consecutive harvesting years (2011-2013). Classification of samples in harvesting years was studied by performing discriminant analysis (DA), k nearest neighbours (κ-NN), partial least squares (PLS) analysis and probabilistic neural networks (PNN) using rare earth elements and trace metals determined using ICP-MS. DA performed better than κ-NN, producing 100% discrimination using trace elements and 79% using REEs. PLS was found to be superior to PNN, achieving 99% and 90% classification for trace and REEs, respectively, while PNN achieved 96% and 71% classification for trace and REEs, respectively. The information obtained using REEs did not enhance classification, indicating that REEs vary minimally per harvesting year, providing robust geographical origin discrimination. The results show that seasonal patterns can occur in the elemental composition of "Fava Santorinis", probably reflecting seasonality of climate. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Voxel classification based airway tree segmentation

    NASA Astrophysics Data System (ADS)

    Lo, Pechin; de Bruijne, Marleen

    2008-03-01

    This paper presents a voxel classification based method for segmenting the human airway tree in volumetric computed tomography (CT) images. In contrast to standard methods that use only voxel intensities, our method uses a more complex appearance model based on a set of local image appearance features and K-nearest-neighbor (KNN) classification. The optimal set of features for classification is selected automatically from a large set of features describing the local image structure at several scales. The use of multiple features enables the appearance model to differentiate between airway tree voxels and other voxels of similar intensities in the lung, thus making the segmentation robust to pathologies such as emphysema. The classifier is trained on imperfect segmentations that can easily be obtained using region growing with a manual threshold selection. Experiments show that the proposed method results in a more robust segmentation that can grow into the smaller airway branches without leaking into emphysematous areas, and is able to segment many branches that are not present in the training set.

  18. CIFAR10-DVS: An Event-Stream Dataset for Object Classification

    PubMed Central

    Li, Hongmin; Liu, Hanchao; Ji, Xiangyang; Li, Guoqi; Shi, Luping

    2017-01-01

    Neuromorphic vision research requires high-quality and appropriately challenging event-stream datasets to support continuous improvement of algorithms and methods. However, creating event-stream datasets is a time-consuming task, since the data need to be recorded using neuromorphic cameras. Currently, there are limited event-stream datasets available. In this work, by utilizing the popular computer vision dataset CIFAR-10, we converted 10,000 frame-based images into 10,000 event streams using a dynamic vision sensor (DVS), providing an event-stream dataset of intermediate difficulty in 10 different classes, named as “CIFAR10-DVS.” The conversion to an event-stream dataset was implemented by a repeated closed-loop smooth (RCLS) movement of the frame-based images. Unlike conversions that move the camera over static frame-based images, the image movement is more realistic with respect to practical applications. The repeated closed-loop image movement generates rich local intensity changes in continuous time which are quantized by each pixel of the DVS camera to generate events. Furthermore, a performance benchmark in event-driven object classification is provided based on state-of-the-art classification algorithms. This work provides a large event-stream dataset and an initial benchmark for comparison, which may boost algorithm developments in event-driven pattern recognition and object classification. PMID:28611582

  19. CIFAR10-DVS: An Event-Stream Dataset for Object Classification.

    PubMed

    Li, Hongmin; Liu, Hanchao; Ji, Xiangyang; Li, Guoqi; Shi, Luping

    2017-01-01

    Neuromorphic vision research requires high-quality and appropriately challenging event-stream datasets to support continuous improvement of algorithms and methods. However, creating event-stream datasets is a time-consuming task, since the data need to be recorded using neuromorphic cameras. Currently, there are limited event-stream datasets available. In this work, by utilizing the popular computer vision dataset CIFAR-10, we converted 10,000 frame-based images into 10,000 event streams using a dynamic vision sensor (DVS), providing an event-stream dataset of intermediate difficulty in 10 different classes, named as "CIFAR10-DVS." The conversion to an event-stream dataset was implemented by a repeated closed-loop smooth (RCLS) movement of the frame-based images. Unlike conversions that move the camera over static frame-based images, the image movement is more realistic with respect to practical applications. The repeated closed-loop image movement generates rich local intensity changes in continuous time which are quantized by each pixel of the DVS camera to generate events. Furthermore, a performance benchmark in event-driven object classification is provided based on state-of-the-art classification algorithms. This work provides a large event-stream dataset and an initial benchmark for comparison, which may boost algorithm developments in event-driven pattern recognition and object classification.

  20. Identification and characterization of earthquake clusters: a comparative analysis for selected sequences in Italy

    NASA Astrophysics Data System (ADS)

    Peresan, Antonella; Gentili, Stefania

    2017-04-01

    Identification and statistical characterization of seismic clusters may provide useful insights about the features of seismic energy release and their relation to physical properties of the crust within a given region. Moreover, a number of studies based on spatio-temporal analysis of main-shock occurrence require preliminary declustering of the earthquake catalogs. Since various methods, relying on different physical/statistical assumptions, may lead to diverse classifications of earthquakes into main events and related events, we aim to investigate the differences in classification among declustering techniques. Accordingly, a formal selection and comparative analysis of earthquake clusters is carried out for the most relevant earthquakes in North-Eastern Italy, as reported in the local OGS-CRS bulletins, compiled at the National Institute of Oceanography and Experimental Geophysics since 1977. The comparison is then extended to selected earthquake sequences associated with a different seismotectonic setting, namely to events that occurred in the region struck by the recent Central Italy destructive earthquakes, making use of INGV data. Various techniques, ranging from classical space-time window methods to ad hoc manual identification of aftershocks, are applied for detection of earthquake clusters. In particular, a statistical method based on nearest-neighbor distances of events in the space-time-energy domain is considered. Results from cluster identification by the nearest-neighbor method turn out to be quite robust with respect to the time span of the input catalogue, as well as to the minimum magnitude cutoff. The identified clusters for the largest events reported in North-Eastern Italy since 1977 are well consistent with those reported in earlier studies, which were aimed at detailed manual aftershock identification. The study shows that the data-driven approach, based on nearest-neighbor distances, can be satisfactorily applied to decompose the seismic catalog into background seismicity and individual sequences of earthquake clusters, also in areas characterized by moderate seismic activity, where standard declustering techniques may turn out to be rather gross approximations. Building on these results, the main statistical features of seismic clusters are explored, including the complex interdependence of related events, with the aim to characterize the space-time patterns of earthquake occurrence in North-Eastern Italy and capture their basic differences with the Central Italy sequences.
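
    The nearest-neighbor space-time-energy metric referred to above can be sketched compactly (a Baiesi-Paczuski/Zaliapin-style proximity); the fractal dimension and b-value below are typical assumed constants, and the catalog is a toy example.

    ```python
    # For each event j, find the parent i minimizing
    # eta = t_ij * r_ij**df * 10**(-b * m_i) over earlier events i.
    import numpy as np

    def nearest_neighbor_parents(times, x, y, mags, df=1.6, b=1.0):
        """times in days (sorted), x/y in km, mags in magnitude units."""
        n = len(times)
        parents, etas = np.full(n, -1), np.full(n, np.inf)
        for j in range(1, n):
            dt = times[j] - times[:j]                    # interevent times
            r = np.hypot(x[j] - x[:j], y[j] - y[:j])     # epicentral distances
            eta = dt * np.maximum(r, 1e-3) ** df * 10.0 ** (-b * mags[:j])
            parents[j], etas[j] = int(np.argmin(eta)), eta.min()
        # Small eta: event clusters with its parent; large eta: background.
        return parents, etas

    t = np.array([0.0, 0.01, 0.02, 30.0])    # days
    x = np.array([0.0, 0.5, 0.3, 80.0])      # km
    y = np.array([0.0, 0.2, 0.4, 50.0])      # km
    m = np.array([5.0, 3.0, 3.2, 4.0])
    print(nearest_neighbor_parents(t, x, y, m))  # early nearby events attach to the mainshock
    ```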

  1. The Explosive Universe with Gaia

    NASA Astrophysics Data System (ADS)

    Wyrzykowski, Łukasz; Hodgkin, Simon T.; Blagorodnova, Nadejda; Belokurov, Vasily

    2014-01-01

    The Gaia mission will observe the entire sky for 5 years providing ultra-precise astrometric, photometric and spectroscopic measurements for a billion stars in the Galaxy. Hence, naturally, Gaia becomes an all-sky multi-epoch photometric survey, which will monitor and detect variability with millimag precision as well as new transient sources such as supernovae, novae, microlensing events, tidal disruption events, asteroids, among others. The Gaia data-flow allows for quick detection of anomalies within 24-48h after the observation. Such a near-real-time survey will be able to detect about 6000 supernovae brighter than 19 mag up to redshifts of z ≈ 0.15. The on-board low-resolution (R ≈ 100) spectrograph will allow for early and robust classification of transients and minimise the false-alert rate, even providing estimates of redshift for supernovae. Gaia will also offer a unique possibility for detecting astrometric shifts in microlensing events, which, combined with Gaia's and ground-based photometry, will provide unique mass measurements of lenses, constraints on the dark matter content in the Milky Way and possible detections of free-floating black holes. Alerts from Gaia will be publicly available soon after the detection is verified and tested. First alerts are expected early in 2014 and those will be used for ground-based verification. All facilities are invited to join the verification and the follow-up effort. Alerts will be published on a web page, via Skyalert.org and via an emailing list. Each alert will contain coordinates, the Gaia light curve and low-resolution spectra, classification and cross-matching results. More information on the Gaia Science Alerts can be found here: http://www.ast.cam.ac.uk/ioa/wikis/gsawgwiki/ The full version of the poster is available here: http://www.ast.cam.ac.uk/ioa/wikis/gsawgwiki/images/1/13/GaiaAlertsPosterIAUS298.pdf

  2. Video mining using combinations of unsupervised and supervised learning techniques

    NASA Astrophysics Data System (ADS)

    Divakaran, Ajay; Miyahara, Koji; Peker, Kadir A.; Radhakrishnan, Regunathan; Xiong, Ziyou

    2003-12-01

    We discuss the meaning and significance of the video mining problem, and present our work on some aspects of video mining. A simple definition of video mining is unsupervised discovery of patterns in audio-visual content. Such purely unsupervised discovery is readily applicable to video surveillance as well as to consumer video browsing applications. We interpret video mining as content-adaptive or "blind" content processing, in which the first stage is content characterization and the second stage is event discovery based on the characterization obtained in stage 1. We discuss the target applications and find that purely unsupervised approaches are too computationally complex to implement on our product platform. We then describe various combinations of unsupervised and supervised learning techniques that help discover patterns that are useful to the end-user of the application. We target consumer video browsing applications such as commercial message detection, sports highlights extraction, etc. We employ both audio and video features. We find that supervised audio classification combined with unsupervised unusual event discovery enables accurate supervised detection of desired events. Our techniques are computationally simple and robust to common variations in production styles etc.

  3. Development of a Classification Scheme for Examining Adverse Events Associated with Medical Devices, Specifically the DaVinci Surgical System as Reported in the FDA MAUDE Database.

    PubMed

    Gupta, Priyanka; Schomburg, John; Krishna, Suprita; Adejoro, Oluwakayode; Wang, Qi; Marsh, Benjamin; Nguyen, Andrew; Genere, Juan Reyes; Self, Patrick; Lund, Erik; Konety, Badrinath R

    2017-01-01

    To examine the Manufacturer and User Facility Device Experience (MAUDE) database to capture adverse events experienced with the Da Vinci Surgical System, and to design a standardized classification system to categorize the complications and machine failures associated with the device. Overall, 1,057,000 DaVinci procedures were performed in the United States between 2009 and 2012. Currently, no system exists for classifying and comparing device-related errors and complications with which to evaluate adverse events associated with the Da Vinci Surgical System. The MAUDE database was queried for event reports related to the DaVinci Surgical System between the years 2009 and 2012. A classification system was developed and tested among 14 robotic surgeons to associate a level of severity with each event and its relationship to the DaVinci Surgical System. Events were then classified according to this system and examined by using Chi-square analysis. Two thousand eight hundred thirty-seven events were identified, of which 34% were obstetrics and gynecology (Ob/Gyn); 19%, urology; 11%, other; and 36%, not specified. Our classification system had moderate agreement, with a Kappa score of 0.52. Using our classification system, we identified 75% of the events as mild, 18% as moderate, 4% as severe, and 3% as life threatening or resulting in death. Seventy-seven percent were classified as definitely related to the device, 15% as possibly related, and 8% as not related. Urology procedures compared with Ob/Gyn were associated with more severe events (38% vs 26%, p < 0.0001). Energy instruments were associated with less severe events compared with the surgical system (8% vs 87%, p < 0.0001). Events that were definitely associated with the device tended to be less severe (81% vs 19%, p < 0.0001). Our classification system is a valid tool with moderate inter-rater agreement that can be used to better understand device-related adverse events. The majority of robot-related events were mild but associated with the device.

  4. Naïve and Robust: Class-Conditional Independence in Human Classification Learning

    ERIC Educational Resources Information Center

    Jarecki, Jana B.; Meder, Björn; Nelson, Jonathan D.

    2018-01-01

    Humans excel in categorization. Yet from a computational standpoint, learning a novel probabilistic classification task involves severe computational challenges. The present paper investigates one way to address these challenges: assuming class-conditional independence of features. This feature independence assumption simplifies the inference…

  5. Leveraging Long-term Seismic Catalogs for Automated Real-time Event Classification

    NASA Astrophysics Data System (ADS)

    Linville, L.; Draelos, T.; Pankow, K. L.; Young, C. J.; Alvarez, S.

    2017-12-01

    We investigate the use of labeled event types available through reviewed seismic catalogs to produce automated event labels on new incoming data from the crustal region spanned by the cataloged events. Using events cataloged by the University of Utah Seismograph Stations between October 2012 and June 2017, we calculate the spectrogram for a time window that spans the duration of each event as seen on individual stations, resulting in 110k event spectrograms (50% local earthquake examples, 50% quarry blast examples). Using 80% of the randomized example events (~90k), a classifier is trained to distinguish between local earthquakes and quarry blasts. We explore variations of deep learning classifiers, incorporating elements of convolutional and recurrent neural networks. Using a single-layer Long Short Term Memory recurrent neural network, we achieve 92% accuracy on the classification task on the remaining ~20k test examples. Leveraging the decisions from a group of stations that detected the same event by using the median of all classifications in the group increases the model accuracy to 96%. Additional data with equivalent processing from 500 more recently cataloged events (July 2017) achieve the same accuracy as our test data on both single-station examples and multi-station medians, suggesting that the model can maintain accurate and stable classification rates on real-time automated events local to the University of Utah Seismograph Stations, with potentially minimal levels of re-training through time.
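
    A hedged PyTorch sketch of such a single-layer LSTM classifier is given below: each spectrogram is fed as a sequence of per-time-step frequency vectors, the last hidden state feeds a two-class head, and a median across stations aggregates the decisions. All sizes are illustrative assumptions.

    ```python
    # Single-layer LSTM over spectrogram time steps, with station-median fusion.
    import torch
    import torch.nn as nn

    class SpectrogramLSTM(nn.Module):
        def __init__(self, n_freq_bins=64, hidden=128, n_classes=2):
            super().__init__()
            self.lstm = nn.LSTM(n_freq_bins, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, spec):            # spec: (batch, time, freq)
            _, (h, _) = self.lstm(spec)
            return self.head(h[-1])         # logits from the last hidden state

    model = SpectrogramLSTM()
    batch = torch.randn(8, 100, 64)         # 8 stations that saw the same event
    probs = torch.softmax(model(batch), dim=1)
    # Network-level decision: median class probability across stations.
    print(probs.median(dim=0).values)
    ```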

  6. Machine Learning Algorithms for Automatic Classification of Marmoset Vocalizations

    PubMed Central

    Ribeiro, Sidarta; Pereira, Danillo R.; Papa, João P.; de Albuquerque, Victor Hugo C.

    2016-01-01

    Automatic classification of vocalization type could potentially become a useful tool for the acoustic monitoring of captive colonies of highly vocal primates. However, for classification to be useful in practice, a reliable algorithm that can be successfully trained on small datasets is necessary. In this work, we consider seven different classification algorithms with the goal of finding a robust classifier that can be successfully trained on small datasets. We found good classification performance (accuracy > 0.83 and F1-score > 0.84) using the Optimum Path Forest classifier. The dataset and algorithms are made publicly available. PMID:27654941

  7. Comparison of different classification algorithms for underwater target discrimination.

    PubMed

    Li, Donghui; Azimi-Sadjadi, Mahmood R; Robinson, Marc

    2004-01-01

    Classification of underwater targets from acoustic backscattered signals is considered here. Several different classification algorithms are tested and benchmarked, not only for their performance but also to gain insight into the properties of the feature space. Results on a wideband 80-kHz acoustic backscattered data set collected for six different objects are presented in terms of the receiver operating characteristic (ROC) and the robustness of the classifiers with respect to reverberation.

  8. Geometry-based ensembles: toward a structural characterization of the classification boundary.

    PubMed

    Pujol, Oriol; Masip, David

    2009-06-01

    This paper introduces a novel binary discriminative learning technique based on the approximation of the nonlinear decision boundary by a piecewise linear smooth additive model. The decision border is geometrically defined by means of the characterizing boundary points, i.e., points that belong to the optimal boundary under a certain notion of robustness. Based on these points, a set of locally robust linear classifiers is defined and assembled by means of a Tikhonov regularized optimization procedure in an additive model to create a final lambda-smooth decision rule. As a result, a very simple and robust classifier with a strong geometrical meaning and nonlinear behavior is obtained. The simplicity of the method allows its extension to cope with some of today's machine learning challenges, such as online learning, large-scale learning or parallelization, with linear computational complexity. We validate our approach on the UCI database, comparing with several state-of-the-art classification techniques. Finally, we apply our technique in online and large-scale scenarios and in six real-life computer vision and pattern recognition problems: gender recognition based on face images, intravascular ultrasound tissue classification, speed traffic sign detection, Chagas' disease myocardial damage severity detection, old musical scores clef classification, and action recognition using 3D accelerometer data from a wearable device. The results are promising and this paper opens a line of research that deserves further attention.

  9. Robust Averaging of Covariances for EEG Recordings Classification in Motor Imagery Brain-Computer Interfaces.

    PubMed

    Uehara, Takashi; Sartori, Matteo; Tanaka, Toshihisa; Fiori, Simone

    2017-06-01

    The estimation of covariance matrices is of prime importance to analyze the distribution of multivariate signals. In motor imagery-based brain-computer interfaces (MI-BCI), covariance matrices play a central role in the extraction of features from recorded electroencephalograms (EEGs); therefore, correctly estimating covariance is crucial for EEG classification. This letter discusses algorithms to average sample covariance matrices (SCMs) for the selection of the reference matrix in tangent space mapping (TSM)-based MI-BCI. Tangent space mapping is a powerful method of feature extraction and strongly depends on the selection of a reference covariance matrix. In general, the observed signals may include outliers; therefore, taking the geometric mean of SCMs as the reference matrix may not be the best choice. In order to deal with the effects of outliers, robust estimators have to be used. In particular, we discuss and test the use of geometric medians and trimmed averages (defined on the basis of several metrics) as robust estimators. The main idea behind trimmed averages is to eliminate data that exhibit the largest distance from the average covariance calculated on the basis of all available data. The results of the experiments show that while the geometric medians show little difference from conventional methods in terms of classification accuracy in the classification of electroencephalographic recordings, the trimmed averages show significant improvement for all subjects.
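
    A minimal sketch of the trimming idea, assuming the Euclidean (Frobenius) metric for distances between SCMs; the letter itself evaluates several metrics, including Riemannian ones, and the function name and trim fraction here are illustrative only.

        import numpy as np

        def trimmed_average_scm(scms, trim_fraction=0.1):
            """Trimmed average of sample covariance matrices (SCMs).

            scms: (n_trials, n_channels, n_channels).  Trials whose SCM is
            farthest (Frobenius distance) from the plain average are
            dropped before re-averaging, following the trimming idea.
            """
            scms = np.asarray(scms)
            dists = np.linalg.norm(scms - scms.mean(axis=0), ord='fro', axis=(1, 2))
            n_keep = max(1, int(round(len(scms) * (1.0 - trim_fraction))))
            keep = np.argsort(dists)[:n_keep]
            return scms[keep].mean(axis=0)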

  10. Robust evaluation of time series classification algorithms for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Harvey, Dustin Y.; Worden, Keith; Todd, Michael D.

    2014-03-01

    Structural health monitoring (SHM) systems provide real-time damage and performance information for civil, aerospace, and mechanical infrastructure through analysis of structural response measurements. The supervised learning methodology for data-driven SHM involves computation of low-dimensional, damage-sensitive features from raw measurement data that are then used in conjunction with machine learning algorithms to detect, classify, and quantify damage states. However, these systems often suffer from performance degradation in real-world applications due to varying operational and environmental conditions. Probabilistic approaches to robust SHM system design suffer from incomplete knowledge of all conditions a system will experience over its lifetime. Info-gap decision theory enables nonprobabilistic evaluation of the robustness of competing models and systems in a variety of decision making applications. Previous work employed info-gap models to handle feature uncertainty when selecting various components of a supervised learning system, namely features from a pre-selected family and classifiers. In this work, the info-gap framework is extended to robust feature design and classifier selection for general time series classification through an efficient, interval arithmetic implementation of an info-gap data model. Experimental results are presented for a damage type classification problem on a ball bearing in a rotating machine. The info-gap framework in conjunction with an evolutionary feature design system allows for fully automated design of a time series classifier to meet performance requirements under maximum allowable uncertainty.

  11. Incorporation of support vector machines in the LIBS toolbox for sensitive and robust classification amidst unexpected sample and system variability

    PubMed Central

    Dingari, Narahara Chari; Barman, Ishan; Myakalwar, Ashwin Kumar; Tewari, Surya P.; Kumar, G. Manoj

    2012-01-01

    Despite the intrinsic elemental analysis capability and lack of sample preparation requirements, laser-induced breakdown spectroscopy (LIBS) has not been extensively used for real world applications, e.g. quality assurance and process monitoring. Specifically, variability in sample, system and experimental parameters in LIBS studies presents a substantive hurdle for robust classification, even when standard multivariate chemometric techniques are used for analysis. Considering pharmaceutical sample investigation as an example, we propose the use of support vector machines (SVM) as a non-linear classification method over conventional linear techniques such as soft independent modeling of class analogy (SIMCA) and partial least-squares discriminant analysis (PLS-DA) for discrimination based on LIBS measurements. Using over-the-counter pharmaceutical samples, we demonstrate that application of SVM enables statistically significant improvements in prospective classification accuracy (sensitivity), due to its ability to address variability in LIBS sample ablation and plasma self-absorption behavior. Furthermore, our results reveal that SVM provides nearly 10% improvement in correct allocation rate and a concomitant reduction in misclassification rates of 75% (cf. PLS-DA) and 80% (cf. SIMCA) when measurements from samples not included in the training set are incorporated in the test data, highlighting its robustness. While further studies on a wider matrix of sample types performed using different LIBS systems are needed to fully characterize the capability of SVM to provide superior predictions, we anticipate that the improved sensitivity and robustness observed here will facilitate application of the proposed LIBS-SVM toolbox for screening drugs and detecting counterfeit samples as well as in related areas of forensic and biological sample analysis. PMID:22292496

  12. Incorporation of support vector machines in the LIBS toolbox for sensitive and robust classification amidst unexpected sample and system variability.

    PubMed

    Dingari, Narahara Chari; Barman, Ishan; Myakalwar, Ashwin Kumar; Tewari, Surya P; Kumar Gundawar, Manoj

    2012-03-20

    Despite the intrinsic elemental analysis capability and lack of sample preparation requirements, laser-induced breakdown spectroscopy (LIBS) has not been extensively used for real-world applications, e.g., quality assurance and process monitoring. Specifically, variability in sample, system, and experimental parameters in LIBS studies presents a substantive hurdle for robust classification, even when standard multivariate chemometric techniques are used for analysis. Considering pharmaceutical sample investigation as an example, we propose the use of support vector machines (SVM) as a nonlinear classification method over conventional linear techniques such as soft independent modeling of class analogy (SIMCA) and partial least-squares discriminant analysis (PLS-DA) for discrimination based on LIBS measurements. Using over-the-counter pharmaceutical samples, we demonstrate that the application of SVM enables statistically significant improvements in prospective classification accuracy (sensitivity), because of its ability to address variability in LIBS sample ablation and plasma self-absorption behavior. Furthermore, our results reveal that SVM provides nearly 10% improvement in correct allocation rate and a concomitant reduction in misclassification rates of 75% (cf. PLS-DA) and 80% (cf. SIMCA) when measurements from samples not included in the training set are incorporated in the test data, highlighting its robustness. While further studies on a wider matrix of sample types performed using different LIBS systems are needed to fully characterize the capability of SVM to provide superior predictions, we anticipate that the improved sensitivity and robustness observed here will facilitate application of the proposed LIBS-SVM toolbox for screening drugs and detecting counterfeit samples, as well as in related areas of forensic and biological sample analysis.
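
    The modeling choice at the heart of both records, a nonlinear SVM in place of linear chemometric techniques, can be sketched with scikit-learn as below; the spectra, labels, kernel and hyperparameters are placeholders, since the papers do not specify them here.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Placeholder LIBS spectra: rows are samples, columns are wavelengths.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 500))
        y = rng.integers(0, 3, size=60)   # three pharmaceutical classes

        # Nonlinear SVM in place of linear SIMCA/PLS-DA; scaling first, since
        # SVMs are sensitive to the relative magnitude of spectral channels.
        clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=10.0, gamma='scale'))
        print(cross_val_score(clf, X, y, cv=5).mean())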

  13. Multisample cross-validation of a model of childhood posttraumatic stress disorder symptomatology.

    PubMed

    Anthony, Jason L; Lonigan, Christopher J; Vernberg, Eric M; La Greca, Annette M; Silverman, Wendy K; Prinstein, Mitchell J

    2005-12-01

    This study is the latest advancement of our research aimed at best characterizing children's posttraumatic stress reactions. In a previous study, we compared existing nosologic and empirical models of PTSD dimensionality and determined the superior model was a hierarchical one with three symptom clusters (Intrusion/Active Avoidance, Numbing/Passive Avoidance, and Arousal; Anthony, Lonigan, & Hecht, 1999). In this study, we cross-validate this model in two populations. Participants were 396 fifth graders who were exposed to either Hurricane Andrew or Hurricane Hugo. Multisample confirmatory factor analysis demonstrated the model's factorial invariance across populations who experienced traumatic events that differed in severity. These results show the model's robustness to characterize children's posttraumatic stress reactions. Implications for diagnosis, classification criteria, and an empirically supported theory of PTSD are discussed.

  14. Experience with Event Timing Does not Alter Emergent Timing: Further Evidence for Robustness of Event and Emergent Timing.

    PubMed

    Pope, Megan A; Studenka, Breanna E

    2018-02-15

    Although event and emergent timing are thought of as mutually exclusive, significant correlations between tapping and circle drawing (Baer, Thibodeau, Gralnick, Li, & Penhune, 2013; Studenka, Zelaznik, & Balasubramaniam, 2012; Zelaznik & Rosenbaum, 2010) suggest that emergent timing may not be as robust as once thought. We aimed to test this hypothesis in both a younger (18-25) and an older (55-100) population. Participants performed one block of circle drawing as a baseline, then six blocks of tapping, followed by circle drawing. We examined the use of event timing. Our hypothesis that acute experience with event timing would bias an individual to use event timing during an emergent task was not supported. We instead support the robustness of event and emergent timing as independent timing modes.

  15. Diagnosis of streamflow prediction skills in Oregon using Hydrologic Landscape Classification

    EPA Science Inventory

    A complete understanding of why rainfall-runoff models provide good streamflow predictions at catchments in some regions, but fail to do so in other regions, has still not been achieved. Here, we argue that a hydrologic classification system is a robust conceptual tool that is w...

  16. Where and why do models fail? Perspectives from Oregon Hydrologic Landscape classification

    EPA Science Inventory

    A complete understanding of why rainfall-runoff models provide good streamflow predictions at catchments in some regions, but fail to do so in other regions, has still not been achieved. Here, we argue that a hydrologic classification system is a robust conceptual tool that is w...

  17. Robust Quantum Computing using Molecules with Switchable Dipole

    DTIC Science & Technology

    2010-06-15

    SF 298 report documentation fragments: Robust quantum computing using molecules with switchable dipole. Abstract (truncated): Of the many systems studied to... Subject terms: ultracold polar molecules, quantum computing, phase gates. Sponsor address: Research Office, P.O. Box 12211, Research Triangle Park, NC 27709-2211. Reporting period: 30-Aug-2006 to 31-Aug-2009.

  18. Robust feature detection and local classification for surfaces based on moment analysis.

    PubMed

    Clarenz, Ulrich; Rumpf, Martin; Telea, Alexandru

    2004-01-01

    The stable local classification of discrete surfaces with respect to features such as edges and corners or concave and convex regions, respectively, is quite difficult as well as indispensable for many surface processing applications. Usually, feature detection is done via a local curvature analysis. If concerned with large triangular and irregular grids, e.g., generated via a marching cubes algorithm, the detectors are tedious to treat and a robust classification is hard to achieve. Here, a local classification method on surfaces is presented which avoids the evaluation of discretized curvature quantities. Moreover, it provides an indicator for the smoothness of a given discrete surface and comes with a built-in multiscale. The proposed classification tool is based on local zero and first moments on the discrete surface. The corresponding integral quantities are stable to compute and they give less noisy results compared to discrete curvature quantities. The stencil width for the integration of the moments turns out to be the scale parameter. Prospective surface processing applications are segmentation on surfaces, surface comparison and matching, and surface modeling. Here, a method for feature-preserving fairing of surfaces is discussed to underline the applicability of the presented approach.

  19. Unified framework for triaxial accelerometer-based fall event detection and classification using cumulants and hierarchical decision tree classifier.

    PubMed

    Kambhampati, Satya Samyukta; Singh, Vishal; Manikandan, M Sabarimalai; Ramkumar, Barathram

    2015-08-01

    In this Letter, the authors present a unified framework for fall event detection and classification using the cumulants extracted from the acceleration (ACC) signals acquired using a single waist-mounted triaxial accelerometer. The main objective of this Letter is to find suitable representative cumulants and classifiers for effectively detecting and classifying different types of fall and non-fall events. In the proposed hierarchical decision tree algorithm, the first level implements fall detection using fifth-order cumulants and a support vector machine (SVM) classifier. In the second level, the fall event classification algorithm uses the fifth-order cumulants and SVM. Finally, human activity classification is performed using the second-order cumulants and SVM. The detection and classification results are compared with those of the decision tree, naive Bayes, multilayer perceptron and SVM classifiers with different types of time-domain features including the second-, third-, fourth- and fifth-order cumulants and the signal magnitude vector and signal magnitude area. The experimental results demonstrate that the second- and fifth-order cumulant features and SVM classifier can achieve optimal detection and classification rates of above 95%, as well as the lowest false alarm rate of 1.03%.
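
    A minimal sketch of the first level of such a hierarchy, assuming windowed acceleration-magnitude signals; the cumulant formulas are the standard ones for a zero-mean signal, while the windowing, data and SVM settings are hypothetical.

        import numpy as np
        from sklearn.svm import SVC

        def cumulant_features(window):
            """Second- and fifth-order cumulants of one ACC magnitude window."""
            x = window - window.mean()
            m2, m3, m5 = (np.mean(x**k) for k in (2, 3, 5))
            k2 = m2                   # 2nd-order cumulant (variance)
            k5 = m5 - 10.0 * m2 * m3  # 5th-order cumulant of a zero-mean signal
            return np.array([k2, k5])

        # Level 1 of the hierarchy (fall vs. non-fall), on hypothetical data:
        # X = np.vstack([cumulant_features(w) for w in windows])
        # fall_detector = SVC(kernel='rbf').fit(X, y)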

  20. Detecting single-trial EEG evoked potential using a wavelet domain linear mixed model: application to error potentials classification.

    PubMed

    Spinnato, J; Roubaud, M-C; Burle, B; Torrésani, B

    2015-06-01

    The main goal of this work is to develop a model for multisensor signals, such as magnetoencephalography or electroencephalography (EEG) signals, that accounts for inter-trial variability and is suitable for corresponding binary classification problems. An important constraint is that the model be simple enough to handle small and unbalanced datasets, as often encountered in BCI-type experiments. The method involves the linear mixed effects statistical model, wavelet transform, and spatial filtering, and aims at the characterization of localized discriminant features in multisensor signals. After discrete wavelet transform and spatial filtering, a projection onto the relevant wavelet and spatial channels subspaces is used for dimension reduction. The projected signals are then decomposed as the sum of a signal of interest (i.e., discriminant) and background noise, using a very simple Gaussian linear mixed model. Thanks to the simplicity of the model, the corresponding parameter estimation problem is simplified. Robust estimates of class-covariance matrices are obtained from small sample sizes and an effective Bayes plug-in classifier is derived. The approach is applied to the detection of error potentials in multichannel EEG data in a very unbalanced situation (detection of rare events). Classification results prove the relevance of the proposed approach in such a context. The combination of the linear mixed model, wavelet transform and spatial filtering for EEG classification is, to the best of our knowledge, an original approach, which is proven to be effective. This paper improves upon earlier results on similar problems, and the three main ingredients all play an important role.

  1. Robustness analysis of a green chemistry-based model for the classification of silver nanoparticles synthesis processes

    EPA Science Inventory

    This paper proposes a robustness analysis based on Multiple Criteria Decision Aiding (MCDA). The ensuing model was used to assess the implementation of green chemistry principles in the synthesis of silver nanoparticles. Its recommendations were also compared to an earlier develo...

  2. New FIGO and Swedish intrapartum cardiotocography classification systems incorporated in the fetal ECG ST analysis (STAN) interpretation algorithm: agreements and discrepancies in cardiotocography classification and evaluation of significant ST events.

    PubMed

    Olofsson, Per; Norén, Håkan; Carlsson, Ann

    2018-02-01

    The updated intrapartum cardiotocography (CTG) classification system by FIGO in 2015 (FIGO2015) and the FIGO2015-approached classification by the Swedish Society of Obstetricians and Gynecologist in 2017 (SSOG2017) are not harmonized with the fetal ECG ST analysis (STAN) algorithm from 2007 (STAN2007). The study aimed to reveal homogeneity and agreement between the systems in classifying CTG and ST events, and relate them to maternal and perinatal outcomes. Among CTG traces with ST events, 100 traces originally classified as normal, 100 as suspicious and 100 as pathological were randomly selected from a STAN database and classified by two experts in consensus. Homogeneity and agreement statistics between the CTG classifications were performed. Maternal and perinatal outcomes were evaluated in cases with clinically hidden ST data (n = 151). A two-tailed p < 0.05 was regarded as significant. For CTG classes, the heterogeneity was significant between the old and new systems, and agreements were moderate to strong (proportion of agreement, kappa index 0.70-0.86). Between the new classifications, heterogeneity was significant and agreements strong (0.90, 0.92). For significant ST events, heterogeneities were significant and agreements moderate to almost perfect (STAN2007 vs. FIGO2015 0.86, 0.72; STAN2007 vs. SSOG2017 0.92, 0.84; FIGO2015 vs. SSOG2017 0.94, 0.87). Significant ST events occurred more often combined with STAN2007 than with FIGO2015 classification, but not with SSOG2017; correct identification of adverse outcomes was not significantly different between the systems. There are discrepancies in the classification of CTG patterns and significant ST events between the old and new systems. The clinical relevance of the findings remains to be shown. © 2017 The Authors. Acta Obstetricia et Gynecologica Scandinavica published by John Wiley & Sons Ltd on behalf of Nordic Federation of Societies of Obstetrics and Gynecology (NFOG).
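
    The agreement statistics used here are standard and easy to reproduce; the sketch below, with hypothetical paired classifications, computes proportion of agreement alongside scikit-learn's Cohen kappa for a pair of systems (the study reports proportion of agreement and kappa indices).

        from sklearn.metrics import cohen_kappa_score

        # Hypothetical paired classifications of the same CTG traces,
        # coded 0 = normal, 1 = suspicious, 2 = pathological.
        figo2015 = [0, 1, 2, 2, 1, 0, 2, 1]
        ssog2017 = [0, 1, 2, 1, 1, 0, 2, 2]

        agreement = sum(a == b for a, b in zip(figo2015, ssog2017)) / len(figo2015)
        print(agreement, cohen_kappa_score(figo2015, ssog2017))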

  3. Is overall similarity classification less effortful than single-dimension classification?

    PubMed

    Wills, Andy J; Milton, Fraser; Longmore, Christopher A; Hester, Sarah; Robinson, Jo

    2013-01-01

    It is sometimes argued that the implementation of an overall similarity classification is less effortful than the implementation of a single-dimension classification. In the current article, we argue that the evidence securely in support of this view is limited, and report additional evidence in support of the opposite proposition: overall similarity classification is more effortful than single-dimension classification. Using a match-to-standards procedure, Experiments 1A, 1B and 2 demonstrate that concurrent load reduces the prevalence of overall similarity classification, and that this effect is robust to changes in the concurrent load task employed, the level of time pressure experienced, and the short-term memory requirements of the classification task. Experiment 3 demonstrates that participants who produced overall similarity classifications from the outset have larger working memory capacities than those who produced single-dimension classifications initially, and Experiment 4 demonstrates that instructions to respond meticulously increase the prevalence of overall similarity classification.

  4. Common spatial pattern combined with kernel linear discriminate and generalized radial basis function for motor imagery-based brain computer interface applications

    NASA Astrophysics Data System (ADS)

    Hekmatmanesh, Amin; Jamaloo, Fatemeh; Wu, Huapeng; Handroos, Heikki; Kilpeläinen, Asko

    2018-04-01

    Brain Computer Interface (BCI) development can be a challenge for robotic, prosthesis and human-controlled systems. This work focuses on the implementation of a common spatial pattern (CSP) based algorithm to detect event related desynchronization patterns. Building on well-known previous work in this area, features are extracted by the filter bank common spatial pattern (FBCSP) method, and then weighted by a sensitive learning vector quantization (SLVQ) algorithm. In the current work, the radial basis function (RBF) is applied as the mapping kernel of kernel linear discriminant analysis (KLDA) on the weighted features, transferring the data into a higher dimension for better class separation. Afterwards, a support vector machine (SVM) with a generalized radial basis function (GRBF) kernel is employed to improve the efficiency and robustness of the classification. On average, 89.60% accuracy and 74.19% robustness are achieved. BCI Competition III data set IVa is used to evaluate the algorithm for detecting right hand and foot imagery movement patterns. Results show that the combination of KLDA with the SVM-GRBF classifier yields improvements of 8.9% in accuracy and 14.19% in robustness, respectively. For all subjects, it is concluded that mapping the CSP features into a higher dimension by RBF and utilizing GRBF as the kernel of SVM improve the accuracy and reliability of the proposed method.
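
    A compact sketch of the CSP feature-extraction core, assuming band-passed, centered EEG trials; the FBCSP filter bank, SLVQ weighting, KLDA mapping and GRBF kernel from the paper are omitted, with a standard RBF-kernel SVM as a stand-in.

        import numpy as np
        from scipy.linalg import eigh
        from sklearn.svm import SVC

        def csp_filters(trials_a, trials_b, n_pairs=3):
            """CSP spatial filters from two classes of band-passed EEG trials.

            trials_*: (n_trials, n_channels, n_samples) arrays.
            Returns a (2 * n_pairs, n_channels) filter matrix.
            """
            def mean_cov(trials):
                return np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)

            ca, cb = mean_cov(trials_a), mean_cov(trials_b)
            vals, vecs = eigh(ca, ca + cb)     # generalized eigenproblem
            order = np.argsort(vals)
            picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
            return vecs[:, picks].T

        def csp_log_variance(trial, W):
            """Classic log-variance CSP features for one trial."""
            z = W @ trial
            v = z.var(axis=1)
            return np.log(v / v.sum())

        # features -> SVC(kernel='rbf'), a stand-in for the paper's GRBF kernel.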

  5. Resolving anthropogenic aerosol pollution types - deconvolution and exploratory classification of pollution events

    NASA Astrophysics Data System (ADS)

    Äijälä, Mikko; Heikkinen, Liine; Fröhlich, Roman; Canonaco, Francesco; Prévôt, André S. H.; Junninen, Heikki; Petäjä, Tuukka; Kulmala, Markku; Worsnop, Douglas; Ehn, Mikael

    2017-03-01

    Mass spectrometric measurements commonly yield data on hundreds of variables over thousands of points in time. Refining and synthesizing this raw data into chemical information necessitates the use of advanced, statistics-based data analytical techniques. In the field of analytical aerosol chemistry, statistical dimensionality-reduction methods have become widespread in the last decade, yet comparable advanced chemometric techniques for data classification and identification remain marginal. Here we present an example of combining data dimensionality reduction (factorization) with exploratory classification (clustering), and show that the results can not only reproduce and corroborate earlier findings, but also complement and broaden our current perspectives on aerosol chemical classification. We find that applying positive matrix factorization to extract spectral characteristics of the organic component of air pollution plumes, together with an unsupervised clustering algorithm, k-means++, for classification, reproduces classical organic aerosol speciation schemes. Applying appropriately chosen metrics for spectral dissimilarity along with optimized data weighting, the source-specific pollution characteristics can be statistically resolved even for spectrally very similar aerosol types, such as different combustion-related anthropogenic aerosol species and atmospheric aerosols with a similar degree of oxidation. In addition to the typical oxidation level and source-driven aerosol classification, we were also able to classify and characterize outlier groups that would likely be disregarded in a more conventional analysis. Evaluating solution quality for the classification also provides means to assess the performance of mass spectral similarity metrics and optimize weighting for mass spectral variables. This facilitates algorithm-based evaluation of aerosol spectra, which may prove invaluable for future development of automatic methods for spectra identification and classification. Robust, statistics-based results and data visualizations also provide important clues to a human analyst on the existence and chemical interpretation of data structures. Applying these methods to a test set of data, aerosol mass spectrometric data of organic aerosol from a boreal forest site, yielded five to seven different recurring pollution types from various sources, including traffic, cooking, biomass burning and nearby sawmills. Additionally, three distinct, minor pollution types were discovered and identified as amine-dominated aerosols.
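
    The factorization-plus-clustering pipeline can be approximated as below; scikit-learn's NMF stands in for positive matrix factorization (which additionally weights residuals by measurement uncertainty), and the spectra, component counts and cluster counts are placeholders.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.decomposition import NMF

        # Placeholder organic-aerosol spectra: rows are time samples, columns m/z.
        rng = np.random.default_rng(1)
        X = rng.random((200, 120))

        # Dimensionality reduction: NMF as a stand-in for PMF.
        nmf = NMF(n_components=5, init='nndsvda', max_iter=500)
        scores = nmf.fit_transform(X)        # factor contributions per sample

        # Exploratory classification: k-means++ on the factor contributions.
        labels = KMeans(n_clusters=6, init='k-means++', n_init=10).fit_predict(scores)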

  6. Aperiodic Robust Model Predictive Control for Constrained Continuous-Time Nonlinear Systems: An Event-Triggered Approach.

    PubMed

    Liu, Changxin; Gao, Jian; Li, Huiping; Xu, Demin

    2018-05-01

    The event-triggered control is a promising solution to cyber-physical systems, such as networked control systems, multiagent systems, and large-scale intelligent systems. In this paper, we propose an event-triggered model predictive control (MPC) scheme for constrained continuous-time nonlinear systems with bounded disturbances. First, a time-varying tightened state constraint is computed to achieve robust constraint satisfaction, and an event-triggered scheduling strategy is designed in the framework of dual-mode MPC. Second, the sufficient conditions for ensuring feasibility and closed-loop robust stability are developed, respectively. We show that robust stability can be ensured and communication load can be reduced with the proposed MPC algorithm. Finally, numerical simulations and comparison studies are performed to verify the theoretical results.
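
    A toy sketch of the event-triggered idea, not of the paper's algorithm: a stabilizing feedback gain stands in for the MPC solve, and the plan is recomputed only when the measured state leaves a tube around the nominal prediction; the model, gain, threshold and noise level are all invented for illustration.

        import numpy as np

        # Toy double-integrator model; K plays the role of the MPC solution
        # (a real implementation would solve a constrained optimal control
        # problem at each trigger instead).
        A = np.array([[1.0, 0.1], [0.0, 1.0]])
        B = np.array([[0.0], [0.1]])
        K = np.array([[1.0, 1.5]])

        def solve_plan(x, horizon=10):
            """Nominal input sequence and predicted states from state x."""
            us, xs = [], []
            for _ in range(horizon):
                u = -K @ x
                x = A @ x + B @ u
                us.append(u)
                xs.append(x)
            return us, xs

        rng = np.random.default_rng(3)
        x = np.array([1.0, 0.0])
        us, xs = solve_plan(x)
        k, solves = 0, 1
        for t in range(50):
            x = A @ x + B @ us[min(k, len(us) - 1)] + rng.normal(scale=0.01, size=2)
            # Event trigger: re-solve only when the measured state leaves a
            # tube around the prediction; otherwise reuse the stored plan.
            if np.linalg.norm(x - xs[min(k, len(xs) - 1)]) > 0.05:
                us, xs = solve_plan(x)
                k, solves = 0, solves + 1
            else:
                k += 1
        print("optimizations triggered:", solves, "of 50 steps")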

  7. Teaching sexual history-taking skills using the Sexual Events Classification System.

    PubMed

    Fidler, Donald C; Petri, Justin Daniel; Chapman, Mark

    2010-01-01

    The authors review the literature about educational programs for teaching sexual history-taking skills and describe novel techniques for teaching these skills. Psychiatric residents enrolled in a brief sexual history-taking course that included instruction on the Sexual Events Classification System, feedback on residents' video-recorded interviews with simulated patients, discussion of videos that simulated bad interviews, simulated patients, and a competency scoring form to score a video of a simulated interview. After the course, residents completed an anonymous survey to assess the usefulness of the experience. After the course, most residents felt more comfortable taking sexual histories. They described the Sexual Events Classification System and simulated interviews as practical methods for teaching sexual history-taking skills. The Sexual Events Classification System and simulated patient experiences may serve as a practical model for teaching sexual history-taking skills to general psychiatric residents.

  8. Robust non-parametric one-sample tests for the analysis of recurrent events.

    PubMed

    Rebora, Paola; Galimberti, Stefania; Valsecchi, Maria Grazia

    2010-12-30

    One-sample non-parametric tests are proposed here for inference on recurring events. The focus is on the marginal mean function of events and the basis for inference is the standardized distance between the observed and the expected number of events under a specified reference rate. Different weights are considered in order to account for various types of alternative hypotheses on the mean function of the recurrent events process. A robust version and a stratified version of the test are also proposed. The performance of these tests was investigated through simulation studies under various underlying event generation processes, such as homogeneous and nonhomogeneous Poisson processes, autoregressive and renewal processes, with and without frailty effects. The robust versions of the test have been shown to be suitable in a wide variety of event generating processes. The motivating context is a study on gene therapy in a very rare immunodeficiency in children, where a major end-point is the recurrence of severe infections. Robust non-parametric one-sample tests for recurrent events can be useful to assess efficacy and especially safety in non-randomized studies or in epidemiological studies for comparison with a standard population. Copyright © 2010 John Wiley & Sons, Ltd.
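
    The core test statistic, the standardized distance between observed and expected event counts under a reference rate, can be sketched as follows; the Poisson-type variance and the example numbers are illustrative assumptions, and the paper's weighted and stratified versions are not shown.

        import numpy as np
        from scipy.stats import norm

        def one_sample_recurrent_test(event_counts, followup_times, ref_rate):
            """Standardized distance between observed and expected event counts.

            Expected counts come from a specified reference rate applied to
            each subject's follow-up time; a Poisson-type variance is assumed
            here, which is only one of the possible variance choices.
            """
            observed = np.sum(event_counts)
            expected = ref_rate * np.sum(followup_times)
            z = (observed - expected) / np.sqrt(expected)
            return z, 2 * norm.sf(abs(z))   # two-sided p-value

        # Hypothetical: 14 severe infections over 40 person-years versus a
        # reference rate of 0.25 events per person-year.
        print(one_sample_recurrent_test([14], [40.0], 0.25))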

  9. Defining and classifying medical error: lessons for patient safety reporting systems.

    PubMed

    Tamuz, M; Thomas, E J; Franchois, K E

    2004-02-01

    It is important for healthcare providers to report safety related events, but little attention has been paid to how the definition and classification of events affects a hospital's ability to learn from its experience. To examine how the definition and classification of safety related events influences key organizational routines for gathering information, allocating incentives, and analyzing event reporting data. In semi-structured interviews, professional staff and administrators in a tertiary care teaching hospital and its pharmacy were asked to describe the existing programs designed to monitor medication safety, including the reporting systems. With a focus primarily on the pharmacy staff, interviews were audio recorded, transcribed, and analyzed using qualitative research methods. Eighty six interviews were conducted, including 36 in the hospital pharmacy. Examples are presented which show that: (1) the definition of an event could lead to under-reporting; (2) the classification of a medication error into alternative categories can influence the perceived incentives and disincentives for incident reporting; (3) event classification can enhance or impede organizational routines for data analysis and learning; and (4) routines that promote organizational learning within the pharmacy can reduce the flow of medication error data to the hospital. These findings from one hospital raise important practical and research questions about how reporting systems are influenced by the definition and classification of safety related events. By understanding more clearly how hospitals define and classify their experience, we may improve our capacity to learn and ultimately improve patient safety.

  10. Classification and evaluation of the documentary-recorded storm events in the Annals of the Choson Dynasty (1392-1910), Korea

    NASA Astrophysics Data System (ADS)

    Yoo, Chulsang; Park, Minkyu; Kim, Hyeon Jun; Choi, Juhee; Sin, Jiye; Jun, Changhyun

    2015-01-01

    In this study, the analysis of documentary records on the storm events in the Annals of the Choson Dynasty, covering the entire period of 519 years from 1392 to 1910, was carried out. By applying various key words related to storm events, a total of 556 documentary records could be identified. The main objective of this study was to develop rules of classification for the documentary records on the storm events in the Annals of the Choson Dynasty. The results were also compared with the rainfall data of the traditional Korean rain gauge, named Chukwooki, which are available from 1777 to 1910 (about 130 years). The analysis is organized as follows. First, the frequency of the documents, their length, comments about the size of the inundated area, the number of casualties, the number of property losses, and the size of the countermeasures, etc. were considered to determine the magnitude of the events. To this end, rules of classification of the storm events are developed. Cases in which the word 'disaster' was used along with detailed information about the casualties and property damages were classified as high-level storm events. The high-level storm events were additionally sub-categorized into catastrophic, extreme, and severe events. Second, by applying the developed rules of classification, a total of 326 events were identified as high-level storm events during the 519 years of the Choson Dynasty. Among these high-level storm events, only 19 events were then classified as the catastrophic ones, 106 events as the extreme ones, and 201 events as the severe ones. The mean return period of these storm events was found to be about 30 years for the catastrophic events, 5 years for the extreme events, and 2-3 years for the severe events. Third, the classification results were verified considering the records of the traditional Korean rain gauge; it was found that the catastrophic events are strongly distinguished from other events with a mean total rainfall and a storm duration equal to 439.8 mm and 49.3 h, respectively. The return period of these catastrophic events was also estimated to be in the range 100-500 years.

  11. Compositional classification and sedimentological interpretation of the laminated lacustrine sediments at Baumkirchen (Western Austria) using XRF core scanning data

    NASA Astrophysics Data System (ADS)

    Barrett, Samuel; Tjallingii, Rik; Bloemsma, Menno; Brauer, Achim; Starnberger, Reinhard; Spötl, Christoph; Dulski, Peter

    2015-04-01

    The outcrop at Baumkirchen (Austria) encloses part of a unique sequence of laminated lacustrine sediments deposited during the last glacial cycle. A ~250m long composite sediment record recovered at this location now continuously covers the periods ~33 to ~45 ka BP (MIS 3) and ~59 to ~73 ka BP (MIS 4), which are separated by a hiatus. The well-laminated (mm-cm scale) and almost entirely clastic sediments reveal alternations of clayey silt and medium silt to very-fine sand layers. Although radiocarbon and optically stimulated luminescence (OSL) dating provide a robust chronology, accurate dating of the sediment laminations appears to be problematic due to very high sedimentation rates (3-8 cm/yr). X-ray fluorescence (XRF) core scanning provided a detailed ~150m long record of compositional changes of the sediments at Baumkirchen. Changes in the sediments are subtle and classification into different facies based on individual elements is therefore subjective. We applied a statistically robust clustering analysis to provide an objective compositional classification without prior knowledge, based on XRF measurements for 15 analysed elements (all those with an acceptable signal-to-noise ratio: Zr, Sr, Ca, Mn, Cu, Zn, Rb, Ni, Fe, K, Cr, V, Si, Ba, Ti). The clustering analysis indicates a distinct compositional change between sediments deposited below and above the stratigraphic hiatus, but also differentiates between individual different laminae. Preliminary results suggest variations in the sequence are largely controlled by the relative occurrence of different kinds of sediment represented by different clusters. Three clusters identify well-laminated sediments, visually similar in appearance, each dominated by an anti-correlation between Ca and one or more of the detrital elements K, Zr, Ti, Si and Fe. Two of these clusters occur throughout the entire sequence, one frequently and the other restricted to short sections, while the third occurs almost exclusively below the hiatus, indicating a geochemically distinct component that possibly represents a specific sediment source. In a similar manner, three other clusters identify event layers with different compositions of which two occur exclusively above the hiatus and one exclusively below. The variations in the occurrence of these clusters revealing distinct event layers suggest variations in dominant sediment source both above and below the hiatus and within the section above it. More detailed comparisons between compositional variations of the individual clusters obtained from biplots and microscopic observations on thin sections, grain-size analyses, and mineralogical analyses are needed to further differentiate between sediment sources and transport mechanisms.

  12. Workshop on Algorithms for Time-Series Analysis

    NASA Astrophysics Data System (ADS)

    Protopapas, Pavlos

    2012-04-01

    This Workshop covered the four major subjects listed below in two 90-minute sessions. Each talk or tutorial allowed questions, and concluded with a discussion. Classification: Automatic classification using machine-learning methods is becoming a standard in surveys that generate large datasets. Ashish Mahabal (Caltech) reviewed various methods, and presented examples of several applications. Time-Series Modelling: Suzanne Aigrain (Oxford University) discussed autoregressive models and multivariate approaches such as Gaussian Processes. Meta-classification/mixture of expert models: Karim Pichara (Pontificia Universidad Católica, Chile) described the substantial promise which machine-learning classification methods are now showing in automatic classification, and discussed how the various methods can be combined together. Event Detection: Pavlos Protopapas (Harvard) addressed methods of fast identification of events with low signal-to-noise ratios, enlarging on the characterization and statistical issues of low signal-to-noise ratios and rare events.

  13. A robust dataset-agnostic heart disease classifier from Phonocardiogram.

    PubMed

    Banerjee, Rohan; Dutta Choudhury, Anirban; Deshpande, Parijat; Bhattacharya, Sakyajit; Pal, Arpan; Mandana, K M

    2017-07-01

    Automatic classification of normal and abnormal heart sounds is a popular area of research. However, building a robust algorithm unaffected by signal quality and patient demography is a challenge. In this paper we have analysed a wide set of Phonocardiogram (PCG) features in the time and frequency domains, along with morphological and statistical features, to construct a robust and discriminative feature set for dataset-agnostic classification of normal subjects and cardiac patients. The large and open-access database made available in the Physionet 2016 challenge was used for feature selection, internal validation and creation of training models. A second dataset of 41 PCG segments, collected using our in-house smartphone-based digital stethoscope in an Indian hospital, was used for performance evaluation. Our proposed methodology yielded sensitivity and specificity scores of 0.76 and 0.75 respectively on the test dataset in classifying cardiovascular diseases. The methodology also outperformed three popular prior-art approaches when applied to the same dataset.

  14. Hierarchical classification method and its application in shape representation

    NASA Astrophysics Data System (ADS)

    Ireton, M. A.; Oakley, John P.; Xydeas, Costas S.

    1992-04-01

    In this paper we describe a technique for performing shape-based content retrieval of images from a large database. In order to be able to formulate user-generated queries about visual objects, we have developed a hierarchical classification technique. This hierarchical classification technique enables similarity matching between objects, with the position in the hierarchy signifying the level of generality to be used in the query. The classification technique is unsupervised, robust, and general; it can be applied to any suitable parameter set. To establish the potential of this classifier for aiding visual querying, we have applied it to the classification of the 2-D outlines of leaves.
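
    In the same spirit, a dendrogram over shape parameters can be cut at different heights to trade specificity against generality; the sketch below uses SciPy's agglomerative clustering as a stand-in for the paper's own unsupervised hierarchical classifier, with placeholder data.

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage

        # One row of shape parameters per 2-D outline (placeholder values).
        rng = np.random.default_rng(2)
        X = rng.normal(size=(50, 8))

        Z = linkage(X, method='average')

        # Cutting the dendrogram at different heights gives the query's level
        # of generality: a low cut yields many specific classes, a high cut
        # yields a few broad ones.
        specific = fcluster(Z, t=10, criterion='maxclust')
        general = fcluster(Z, t=3, criterion='maxclust')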

  15. Passive polarimetric imagery-based material classification robust to illumination source position and viewpoint.

    PubMed

    Thilak Krishna, Thilakam Vimal; Creusere, Charles D; Voelz, David G

    2011-01-01

    Polarization, a property of light that conveys information about the transverse electric field orientation, complements other attributes of electromagnetic radiation such as intensity and frequency. Using multiple passive polarimetric images, we develop an iterative, model-based approach to estimate the complex index of refraction and apply it to target classification.

  16. Label-noise resistant logistic regression for functional data classification with an application to Alzheimer's disease study.

    PubMed

    Lee, Seokho; Shin, Hyejin; Lee, Sang Han

    2016-12-01

    Alzheimer's disease (AD) is usually diagnosed by clinicians through cognitive and functional performance tests, with a potential risk of misdiagnosis. Since the progression of AD is known to cause structural changes in the corpus callosum (CC), CC thickness can be used as a functional covariate in the AD classification problem for a diagnosis. However, misclassified class labels negatively impact classification performance. Motivated by AD-CC association studies, we propose a logistic regression for functional data classification that is robust to misdiagnosis or label noise. Specifically, our logistic regression model is constructed by adding individual intercepts to the functional logistic regression model. This approach makes it possible to indicate which observations are possibly mislabeled and also leads to a robust and efficient classifier. An effective estimation procedure based on the MM algorithm provides simple closed-form update formulas. We test our method using synthetic datasets to demonstrate its superiority over an existing method, and apply it to differentiating patients with AD from healthy normals based on CC thickness from MRI. © 2016, The International Biometric Society.

  17. Automatic Screening and Grading of Age-Related Macular Degeneration from Texture Analysis of Fundus Images

    PubMed Central

    Phan, Thanh Vân; Seoud, Lama; Chakor, Hadi; Cheriet, Farida

    2016-01-01

    Age-related macular degeneration (AMD) is a disease which causes visual deficiency and irreversible blindness in the elderly. In this paper, an automatic classification method for AMD is proposed to perform robust and reproducible assessments in a telemedicine context. First, a study was carried out to highlight the most relevant features for AMD characterization based on texture, color, and visual context in fundus images. A support vector machine and a random forest were used to classify images according to the different AMD stages following the AREDS protocol and to evaluate the features' relevance. Experiments were conducted on a database of 279 fundus images coming from a telemedicine platform. The results demonstrate that local binary patterns in multiresolution are the most relevant for AMD classification, regardless of the classifier used. Depending on the classification task, our method achieves promising performances with areas under the ROC curve between 0.739 and 0.874 for screening and between 0.469 and 0.685 for grading. Moreover, the proposed automatic AMD classification system is robust with respect to image quality. PMID:27190636
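
    The texture features found most relevant here, local binary patterns in multiresolution, are straightforward to compute with scikit-image; the radii, neighbor counts and downstream classifier in the sketch below are illustrative choices, not the paper's exact settings.

        import numpy as np
        from skimage.feature import local_binary_pattern

        def lbp_histogram(gray, radius, n_points):
            """Uniform-LBP histogram of a grayscale fundus image at one scale."""
            codes = local_binary_pattern(gray, n_points, radius, method='uniform')
            n_bins = n_points + 2     # uniform patterns plus one catch-all bin
            hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
            return hist

        def multiresolution_lbp(gray, radii=(1, 2, 3)):
            """Concatenated LBP histograms over several radii."""
            return np.concatenate([lbp_histogram(gray, r, 8 * r) for r in radii])

        # multiresolution_lbp(image) -> feature vector for an SVM or random forest.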

  18. Using a Discrete-Choice Experiment Involving Cost to Value a Classification System Measuring the Quality-of-Life Impact of Self-Management for Diabetes.

    PubMed

    Rowen, Donna; Stevens, Katherine; Labeit, Alexander; Elliott, Jackie; Mulhern, Brendan; Carlton, Jill; Basarir, Hasan; Ratcliffe, Julie; Brazier, John

    2018-01-01

    To describe the use of a novel approach in health valuation of a discrete-choice experiment (DCE) including a cost attribute to value a recently developed classification system for measuring the quality-of-life impact (both health and treatment experience) of self-management for diabetes. A large online survey was conducted using DCE with cost on UK respondents from the general population (n = 1497) and individuals with diabetes (n = 405). The data were modeled using a conditional logit model with robust standard errors. The marginal rate of substitution was used to generate willingness-to-pay (WTP) estimates for every state defined by the classification system. Robustness of results was assessed by including interaction effects for household income. There were some logical inconsistencies and insignificant coefficients for the milder levels of some attributes. There were some differences in the rank ordering of different attributes for the general population and diabetic patients. The WTP to avoid the most severe state was £1118.53 per month for the general population and £2356.02 per month for the diabetic patient population. The results were largely robust. Health and self-management can be valued in a single classification system using DCE with cost. The marginal rate of substitution for key attributes can be used to inform cost-benefit analysis of self-management interventions in diabetes using results from clinical studies in which this new classification system has been applied. The method shows promise, but found large WTP estimates exceeding the cost levels used in the survey. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  19. Current Assessment and Classification of Suicidal Phenomena using the FDA 2012 Draft Guidance Document on Suicide Assessment: A Critical Review.

    PubMed

    Sheehan, David V; Giddens, Jennifer M; Sheehan, Kathy Harnett

    2014-09-01

    Standard international classification criteria require that classification categories be comprehensive to avoid type II error. Categories should be mutually exclusive and definitions should be clear and unambiguous (to avoid type I and type II errors). In addition, the classification system should be robust enough to last over time and provide comparability between data collections. This article was designed to evaluate the extent to which the classification system contained in the United States Food and Drug Administration 2012 Draft Guidance for the prospective assessment and classification of suicidal ideation and behavior in clinical trials meets these criteria. A critical review is used to assess the extent to which the proposed categories contained in the Food and Drug Administration 2012 Draft Guidance are comprehensive, unambiguous, and robust. Assumptions that underlie the classification system are also explored. The Food and Drug Administration classification system contained in the 2012 Draft Guidance does not capture the full range of suicidal ideation and behavior (type II error). Definitions, moreover, are frequently ambiguous (susceptible to multiple interpretations), and the potential for misclassification (type I and type II errors) is compounded by frequent mismatches in category titles and definitions. These issues have the potential to compromise data comparability within clinical trial sites, across sites, and over time. These problems need to be remedied because of the potential for flawed data output and consequent threats to public health, to research on the safety of medications, and to the search for effective medication treatments for suicidality.

  20. Decentralized asset management for collaborative sensing

    NASA Astrophysics Data System (ADS)

    Malhotra, Raj P.; Pribilski, Michael J.; Toole, Patrick A.; Agate, Craig

    2017-05-01

    There has been increased impetus to leverage Small Unmanned Aerial Systems (SUAS) for collaborative sensing applications in which many platforms work together to provide critical situation awareness in dynamic environments. Such applications require critical sensor observations to be made at the right place and time to facilitate the detection, tracking, and classification of ground-based objects. This further requires rapid response to real-world events and the balancing of multiple, competing mission objectives. In this context, human operators become overwhelmed with the management of many platforms. Further, current automated planning paradigms tend to be centralized and do not scale up well to many collaborating platforms. We introduce a decentralized approach based upon information theory and distributed fusion which enables us to scale up to large numbers of collaborating Small Unmanned Aerial Systems (SUAS) platforms. This is exercised against a military application involving the autonomous detection, tracking, and classification of critical mobile targets. We further show that, based upon Monte Carlo simulation results, our decentralized approach outperforms more static management strategies employed by human operators and achieves similar results to a centralized approach while being scalable and robust to degradation of communication. Finally, we describe the limitations of our approach and future directions for our research.

  1. Aural Classification and Temporal Robustness

    DTIC Science & Technology

    2010-11-01

    Defence R&D Canada – Atlantic; November 2010. Context (truncated): this project aims to develop a robust classifier that uses... Table-of-contents fragments: discriminant score; principal component analysis; ...allows class separation. Figure 7 caption: hypothetical clutter and target pdfs and posterior probabilities shown as surfaces.

  2. Robust pattern decoding in shape-coded structured light

    NASA Astrophysics Data System (ADS)

    Tang, Suming; Zhang, Xu; Song, Zhan; Song, Lifang; Zeng, Hai

    2017-09-01

    Decoding is a challenging and complex problem in a coded structured light system. In this paper, a robust pattern decoding method is proposed for shape-coded structured light, in which the pattern is designed as a grid shape with embedded geometrical shapes. In our decoding method, advancements are made in three steps. First, a multi-template feature detection algorithm is introduced to detect the feature points, which are the intersections of each two orthogonal grid-lines. Second, pattern element identification is modelled as a supervised classification problem and a deep neural network is applied for the accurate classification of pattern elements. Before that, a training dataset is established, which contains a mass of pattern elements with various kinds of blurring and distortion. Third, an error correction mechanism based on epipolar, coplanarity and topological constraints is presented to reduce false matches. In the experiments, several complex objects, including a human hand, are chosen to test the accuracy and robustness of the proposed method. The experimental results show that our decoding method not only has high decoding accuracy, but is also strongly robust to surface color and complex textures.

  3. A robust data scaling algorithm to improve classification accuracies in biomedical data.

    PubMed

    Cao, Xi Hang; Stojkovic, Ivan; Obradovic, Zoran

    2016-09-09

    Machine learning models have been adapted in biomedical research and practice for knowledge discovery and decision support. While mainstream biomedical informatics research focuses on developing more accurate models, the importance of data preprocessing draws less attention. We propose the Generalized Logistic (GL) algorithm that scales data uniformly to an appropriate interval by learning a generalized logistic function to fit the empirical cumulative distribution function of the data. The GL algorithm is simple yet effective; it is intrinsically robust to outliers, so it is particularly suitable for diagnostic/classification models in clinical/medical applications where the number of samples is usually small; it scales the data in a nonlinear fashion, which leads to potential improvement in accuracy. To evaluate the effectiveness of the proposed algorithm, we conducted experiments on 16 binary classification tasks with different variable types covering a wide range of applications. The resultant performance in terms of area under the receiver operating characteristic curve (AUROC) and percentage of correct classification showed that models learned using data scaled by the GL algorithm outperform the ones using data scaled by the Min-max and the Z-score algorithms, which are the most commonly used data scaling algorithms. The proposed GL algorithm is simple and effective. It is robust to outliers, so no additional denoising or outlier detection step is needed in data preprocessing. Empirical results also show that models learned from data scaled by the GL algorithm have higher accuracy than models using the commonly used data scaling algorithms.
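
    A minimal sketch of the idea, fitting a logistic curve to the empirical cumulative distribution function of a training column and using it to scale new values; the paper's GL algorithm uses a richer generalized logistic, so the two-parameter form and starting values here are simplifying assumptions.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic_cdf(x, a, b):
            """Two-parameter logistic; the paper's generalized form is richer."""
            return 1.0 / (1.0 + np.exp(-a * (x - b)))

        def gl_scale(train_values, new_values):
            """Scale one variable into (0, 1) via a logistic fit to its ECDF."""
            xs = np.sort(np.asarray(train_values, dtype=float))
            ecdf = np.arange(1, len(xs) + 1) / (len(xs) + 1)
            p0 = [1.0 / (xs.std() + 1e-12), xs.mean()]
            (a, b), _ = curve_fit(logistic_cdf, xs, ecdf, p0=p0)
            return logistic_cdf(np.asarray(new_values, dtype=float), a, b)

        # Outliers land on the flat tails of the fitted curve, so they are
        # squashed into (0, 1) instead of stretching the scale the way
        # min-max scaling does.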

  4. Columbia Classification Algorithm of Suicide Assessment (C-CASA): classification of suicidal events in the FDA's pediatric suicidal risk analysis of antidepressants.

    PubMed

    Posner, Kelly; Oquendo, Maria A; Gould, Madelyn; Stanley, Barbara; Davies, Mark

    2007-07-01

    To evaluate the link between antidepressants and suicidal behavior and ideation (suicidality) in youth, adverse events from pediatric clinical trials were classified in order to identify suicidal events. The authors describe the Columbia Classification Algorithm for Suicide Assessment (C-CASA), a standardized suicidal rating system that provided data for the pediatric suicidal risk analysis of antidepressants conducted by the Food and Drug Administration (FDA). Adverse events (N=427) from 25 pediatric antidepressant clinical trials were systematically identified by pharmaceutical companies. Randomly assigned adverse events were evaluated by three of nine independent expert suicidologists using the Columbia classification algorithm. Reliability of the C-CASA ratings and agreement with pharmaceutical company classification were estimated. Twenty-six new, possibly suicidal events (behavior and ideation) that were not originally identified by pharmaceutical companies were identified in the C-CASA, and 12 events originally labeled as suicidal by pharmaceutical companies were eliminated, which resulted in a total of 38 discrepant ratings. For the specific label of "suicide attempt," a relatively low level of agreement was observed between the C-CASA and pharmaceutical company ratings, with the C-CASA reporting a 50% reduction in ratings. Thus, although the C-CASA resulted in the identification of more suicidal events overall, fewer events were classified as suicide attempts. Additionally, the C-CASA ratings were highly reliable (intraclass correlation coefficient [ICC]=0.89). Utilizing a methodical, anchored approach to categorizing suicidality provides an accurate and comprehensive identification of suicidal events. The FDA's audit of the C-CASA demonstrated excellent transportability of this approach. The Columbia algorithm was used to classify suicidal adverse events in the recent FDA adult antidepressant safety analyses and has also been mandated to be applied to all anticonvulsant trials and other centrally acting agents and nonpsychotropic drugs.

  5. Columbia Classification Algorithm of Suicide Assessment (C-CASA): Classification of Suicidal Events in the FDA’s Pediatric Suicidal Risk Analysis of Antidepressants

    PubMed Central

    Posner, Kelly; Oquendo, Maria A.; Gould, Madelyn; Stanley, Barbara; Davies, Mark

    2013-01-01

    Objective To evaluate the link between antidepressants and suicidal behavior and ideation (suicidality) in youth, adverse events from pediatric clinical trials were classified in order to identify suicidal events. The authors describe the Columbia Classification Algorithm for Suicide Assessment (C-CASA), a standardized suicidal rating system that provided data for the pediatric suicidal risk analysis of antidepressants conducted by the Food and Drug Administration (FDA). Method Adverse events (N=427) from 25 pediatric antidepressant clinical trials were systematically identified by pharmaceutical companies. Randomly assigned adverse events were evaluated by three of nine independent expert suicidologists using the Columbia classification algorithm. Reliability of the C-CASA ratings and agreement with pharmaceutical company classification were estimated. Results Twenty-six new, possibly suicidal events (behavior and ideation) that were not originally identified by pharmaceutical companies were identified in the C-CASA, and 12 events originally labeled as suicidal by pharmaceutical companies were eliminated, which resulted in a total of 38 discrepant ratings. For the specific label of “suicide attempt,” a relatively low level of agreement was observed between the C-CASA and pharmaceutical company ratings, with the C-CASA reporting a 50% reduction in ratings. Thus, although the C-CASA resulted in the identification of more suicidal events overall, fewer events were classified as suicide attempts. Additionally, the C-CASA ratings were highly reliable (intraclass correlation coefficient [ICC]=0.89). Conclusions Utilizing a methodical, anchored approach to categorizing suicidality provides an accurate and comprehensive identification of suicidal events. The FDA’s audit of the C-CASA demonstrated excellent transportability of this approach. The Columbia algorithm was used to classify suicidal adverse events in the recent FDA adult antidepressant safety analyses and has also been mandated to be applied to all anticonvulsant trials and other centrally acting agents and nonpsychotropic drugs. PMID:17606655

  6. Interrater agreement in visual scoring of neonatal seizures based on majority voting on a web-based system: The Neoguard EEG database.

    PubMed

    Dereymaeker, Anneleen; Ansari, Amir H; Jansen, Katrien; Cherian, Perumpillichira J; Vervisch, Jan; Govaert, Paul; De Wispelaere, Leen; Dielman, Charlotte; Matic, Vladimir; Dorado, Alexander Caicedo; De Vos, Maarten; Van Huffel, Sabine; Naulaers, Gunnar

    2017-09-01

    To assess interrater agreement based on majority voting in visual scoring of neonatal seizures. An online platform was designed based on a multicentre seizure EEG database. Consensus decision based on 'majority voting' and interrater agreement was estimated using Fleiss' Kappa. The influences of different factors on agreement were determined. 1919 events extracted from 280 h of EEG from 71 neonates were reviewed by 4 raters. Majority voting was applied to assign a seizure/non-seizure classification. 44% of events were classified with high, 36% with moderate, and 20% with poor agreement, resulting in a Kappa value of 0.39. 68% of events were labelled as seizures, and in 46%, all raters were convinced about electrographic seizures. The most common seizure duration was <30 s. Raters agreed best for seizures lasting 60-120 s. There was a significant difference in electrographic characteristics of seizures versus dubious events, with seizures having longer duration, higher power and amplitude. There is a wide variability in identifying rhythmic ictal and non-ictal EEG events, and only the most robust ictal patterns are consistently agreed upon. Database composition and electrographic characteristics are important factors that influence interrater agreement. The use of well-described databases and input of different experts will improve neonatal EEG interpretation and help to develop uniform seizure definitions, useful for evidence-based studies of seizure recognition and management. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
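
    The agreement statistic reported here, Fleiss' Kappa, is straightforward to compute from a table of per-event rating counts. Below is a minimal Python sketch of both the kappa computation and the majority-voting step; it is illustrative only (not the Neoguard platform's code), and the four-rater `votes` matrix is a made-up example.

```python
import numpy as np

def fleiss_kappa(ratings):
    """Fleiss' kappa for an (n_events x n_categories) matrix of rating counts.

    ratings[i, j] = number of raters assigning event i to category j;
    every row must sum to the same number of raters.
    """
    ratings = np.asarray(ratings, dtype=float)
    n_events, _ = ratings.shape
    n_raters = ratings[0].sum()

    # Per-event agreement: fraction of concordant rater pairs.
    p_i = (np.sum(ratings ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()                                  # observed agreement
    p_j = ratings.sum(axis=0) / (n_events * n_raters)   # category prevalence
    p_e = np.sum(p_j ** 2)                              # chance agreement
    return (p_bar - p_e) / (1.0 - p_e)

# Hypothetical example: 4 raters vote seizure / non-seizure on 3 events.
votes = np.array([[4, 0],   # unanimous seizure
                  [3, 1],   # majority seizure
                  [2, 2]])  # split decision
majority_label = votes.argmax(axis=1)   # majority-voting consensus per event
print(majority_label, fleiss_kappa(votes))
```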

  7. Selective classification for improved robustness of myoelectric control under nonideal conditions.

    PubMed

    Scheme, Erik J; Englehart, Kevin B; Hudgins, Bernard S

    2011-06-01

    Recent literature in pattern recognition-based myoelectric control has highlighted a disparity between classification accuracy and the usability of upper limb prostheses. This paper suggests that the conventionally defined classification accuracy may be idealistic and may not reflect true clinical performance. Herein, a novel myoelectric control system based on a selective multiclass one-versus-one classification scheme, capable of rejecting unknown data patterns, is introduced. This scheme is shown to outperform nine other popular classifiers when compared using conventional classification accuracy as well as a form of leave-one-out analysis that may be more representative of real prosthetic use. Additionally, the classification scheme allows for real-time, independent adjustment of individual class-pair boundaries making it flexible and intuitive for clinical use.
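
    The core idea, a one-versus-one ensemble that may reject a pattern instead of forcing a decision, can be sketched as below. This is a simplified stand-in rather than the authors' system: the unanimity-based rejection rule and the logistic-regression base learners are assumptions for illustration, whereas the paper adjusts individual class-pair boundaries in real time.

```python
import numpy as np
from itertools import combinations
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

class OneVsOneWithReject:
    """One-vs-one ensemble that outputs -1 (reject) for ambiguous patterns.

    A sample is accepted only if one class wins every pairwise vote it is
    involved in; anything less decisive is treated as an unknown pattern.
    """
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.models_ = []
        for a, b in combinations(self.classes_, 2):
            mask = np.isin(y, [a, b])
            self.models_.append(
                LogisticRegression(max_iter=1000).fit(X[mask], y[mask]))
        return self

    def predict(self, X):
        votes = np.zeros((len(X), len(self.classes_)), dtype=int)
        for model in self.models_:
            pred = model.predict(X)
            for k, c in enumerate(self.classes_):
                votes[:, k] += (pred == c)
        winner = self.classes_[votes.argmax(axis=1)]
        unanimous = votes.max(axis=1) == len(self.classes_) - 1
        return np.where(unanimous, winner, -1)   # -1 = rejected pattern

X, y = make_classification(n_samples=300, n_classes=3, n_informative=5,
                           random_state=0)
clf = OneVsOneWithReject().fit(X, y)
print("rejection rate:", (clf.predict(X) == -1).mean())
```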

  8. Combining TerraSAR-X and SPOT-5 data for object-based landslide detection

    NASA Astrophysics Data System (ADS)

    Friedl, B.; Hölbling, D.; Füreder, P.

    2012-04-01

    Landslide detection and classification is an essential requirement in pre- and post-disaster hazard analysis. In earlier studies, landslide detection was often achieved through time-consuming and cost-intensive field surveys and visual orthophoto interpretation. Recent studies show that Earth Observation (EO) data offer new opportunities for fast, reliable and accurate landslide detection and classification, which may contribute to effective landslide monitoring and landslide hazard management. To ensure the fast recognition and classification of landslides at a regional scale, a (semi-)automated object-based landslide detection approach is established for a study site situated in the Huaguoshan catchment, Southern Taiwan. The study site exhibits a high vulnerability to landslides and debris flows, which are predominantly typhoon-induced. Through the integration of optical satellite data (SPOT-5 with 2.5 m GSD), SAR (Synthetic Aperture Radar) data (TerraSAR-X Spotlight with 2.95 m GSD) and digital elevation information (DEM with 5 m GSD) including its derived products (e.g. slope, curvature, flow accumulation), landslides may be examined more efficiently than by relying on a single data source only. The combination of optical and SAR data in an object-based image analysis (OBIA) domain for landslide detection and classification has not been investigated so far, even though SAR imagery shows valuable properties for landslide detection that differ from those of optical data (e.g. high sensitivity to surface roughness and soil moisture). The main purpose of this study is to recognize and analyze existing landslides by applying object-based image analysis using eCognition software. OBIA provides a framework for examining features defined by spectral, spatial, textural, contextual as well as hierarchical properties. Objects are derived through image segmentation and serve as input for the classification process, which relies on transparent rulesets representing knowledge. Through class modeling, an iterative process of segmentation and classification, objects can be addressed individually in a region-specific manner. The presented approach is marked by the comprehensive use of available data sets from various sources. This full integration of optical, SAR and DEM data supports the development of a robust method, which makes use of the most appropriate characteristics (e.g. spectral, textural, contextual) of each data set. The proposed method contributes to more rapid and accurate landslide mapping in order to assist disaster and crisis management. SAR data especially proves useful in the aftermath of an event, as radar sensors are largely independent of illumination and weather conditions and data is therefore more likely to be available. The full data integration yields a robust approach for the detection and classification of landslides. However, more research is needed to make the best of the integration of SAR data in an object-based environment and to make the approach more easily adaptable to different study sites and data.

  9. Elastic SCAD as a novel penalization method for SVM classification tasks in high-dimensional data.

    PubMed

    Becker, Natalia; Toedt, Grischa; Lichter, Peter; Benner, Axel

    2011-05-09

    Classification and variable selection play an important role in knowledge discovery in high-dimensional data. Although Support Vector Machine (SVM) algorithms are among the most powerful classification and prediction methods with a wide range of scientific applications, the SVM does not include automatic feature selection and therefore a number of feature selection procedures have been developed. Regularisation approaches extend SVM to a feature selection method in a flexible way using penalty functions like LASSO, SCAD and Elastic Net. We propose a novel penalty function for SVM classification tasks, Elastic SCAD, a combination of SCAD and ridge penalties which overcomes the limitations of each penalty alone. Since SVM models are extremely sensitive to the choice of tuning parameters, we adopted an interval search algorithm, which in comparison to a fixed grid search finds rapidly and more precisely a global optimal solution. Feature selection methods with combined penalties (Elastic Net and Elastic SCAD SVMs) are more robust to a change of the model complexity than methods using single penalties. Our simulation study showed that Elastic SCAD SVM outperformed LASSO (L1) and SCAD SVMs. Moreover, Elastic SCAD SVM provided sparser classifiers in terms of median number of features selected than Elastic Net SVM and often better predicted than Elastic Net in terms of misclassification error. Finally, we applied the penalization methods described above on four publicly available breast cancer data sets. Elastic SCAD SVM was the only method providing robust classifiers in sparse and non-sparse situations. The proposed Elastic SCAD SVM algorithm provides the advantages of the SCAD penalty and at the same time avoids sparsity limitations for non-sparse data. We were first to demonstrate that the integration of the interval search algorithm and penalized SVM classification techniques provides fast solutions on the optimization of tuning parameters. The penalized SVM classification algorithms as well as fixed grid and interval search for finding appropriate tuning parameters were implemented in our freely available R package 'penalizedSVM'. We conclude that the Elastic SCAD SVM is a flexible and robust tool for classification and feature selection tasks for high-dimensional data such as microarray data sets.
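
    SCAD and Elastic SCAD themselves are implemented in the authors' R package 'penalizedSVM'; scikit-learn offers no SCAD penalty, so the Python sketch below only illustrates the adjacent idea of a combined L1/L2 (elastic net) penalty producing sparse linear classifiers in high dimensions, with tuning done over a small grid. All parameter values and the synthetic data are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV

# High-dimensional toy problem: 500 features, 40 of them informative.
X, y = make_classification(n_samples=200, n_features=500,
                           n_informative=40, random_state=0)

# Hinge loss + elastic-net penalty ~ a linear SVM with combined L1/L2
# regularisation; l1_ratio balances sparsity against the ridge part.
grid = GridSearchCV(
    SGDClassifier(loss="hinge", penalty="elasticnet", max_iter=5000),
    {"alpha": np.logspace(-4, -1, 6), "l1_ratio": [0.2, 0.5, 0.8]},
    cv=5)
grid.fit(X, y)

coef = grid.best_estimator_.coef_.ravel()
print("features selected:", int(np.sum(coef != 0)), "of", X.shape[1])
```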

  10. Elastic SCAD as a novel penalization method for SVM classification tasks in high-dimensional data

    PubMed Central

    2011-01-01

    Background Classification and variable selection play an important role in knowledge discovery in high-dimensional data. Although Support Vector Machine (SVM) algorithms are among the most powerful classification and prediction methods with a wide range of scientific applications, the SVM does not include automatic feature selection and therefore a number of feature selection procedures have been developed. Regularisation approaches extend SVM to a feature selection method in a flexible way using penalty functions like LASSO, SCAD and Elastic Net. We propose a novel penalty function for SVM classification tasks, Elastic SCAD, a combination of SCAD and ridge penalties which overcomes the limitations of each penalty alone. Since SVM models are extremely sensitive to the choice of tuning parameters, we adopted an interval search algorithm, which in comparison to a fixed grid search finds rapidly and more precisely a global optimal solution. Results Feature selection methods with combined penalties (Elastic Net and Elastic SCAD SVMs) are more robust to a change of the model complexity than methods using single penalties. Our simulation study showed that Elastic SCAD SVM outperformed LASSO (L1) and SCAD SVMs. Moreover, Elastic SCAD SVM provided sparser classifiers in terms of median number of features selected than Elastic Net SVM and often better predicted than Elastic Net in terms of misclassification error. Finally, we applied the penalization methods described above on four publicly available breast cancer data sets. Elastic SCAD SVM was the only method providing robust classifiers in sparse and non-sparse situations. Conclusions The proposed Elastic SCAD SVM algorithm provides the advantages of the SCAD penalty and at the same time avoids sparsity limitations for non-sparse data. We were first to demonstrate that the integration of the interval search algorithm and penalized SVM classification techniques provides fast solutions on the optimization of tuning parameters. The penalized SVM classification algorithms as well as fixed grid and interval search for finding appropriate tuning parameters were implemented in our freely available R package 'penalizedSVM'. We conclude that the Elastic SCAD SVM is a flexible and robust tool for classification and feature selection tasks for high-dimensional data such as microarray data sets. PMID:21554689

  11. 6 CFR 7.24 - Duration of classification.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 6 Domestic Security 1 2013-01-01 2013-01-01 false Duration of classification. 7.24 Section 7.24... INFORMATION Classified Information § 7.24 Duration of classification. (a) At the time of original classification, original classification authorities shall apply a date or event in which the information will be...

  12. 6 CFR 7.24 - Duration of classification.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 6 Domestic Security 1 2014-01-01 2014-01-01 false Duration of classification. 7.24 Section 7.24... INFORMATION Classified Information § 7.24 Duration of classification. (a) At the time of original classification, original classification authorities shall apply a date or event in which the information will be...

  13. 6 CFR 7.24 - Duration of classification.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 6 Domestic Security 1 2012-01-01 2012-01-01 false Duration of classification. 7.24 Section 7.24... INFORMATION Classified Information § 7.24 Duration of classification. (a) At the time of original classification, original classification authorities shall apply a date or event in which the information will be...

  14. 6 CFR 7.24 - Duration of classification.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 6 Domestic Security 1 2011-01-01 2011-01-01 false Duration of classification. 7.24 Section 7.24... INFORMATION Classified Information § 7.24 Duration of classification. (a) At the time of original classification, original classification authorities shall apply a date or event in which the information will be...

  15. Ensemble Sparse Classification of Alzheimer’s Disease

    PubMed Central

    Liu, Manhua; Zhang, Daoqiang; Shen, Dinggang

    2012-01-01

    The high-dimensional pattern classification methods, e.g., support vector machines (SVM), have been widely investigated for analysis of structural and functional brain images (such as magnetic resonance imaging (MRI)) to assist the diagnosis of Alzheimer’s disease (AD), including its prodromal stage, i.e., mild cognitive impairment (MCI). Most existing classification methods extract features from neuroimaging data and then construct a single classifier to perform classification. However, due to noise and the small sample size of neuroimaging data, it is challenging to train a single global classifier that is robust enough to achieve good classification performance. In this paper, instead of building a single global classifier, we propose a local patch-based subspace ensemble method which builds multiple individual classifiers based on different subsets of local patches and then combines them for more accurate and robust classification. Specifically, to capture the local spatial consistency, each brain image is partitioned into a number of local patches and a subset of patches is randomly selected from the patch pool to build a weak classifier. Here, the sparse representation-based classification (SRC) method, which has been shown to be effective for classification of image data (e.g., faces), is used to construct each weak classifier. Then, multiple weak classifiers are combined to make the final decision. We evaluate our method on 652 subjects (including 198 AD patients, 225 MCI and 229 normal controls) from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database using MR images. The experimental results show that our method achieves an accuracy of 90.8% and an area under the ROC curve (AUC) of 94.86% for AD classification, and an accuracy of 87.85% and an AUC of 92.90% for MCI classification, demonstrating very promising performance compared with the state-of-the-art methods for AD/MCI classification using MR images. PMID:22270352
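
    The weak learner described is a sparse representation-based classifier: a test patch is coded over a dictionary of training patches and assigned to the class whose atoms reconstruct it with the smallest residual. A minimal single-classifier sketch follows (the paper votes over many such classifiers, each built from a random patch subset); orthogonal matching pursuit as the sparse coder and the toy dictionary are assumptions.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(D, labels, x, n_nonzero=10):
    """Sparse representation-based classification (SRC).

    Code x over the dictionary D (columns = training samples), then pick
    the class whose coefficients reconstruct x with the least residual.
    """
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero).fit(D, x)
    alpha = omp.coef_
    best, best_err = None, np.inf
    for c in np.unique(labels):
        a_c = np.where(labels == c, alpha, 0.0)   # keep class-c coefficients
        err = np.linalg.norm(x - D @ a_c)
        if err < best_err:
            best, best_err = c, err
    return best

rng = np.random.default_rng(0)
D = rng.normal(size=(50, 40))              # 40 training patches, 50-dim features
labels = np.repeat([0, 1], 20)             # two classes, 20 patches each
x = D[:, 3] + 0.05 * rng.normal(size=50)   # noisy copy of a class-0 patch
print(src_classify(D, labels, x))          # expected: 0
```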

  16. A study on facial expressions recognition

    NASA Astrophysics Data System (ADS)

    Xu, Jingjing

    2017-09-01

    In terms of communication, postures and facial expressions of feelings like happiness, anger and sadness play important roles in conveying information. With the development of technology, a number of algorithms dealing with face alignment, face landmark detection, classification, facial landmark localization and pose estimation have recently been put forward. However, many challenges and problems remain to be addressed. In this paper, several techniques for facial expression recognition and pose handling are summarized and analyzed: a pose-indexed multi-view method for face alignment, robust facial landmark detection under significant head pose and occlusion, partitioning of the input domain for classification, and robust statistical face frontalization.

  17. A classification of event sequences in the influence network

    NASA Astrophysics Data System (ADS)

    Walsh, James Lyons; Knuth, Kevin H.

    2017-06-01

    We build on the classification in [1] of event sequences in the influence network as respecting collinearity or not, so as to determine in future work what phenomena arise in each case. Collinearity enables each observer to uniquely associate each particle event of influencing with one of the observer's own events, even in the case of events of influencing the other observer. We further classify events as to whether they are spacetime events that obey, in the fine-grained case, the coarse-grained conditions of [2], finding that Newton's First and Second Laws of motion are obeyed at spacetime events. A proof of Newton's Third Law under particular circumstances is also presented.

  18. Evaluation of Hydrometeor Classification for Winter Mixed-Phase Precipitation Events

    NASA Astrophysics Data System (ADS)

    Hickman, B.; Troemel, S.; Ryzhkov, A.; Simmer, C.

    2016-12-01

    Hydrometeor classification algorithms (HCL) typically discriminate radar echoes into several classes including rain (light, medium, heavy), hail, dry snow, wet snow, ice crystals, graupel and rain-hail mixtures. Despite the strength of HCL for precipitation dominated by a single phase - especially warm-season classification - shortcomings exist for mixed-phase precipitation classification. Properly identifying mixed-phase precipitation can lead to more accurate precipitation estimates and better forecasts for aviation weather and ground warnings. Cold-season precipitation classification is also highly important due to its potentially high impact on society (e.g. black ice, ice accumulation, snow loads), but due to the varying nature of the hydrometeors - density, dielectric constant, shape - reliable classification via radar alone is not possible. With the addition of thermodynamic information about the atmosphere, either from weather models or sounding data, it has become possible to extend classification further into wintertime precipitation events. Yet inaccuracies still exist in separating the more benign events (ice pellets) from the more hazardous ones (freezing rain). We have investigated winter mixed-phase precipitation cases, including freezing rain, ice pellets, and rain-snow transitions, from several events in Germany in order to move towards reliable nowcasting of winter precipitation and thereby provide faster, more accurate wintertime warnings. All events have been confirmed to have the specified precipitation from ground reports. Classification of the events is achieved via a combination of inputs from a bulk-microphysics numerical weather prediction model and the German dual-polarimetric C-band radar network, fed into a 1D spectral bin microphysical model (SBC) which explicitly treats the processes of melting, refreezing, and ice nucleation to predict six near-surface precipitation types: rain, snow, freezing rain, ice pellets, rain/snow mixture, and freezing rain/pellet mixture. Evaluation of the classification is performed by means of disdrometer data, in-situ ground observations, and eye-witness reports from the European Severe Weather Database (ESWD). Additionally, a comparison to an existing radar-based HCL is performed as a sanity check and performance evaluator.

  19. Representation, Classification and Information Fusion for Robust and Efficient Multimodal Human States Recognition

    ERIC Educational Resources Information Center

    Li, Ming

    2013-01-01

    The goal of this work is to enhance the robustness and efficiency of the multimodal human states recognition task. Human states recognition can be considered as a joint term for identifying/verifying various kinds of human-related states, such as biometric identity, language spoken, age, gender, emotion, intoxication level, physical activity, vocal…

  20. Analysis of EEG-fMRI data in focal epilepsy based on automated spike classification and Signal Space Projection.

    PubMed

    Liston, Adam D; De Munck, Jan C; Hamandi, Khalid; Laufs, Helmut; Ossenblok, Pauly; Duncan, John S; Lemieux, Louis

    2006-07-01

    Simultaneous acquisition of EEG and fMRI data enables the investigation of the hemodynamic correlates of interictal epileptiform discharges (IEDs) during the resting state in patients with epilepsy. This paper addresses two issues: (1) the semi-automation of IED classification in statistical modelling for fMRI analysis and (2) the improvement of IED detection to increase experimental fMRI efficiency. For patients with multiple IED generators, sensitivity to IED-correlated BOLD signal changes can be improved when the fMRI analysis model distinguishes between IEDs of differing morphology and field. In an attempt to reduce the subjectivity of visual IED classification, we implemented a semi-automated system, based on the spatio-temporal clustering of EEG events. We illustrate the technique's usefulness using EEG-fMRI data from a subject with focal epilepsy in whom 202 IEDs were visually identified and then clustered semi-automatically into four clusters. Each cluster of IEDs was modelled separately for the purpose of fMRI analysis. This revealed IED-correlated BOLD activations in distinct regions corresponding to three different IED categories. In a second step, Signal Space Projection (SSP) was used to project the scalp EEG onto the dipoles corresponding to each IED cluster. This resulted in 123 previously unrecognised IEDs, the inclusion of which, in the General Linear Model (GLM), increased the experimental efficiency as reflected by significant BOLD activations. We have also shown that the detection of extra IEDs is robust in the face of fluctuations in the set of visually detected IEDs. We conclude that automated IED classification can result in more objective fMRI models of IEDs and significantly increased sensitivity.

  1. Exercise-Associated Collapse in Endurance Events: A Classification System.

    ERIC Educational Resources Information Center

    Roberts, William O.

    1989-01-01

    Describes a classification system devised for exercise-associated collapse in endurance events based on casualties observed at six Twin Cities Marathons. Major diagnostic criteria are body temperature and mental status. Management protocol includes fluid and fuel replacement, temperature correction, and leg cramp treatment. (Author/SM)

  2. Continuity and Discontinuity of Attachment from Infancy through Adolescence.

    ERIC Educational Resources Information Center

    Hamilton, Claire E.

    2000-01-01

    Examined relations between infant security of attachment, negative life events, and adolescent attachment classification in sample from the Family Lifestyles Project. Found that stability of attachment classification was 77 percent. Infant attachment classification predicted adolescent attachment classification. Found no differences between…

  3. A support vector machine approach for classification of welding defects from ultrasonic signals

    NASA Astrophysics Data System (ADS)

    Chen, Yuan; Ma, Hong-Wei; Zhang, Guang-Ming

    2014-07-01

    Defect classification is an important issue in ultrasonic non-destructive evaluation. A layered multi-class support vector machine (LMSVM) classification system, which combines multiple SVM classifiers through a layered architecture, is proposed in this paper. The proposed LMSVM classification system is applied to the classification of welding defects from ultrasonic test signals. The measured ultrasonic defect echo signals are first decomposed into wavelet coefficients by the wavelet packet transform. The energy of the wavelet coefficients at different frequency channels is used to construct the feature vectors. The bees algorithm (BA) is then used for feature selection and SVM parameter optimisation for the LMSVM classification system. The BA-based feature selection optimises the energy feature vectors. The optimised feature vectors are input to the LMSVM classification system for training and testing. Experimental results of classifying welding defects demonstrate that the proposed technique is highly robust, precise and reliable for ultrasonic defect classification.
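
    The feature construction described, energies of wavelet packet coefficients per frequency channel, can be sketched with PyWavelets as below. The bees-algorithm feature selection and the layered SVM architecture are omitted, and the synthetic "echo" signals merely stand in for measured ultrasonic data.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wp_energy_features(signal, wavelet="db4", level=4):
    """Energy of wavelet packet coefficients in each frequency channel."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    energies = np.array([np.sum(node.data ** 2)
                         for node in wp.get_level(level, order="freq")])
    return energies / energies.sum()        # normalise to unit total energy

# Hypothetical echoes: two defect types with different spectral content.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 512)
echo = lambda f: np.sin(2 * np.pi * f * t) * np.exp(-5 * t)
X = np.array([wp_energy_features(echo(f) + 0.1 * rng.normal(size=512))
              for f in [40] * 20 + [90] * 20])
y = np.array([0] * 20 + [1] * 20)

print(SVC(kernel="rbf").fit(X, y).score(X, y))
```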

  4. A new adaptive L1-norm for optimal descriptor selection of high-dimensional QSAR classification model for anti-hepatitis C virus activity of thiourea derivatives.

    PubMed

    Algamal, Z Y; Lee, M H

    2017-01-01

    A high-dimensional quantitative structure-activity relationship (QSAR) classification model typically contains a large number of irrelevant and redundant descriptors. In this paper, a new design of descriptor selection for the QSAR classification model estimation method is proposed by adding a new weight inside L1-norm. The experimental results of classifying the anti-hepatitis C virus activity of thiourea derivatives demonstrate that the proposed descriptor selection method in the QSAR classification model performs effectively and competitively compared with other existing penalized methods in terms of classification performance on both the training and the testing datasets. Moreover, it is noteworthy that the results obtained in terms of stability test and applicability domain provide a robust QSAR classification model. It is evident from the results that the developed QSAR classification model could conceivably be employed for further high-dimensional QSAR classification studies.
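
    A common way to realise an adaptive (weighted) L1 penalty with off-the-shelf tools is the rescaling trick: fit an initial model, weight each descriptor by its initial coefficient magnitude, then run a standard L1 fit on the rescaled features. The sketch below shows that generic construction only; the paper's specific weight design, and the QSAR descriptors themselves, are not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy stand-in for a high-dimensional QSAR matrix: 300 descriptors.
X, y = make_classification(n_samples=150, n_features=300,
                           n_informative=20, random_state=0)

# Step 1: initial ridge fit supplies per-descriptor weights.
ridge = LogisticRegression(penalty="l2", max_iter=2000).fit(X, y)
w = np.abs(ridge.coef_.ravel()) + 1e-6   # assumed weight form (adaptive lasso)

# Step 2: L1 fit on rescaled features = descriptor-specific penalties.
lasso = LogisticRegression(penalty="l1", C=0.5, solver="liblinear")
lasso.fit(X * w, y)
beta = lasso.coef_.ravel() * w           # map back to the original scale
print("descriptors kept:", int(np.sum(beta != 0)), "of", X.shape[1])
```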

  5. Robust location and spread measures for nonparametric probability density function estimation.

    PubMed

    López-Rubio, Ezequiel

    2009-10-01

    Robustness against outliers is a desirable property of any unsupervised learning scheme. In particular, probability density estimators benefit from incorporating this feature. A possible strategy to achieve this goal is to substitute the sample mean and the sample covariance matrix by more robust location and spread estimators. Here we use the L1-median to develop a nonparametric probability density function (PDF) estimator. We prove its most relevant properties, and we show its performance in density estimation and classification applications.
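
    The L1-median named in the abstract is the geometric median, conventionally computed by Weiszfeld's fixed-point iteration. A self-contained sketch (the tolerances and the outlier demo are arbitrary choices):

```python
import numpy as np

def l1_median(X, tol=1e-8, max_iter=500):
    """Geometric (L1) median via Weiszfeld's iteration: a robust location
    estimate that, unlike the mean, is barely moved by gross outliers."""
    y = X.mean(axis=0)                        # start from the sample mean
    for _ in range(max_iter):
        d = np.linalg.norm(X - y, axis=1)
        d = np.maximum(d, 1e-12)              # avoid division by zero
        y_new = (X / d[:, None]).sum(axis=0) / (1.0 / d).sum()
        if np.linalg.norm(y_new - y) < tol:
            break
        y = y_new
    return y

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
X[:5] += 50.0                                 # five gross outliers
print("mean:     ", X.mean(axis=0))           # dragged toward the outliers
print("L1-median:", l1_median(X))             # stays near the origin
```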

  6. Gender classification under extended operating conditions

    NASA Astrophysics Data System (ADS)

    Rude, Howard N.; Rizki, Mateen

    2014-06-01

    Gender classification is a critical component of a robust image security system. Many techniques exist to perform gender classification using facial features. In contrast, this paper explores gender classification using body features extracted from clothed subjects. Several of the most effective types of features for gender classification identified in literature were implemented and applied to the newly developed Seasonal Weather And Gender (SWAG) dataset. SWAG contains video clips of approximately 2000 samples of human subjects captured over a period of several months. The subjects are wearing casual business attire and outer garments appropriate for the specific weather conditions observed in the Midwest. The results from a series of experiments are presented that compare the classification accuracy of systems that incorporate various types and combinations of features applied to multiple looks at subjects at different image resolutions to determine a baseline performance for gender classification.

  7. Event-Based Robust Control for Uncertain Nonlinear Systems Using Adaptive Dynamic Programming.

    PubMed

    Zhang, Qichao; Zhao, Dongbin; Wang, Ding

    2018-01-01

    In this paper, the robust control problem for a class of continuous-time nonlinear systems with unmatched uncertainties is investigated using an event-based control method. First, the robust control problem is transformed into a corresponding optimal control problem with an augmented control and an appropriate cost function. Under the event-based mechanism, we prove that the solution of the optimal control problem can asymptotically stabilize the uncertain system with an adaptive triggering condition. That is, the designed event-based controller is robust to the original uncertain system. Note that the event-based controller is updated only when the triggering condition is satisfied, which saves communication resources between the plant and the controller. Then, a single-network adaptive dynamic programming structure with an experience replay technique is constructed to approximate the optimal control policies. The stability of the closed-loop system with the event-based control policy and the augmented control policy is analyzed using the Lyapunov approach. Furthermore, we prove that the minimal intersample time is bounded by a nonzero positive constant, which excludes Zeno behavior during the learning process. Finally, two simulation examples are provided to demonstrate the effectiveness of the proposed control scheme.
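
    The event-based mechanism can be illustrated on a toy linear system: the control input is recomputed only when the gap between the state at the last triggering instant and the current state violates a triggering condition, and is held constant in between. In the sketch below the stabilising gain K, the relative-error trigger, and the plant matrices are all assumptions; the paper's ADP/critic-network approximation of the optimal policy is omitted.

```python
import numpy as np

# Toy plant x' = Ax + Bu with an assumed stabilising state-feedback gain K.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
K = np.array([[1.0, 1.5]])

dt, T = 0.001, 10.0
x = np.array([1.0, -0.5])
x_k = x.copy()                 # state sampled at the last triggering instant
updates = 0
for _ in range(int(T / dt)):
    gap = np.linalg.norm(x_k - x)
    if gap > 0.1 * np.linalg.norm(x):       # simple relative triggering rule
        x_k = x.copy()                      # event: resample the state ...
        updates += 1                        # ... and recompute the control
    u = -(K @ x_k)                          # control held between events
    x = x + dt * (A @ x + B @ u)            # Euler integration step
print("control updates:", updates, "of", int(T / dt), "steps,",
      "final |x| =", round(float(np.linalg.norm(x)), 4))
```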

  8. Teaching Sexual History-Taking Skills Using the Sexual Events Classification System

    ERIC Educational Resources Information Center

    Fidler, Donald C.; Petri, Justin Daniel; Chapman, Mark

    2010-01-01

    Objective: The authors review the literature about educational programs for teaching sexual history-taking skills and describe novel techniques for teaching these skills. Methods: Psychiatric residents enrolled in a brief sexual history-taking course that included instruction on the Sexual Events Classification System, feedback on residents'…

  9. An integration of minimum local feature representation methods to recognize large variation of foods

    NASA Astrophysics Data System (ADS)

    Razali, Mohd Norhisham bin; Manshor, Noridayu; Halin, Alfian Abdul; Mustapha, Norwati; Yaakob, Razali

    2017-10-01

    Local invariant features have been shown to be successful in describing object appearances for image classification tasks. Such features are robust towards occlusion and clutter and are also invariant against scale and orientation changes. This makes them suitable for classification tasks with little inter-class similarity and large intra-class difference. In this paper, we propose an integrated representation of the Speeded-Up Robust Feature (SURF) and Scale Invariant Feature Transform (SIFT) descriptors, using a late fusion strategy. The proposed representation is used for food recognition from a dataset of food images with complex appearance variations. The Bag of Features (BOF) approach is employed to enhance the discriminative ability of the local features. Firstly, the individual local features are extracted to construct two kinds of visual vocabularies, representing SURF and SIFT. The visual vocabularies are then concatenated and fed into a Linear Support Vector Machine (SVM) to classify the respective food categories. Experimental results demonstrate impressive overall recognition at 82.38% classification accuracy on the challenging UEC-Food100 dataset.
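
    A late-fusion bag-of-features pipeline of this kind builds one visual vocabulary per descriptor type and concatenates the per-image histograms before the SVM. In the sketch below, random arrays stand in for real SURF (64-d) and SIFT (128-d) descriptor sets, since actual extraction would require OpenCV (and SURF needs the non-free contrib build); vocabulary sizes and dimensions are illustrative.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.svm import LinearSVC

def bof_hist(desc, vocab):
    """Quantise one image's local descriptors into a normalised BoF histogram."""
    words = vocab.predict(desc)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
# Stand-ins for per-image (SURF-like 64-d, SIFT-like 128-d) descriptor sets;
# class-1 images get a shifted descriptor distribution.
imgs = [(rng.normal(c, 1, (60, 64)), rng.normal(c, 1, (60, 128)))
        for c in [0] * 25 + [1] * 25]
y = np.array([0] * 25 + [1] * 25)

v_surf = MiniBatchKMeans(n_clusters=50, random_state=0).fit(
    np.vstack([s for s, _ in imgs]))
v_sift = MiniBatchKMeans(n_clusters=50, random_state=0).fit(
    np.vstack([s for _, s in imgs]))

# Late fusion: concatenate the two vocabularies' histograms per image.
X = np.array([np.concatenate([bof_hist(s, v_surf), bof_hist(t, v_sift)])
              for s, t in imgs])
print(LinearSVC().fit(X, y).score(X, y))
```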

  10. Hierarchical classification of dynamically varying radar pulse repetition interval modulation patterns.

    PubMed

    Kauppi, Jukka-Pekka; Martikainen, Kalle; Ruotsalainen, Ulla

    2010-12-01

    The central purpose of passive signal intercept receivers is to perform automatic categorization of unknown radar signals. Currently, there is an urgent need to develop intelligent classification algorithms for these devices due to emerging complexity of radar waveforms. Especially multifunction radars (MFRs) capable of performing several simultaneous tasks by utilizing complex, dynamically varying scheduled waveforms are a major challenge for automatic pattern classification systems. To assist recognition of complex radar emissions in modern intercept receivers, we have developed a novel method to recognize dynamically varying pulse repetition interval (PRI) modulation patterns emitted by MFRs. We use robust feature extraction and classifier design techniques to assist recognition in unpredictable real-world signal environments. We classify received pulse trains hierarchically which allows unambiguous detection of the subpatterns using a sliding window. Accuracy, robustness and reliability of the technique are demonstrated with extensive simulations using both static and dynamically varying PRI modulation patterns. Copyright © 2010 Elsevier Ltd. All rights reserved.

  11. Broken Robustness Analysis: How to make proper climate change conclusions in contradictory multimodal measurement contexts.

    NASA Astrophysics Data System (ADS)

    Keyser, V.

    2015-12-01

    Philosophers of science discuss how multiple modes of measurement can generate evidence for the existence and character of a phenomenon (Horwich 1982; Hacking 1983; Franklin and Howson 1984; Collins 1985; Sober 1989; Trout 1993; Culp 1995; Keeley 2002; Staley 2004; Weber 2005; Keyser 2012). But how can this work systematically in climate change measurement? Additionally, what conclusions can scientists and policy-makers draw when different modes of measurement fail to be robust by producing contradictory results? First, I present a new technical account of robust measurement (RAMP) that focuses on the physical independence of measurement processes. I detail how physically independent measurement processes "check each other's results." (This account is in contrast to philosophical accounts of robustness analysis that focus on independent model assumptions or independent measurement products or results.) Second, I present a puzzle about contradictory and divergent climate change measures, which has consistently re-emerged in climate measurement. This discussion will focus on land, drilling, troposphere, and computer simulation measures. Third, to systematically solve this climate measurement puzzle, I use RAMP in the context of drought measurement in order to generate a classification of measurement processes. Here, I discuss how multimodal precipitation measures—e.g., measures of precipitation deficit like the Standard Precipitation Index vs. air humidity measures like the Standardized Relative Humidity Index--can help with the classification scheme of climate change measurement processes. Finally, I discuss how this classification of measures can help scientists and policy-makers draw effective conclusions in contradictory multimodal climate change measurement contexts.

  12. A Novel Wearable Sensor-Based Human Activity Recognition Approach Using Artificial Hydrocarbon Networks.

    PubMed

    Ponce, Hiram; Martínez-Villaseñor, María de Lourdes; Miralles-Pechuán, Luis

    2016-07-05

    Human activity recognition has gained more interest in several research communities given that understanding user activities and behavior helps to deliver proactive and personalized services. There are many examples of health systems improved by human activity recognition. Nevertheless, the human activity recognition classification process is not an easy task. Different types of noise in wearable sensor data frequently hamper the human activity recognition classification process. In order to develop a successful activity recognition system, it is necessary to use stable and robust machine learning techniques capable of dealing with noisy data. In this paper, we present the artificial hydrocarbon networks (AHN) technique to the human activity recognition community. Our novel artificial hydrocarbon networks approach is suitable for physical activity recognition, tolerant of noise from corrupted data sensors, and robust with respect to different issues in sensor data. We show that the AHN classifier is very competitive for physical activity recognition and very robust in comparison with other well-known machine learning methods.

  13. Classification and Space-Time Analysis of Precipitation Events in Manizales, Caldas, Colombia.

    NASA Astrophysics Data System (ADS)

    Suarez Hincapie, J. N.; Vélez, J.; Romo Melo, L.; Chang, P.

    2015-12-01

    Manizales is a mid-mountain Andean city located near the Nevado del Ruiz volcano in west-central Colombia; this location exposes it to earthquakes, floods, landslides and volcanic eruptions. It is located in the intertropical convergence zone (ITCZ) and presents a climate with a bimodal rainfall regime (Cortés, 2010). Its mean annual rainfall is 2000 mm, and precipitation may be observed on 70% of the days in a year. This rain, which favors the formation of large cloud masses and is associated with macroclimatic phenomena such as the "El Niño Southern Oscillation", has historically caused great impacts in the region (Vélez et al, 2012). For example, the geographical location coupled with rain events results in a high risk of landslides in the city. Manizales has a hydrometeorological network of 40 stations that measure and transmit data for up to eight climate variables. Some of these stations keep 10 years of historical data. However, until now this information has not been used for space-time classification of precipitation events, nor have the meteorological variables that influence them been thoroughly researched. The purpose of this study was to classify historical rain events in an urban area of Manizales and investigate patterns of atmospheric behavior that influence or trigger such events. Classification of events was performed by calculating the "n" index of heavy rainfall, describing the behavior of precipitation as a function of time throughout the event (Monjo, 2009). The analysis of meteorological variables was performed using statistical quantification over variable time periods before each event. The proposed classification allowed for an analysis of the evolution of rainfall events. Specifically, it helped reveal the influence of different meteorological variables in triggering rainfall events in hazardous areas such as the city of Manizales.

  14. Cloud-Scale Genomic Signals Processing for Robust Large-Scale Cancer Genomic Microarray Data Analysis.

    PubMed

    Harvey, Benjamin Simeon; Ji, Soo-Yeon

    2017-01-01

    As microarray data available to scientists continues to increase in size and complexity, it has become overwhelmingly important to find multiple ways to bring forth oncological inference to the bioinformatics community through the analysis of large-scale cancer genomic (LSCG) DNA and mRNA microarray data that is useful to scientists. Though there have been many attempts to elucidate the issue of bringing forth biological interpretation by means of wavelet preprocessing and classification, there has not been a research effort that focuses on a cloud-scale distributed parallel (CSDP) separable 1-D wavelet decomposition technique for denoising through differential expression thresholding and classification of LSCG microarray data. This research presents a novel methodology that utilizes a CSDP separable 1-D method for wavelet-based transformation in order to initialize a threshold which will retain significantly expressed genes through the denoising process for robust classification of cancer patients. Additionally, the overall study was implemented and encompassed within CSDP environment. The utilization of cloud computing and wavelet-based thresholding for denoising was used for the classification of samples within the Global Cancer Map, Cancer Cell Line Encyclopedia, and The Cancer Genome Atlas. The results proved that separable 1-D parallel distributed wavelet denoising in the cloud and differential expression thresholding increased the computational performance and enabled the generation of higher quality LSCG microarray datasets, which led to more accurate classification results.
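
    The core signal-processing step described, wavelet decomposition followed by thresholding before reconstruction, can be sketched as follows. The universal (Donoho-Johnstone) threshold used here is a standard textbook stand-in for the paper's differential-expression threshold, and the cloud-scale distribution is omitted entirely.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=3):
    """1-D wavelet denoising by soft-thresholding the detail coefficients."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise scale (MAD)
    thr = sigma * np.sqrt(2 * np.log(len(signal)))       # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[:len(signal)]

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 6 * np.pi, 1024))
noisy = clean + 0.3 * rng.normal(size=1024)
print("noise std before:", round(float(np.std(noisy - clean)), 3),
      "after:", round(float(np.std(wavelet_denoise(noisy) - clean)), 3))
```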

  15. Joint Concept Correlation and Feature-Concept Relevance Learning for Multilabel Classification.

    PubMed

    Zhao, Xiaowei; Ma, Zhigang; Li, Zhi; Li, Zhihui

    2018-02-01

    In recent years, multilabel classification has attracted significant attention in multimedia annotation. However, most multilabel classification methods focus only on the inherent correlations existing among multiple labels and concepts and ignore the relevance between features and the target concepts. To obtain more robust multilabel classification results, we propose a new multilabel classification method aiming to capture the correlations among multiple concepts by leveraging hypergraphs, which have been proved to be beneficial for relational learning. Moreover, we consider mining feature-concept relevance, which is often overlooked by many multilabel learning algorithms. To better expose the feature-concept relevance, we impose a sparsity constraint on the proposed method. We compare the proposed method with several other multilabel classification methods and evaluate the classification performance by mean average precision on several data sets. The experimental results show that the proposed method outperforms the state-of-the-art methods.

  16. Event-Related fMRI of Category Learning: Differences in Classification and Feedback Networks

    ERIC Educational Resources Information Center

    Little, Deborah M.; Shin, Silvia S.; Sisco, Shannon M.; Thulborn, Keith R.

    2006-01-01

    Eighteen healthy young adults underwent event-related (ER) functional magnetic resonance imaging (fMRI) of the brain while performing a visual category learning task. The specific category learning task required subjects to extract the rules that guide classification of quasi-random patterns of dots into categories. Following each classification…

  17. Cell communities and robustness in development.

    PubMed

    Monk, N A

    1997-11-01

    The robustness of patterning events in development is a key feature that must be accounted for in proposed models of these events. When considering explicitly cellular systems, robustness can be exhibited at different levels of organization. Consideration of two widespread patterning mechanisms suggests that robustness at the level of cell communities can result from variable development at the level of individual cells; models of these mechanisms show how interactions between participating cells guarantee community-level robustness. Cooperative interactions enhance homogeneity within communities of like cells and the sharpness of boundaries between communities of distinct cells, while competitive interactions amplify small inhomogeneities within communities of initially equivalent cells, resulting in fine-grained patterns of cell specialization.

  18. Classification and definition of misuse, abuse, and related events in clinical trials: ACTTION systematic review and recommendations.

    PubMed

    Smith, Shannon M; Dart, Richard C; Katz, Nathaniel P; Paillard, Florence; Adams, Edgar H; Comer, Sandra D; Degroot, Aldemar; Edwards, Robert R; Haddox, J David; Jaffe, Jerome H; Jones, Christopher M; Kleber, Herbert D; Kopecky, Ernest A; Markman, John D; Montoya, Ivan D; O'Brien, Charles; Roland, Carl L; Stanton, Marsha; Strain, Eric C; Vorsanger, Gary; Wasan, Ajay D; Weiss, Roger D; Turk, Dennis C; Dworkin, Robert H

    2013-11-01

    As the nontherapeutic use of prescription medications escalates, serious associated consequences have also increased. This makes it essential to estimate misuse, abuse, and related events (MAREs) in the development and postmarketing adverse event surveillance and monitoring of prescription drugs accurately. However, classifications and definitions to describe prescription drug MAREs differ depending on the purpose of the classification system, may apply to single events or ongoing patterns of inappropriate use, and are not standardized or systematically employed, thereby complicating the ability to assess MARE occurrence adequately. In a systematic review of existing prescription drug MARE terminology and definitions from consensus efforts, review articles, and major institutions and agencies, MARE terms were often defined inconsistently or idiosyncratically, or had definitions that overlapped with other MARE terms. The Analgesic, Anesthetic, and Addiction Clinical Trials, Translations, Innovations, Opportunities, and Networks (ACTTION) public-private partnership convened an expert panel to develop mutually exclusive and exhaustive consensus classifications and definitions of MAREs occurring in clinical trials of analgesic medications to increase accuracy and consistency in characterizing their occurrence and prevalence in clinical trials. The proposed ACTTION classifications and definitions are designed as a first step in a system to adjudicate MAREs that occur in analgesic clinical trials and postmarketing adverse event surveillance and monitoring, which can be used in conjunction with other methods of assessing a treatment's abuse potential. Copyright © 2013 International Association for the Study of Pain. All rights reserved.

  19. Machine Learning for Biological Trajectory Classification Applications

    NASA Technical Reports Server (NTRS)

    Sbalzarini, Ivo F.; Theriot, Julie; Koumoutsakos, Petros

    2002-01-01

    Machine-learning techniques, including clustering algorithms, support vector machines and hidden Markov models, are applied to the task of classifying trajectories of moving keratocyte cells. The different algorithms are compared to each other as well as to expert and non-expert test persons, using concepts from signal-detection theory. The algorithms performed very well as compared to humans, suggesting a robust tool for trajectory classification in biological applications.

  20. A Robust Step Detection Algorithm and Walking Distance Estimation Based on Daily Wrist Activity Recognition Using a Smart Band.

    PubMed

    Trong Bui, Duong; Nguyen, Nhan Duc; Jeong, Gu-Min

    2018-06-25

    Human activity recognition and pedestrian dead reckoning are interesting fields because of their important utility in daily-life healthcare. Currently, these fields face many challenges, one of which is the lack of a robust, high-performance algorithm. This paper proposes a new method to implement a robust step detection and adaptive distance estimation algorithm based on the classification of five daily wrist activities during walking at various speeds using a smart band. The key idea is that the non-parametric adaptive distance estimator is performed after two activity classifiers and a robust step detector. In this study, two classifiers perform two phases of recognizing five wrist activities during walking. Then, a robust step detection algorithm, integrated with an adaptive threshold and a peak and valley correction algorithm, is applied to the classified activities to detect the walking steps. In addition, misclassified activities are fed back to the previous layer. Finally, three adaptive distance estimators, based on a non-parametric model of the average walking speed, calculate the length of each stride. The experimental results show that the average classification accuracy is about 99%, and the accuracy of the step detection is 98.7%. The error of the estimated distance is 2.2-4.2%, depending on the type of wrist activity.
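
    A heavily simplified sketch of the step-detection stage: peaks in the accelerometer magnitude count as steps only if they exceed an adaptive threshold and are separated by a minimum interval. The window length, threshold rule, and synthetic walking signal are all assumptions standing in for the paper's adaptive threshold with peak and valley correction.

```python
import numpy as np
from scipy.signal import find_peaks

def detect_steps(acc_mag, fs=50.0):
    """Step detection with an adaptive peak threshold on |acceleration|."""
    win = int(2 * fs)                        # 2 s sliding window
    pad = np.pad(acc_mag, (win // 2, win - win // 2), mode="edge")
    mov_mean = np.convolve(pad, np.ones(win) / win, mode="valid")[:len(acc_mag)]
    thr = mov_mean + 0.5 * acc_mag.std()     # adaptive height threshold
    # Steps must be at least 0.3 s apart (~ fast walking cadence).
    peaks, _ = find_peaks(acc_mag, height=thr, distance=int(0.3 * fs))
    return peaks

rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / 50)                              # 10 s at 50 Hz
acc = 1.0 + np.sin(2 * np.pi * 2 * t) + 0.1 * rng.normal(size=len(t))
print("steps detected:", len(detect_steps(acc)))          # ~2 steps/s expected
```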

  1. A bio-inspired system for spatio-temporal recognition in static and video imagery

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Moore, Christopher K.; Chelian, Suhas

    2007-04-01

    This paper presents a bio-inspired method for spatio-temporal recognition in static and video imagery. It builds upon and extends our previous work on a bio-inspired Visual Attention and object Recognition System (VARS). The VARS approach locates and recognizes objects in a single frame. This work presents two extensions of VARS. The first extension is a Scene Recognition Engine (SCE) that learns to recognize spatial relationships between objects that compose a particular scene category in static imagery. This could be used for recognizing the category of a scene, e.g., an office versus a kitchen scene. The second extension is the Event Recognition Engine (ERE) that recognizes spatio-temporal sequences or events in sequences. This extension uses a working memory model to recognize events and behaviors in video imagery by maintaining and recognizing ordered spatio-temporal sequences. The working memory model is based on an ARTSTORE neural network that combines an ART-based neural network with a cascade of sustained temporal order recurrent (STORE) neural networks. A series of Default ARTMAP classifiers ascribes event labels to these sequences. Our preliminary studies have shown that this extension is robust to variations in an object's motion profile. We evaluated the performance of the SCE and ERE on real datasets. The SCE module was tested on a visual scene classification task using the LabelMe dataset. The ERE was tested on real-world video footage of vehicles and pedestrians in a street scene. Our system is able to recognize the events in this footage involving vehicles and pedestrians.

  2. An assessment of the effectiveness of a random forest classifier for land-cover classification

    NASA Astrophysics Data System (ADS)

    Rodriguez-Galiano, V. F.; Ghimire, B.; Rogan, J.; Chica-Olmo, M.; Rigol-Sanchez, J. P.

    2012-01-01

    Land cover monitoring using remotely sensed data requires robust classification methods which allow for the accurate mapping of complex land cover and land use categories. Random forest (RF) is a powerful machine learning classifier that is relatively unknown in land remote sensing and has not been evaluated thoroughly by the remote sensing community compared to more conventional pattern recognition techniques. Key advantages of RF include: its non-parametric nature; high classification accuracy; and capability to determine variable importance. However, the split rules for classification are unknown, so RF can be considered a black-box type of classifier. RF provides an algorithm for estimating missing values and the flexibility to perform several types of data analysis, including regression, classification, survival analysis, and unsupervised learning. In this paper, the performance of the RF classifier for land cover classification of a complex area is explored. Evaluation was based on several criteria: mapping accuracy, and sensitivity to data set size and noise. Landsat-5 Thematic Mapper data captured in European spring and summer were used with auxiliary variables derived from a digital terrain model to classify 14 different land categories in the south of Spain. Results show that the RF algorithm yields accurate land cover classifications, with 92% overall accuracy and a Kappa index of 0.92. RF is robust to training data reduction and noise, because significant differences in kappa values were only observed for data reduction and noise addition values greater than 50% and 20%, respectively. Additionally, variables that RF identified as most important for classifying land cover coincided with expectations. A McNemar test indicates an overall better performance of the random forest model over a single decision tree at the 0.00001 significance level.
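
    For readers unfamiliar with RF in this setting, a minimal scikit-learn sketch showing the classifier, its out-of-bag accuracy, and the variable-importance ranking the abstract highlights. The synthetic 13-feature matrix merely stands in for Landsat TM bands plus DEM-derived terrain variables.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for spectral bands + terrain derivatives across 5 land categories.
X, y = make_classification(n_samples=2000, n_features=13, n_informative=8,
                           n_classes=5, n_clusters_per_class=1, random_state=0)

rf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
rf.fit(X, y)
print("OOB accuracy:", round(rf.oob_score_, 3))
print("CV accuracy: ", round(cross_val_score(rf, X, y, cv=5).mean(), 3))
# Variable importance, one of the advantages noted in the abstract:
print("top features:", np.argsort(rf.feature_importances_)[::-1][:3])
```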

  3. (Machine) learning to do more with less

    NASA Astrophysics Data System (ADS)

    Cohen, Timothy; Freytsis, Marat; Ostdiek, Bryan

    2018-02-01

    Determining the best method for training a machine learning algorithm is critical to maximizing its ability to classify data. In this paper, we compare the standard "fully supervised" approach (which relies on knowledge of event-by-event truth-level labels) with a recent proposal that instead utilizes class ratios as the only discriminating information provided during training. This so-called "weakly supervised" technique has access to less information than the fully supervised method and yet is still able to yield impressive discriminating power. In addition, weak supervision seems particularly well suited to particle physics since quantum mechanics is incompatible with the notion of mapping an individual event onto any single Feynman diagram. We examine the technique in detail — both analytically and numerically — with a focus on the robustness to issues of mischaracterizing the training samples. Weakly supervised networks turn out to be remarkably insensitive to a class of systematic mismodeling. Furthermore, we demonstrate that the event level outputs for weakly versus fully supervised networks are probing different kinematics, even though the numerical quality metrics are essentially identical. This implies that it should be possible to improve the overall classification ability by combining the output from the two types of networks. For concreteness, we apply this technology to a signature of beyond the Standard Model physics to demonstrate that all these impressive features continue to hold in a scenario of relevance to the LHC. Example code is provided on GitHub.
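
    Learning from class proportions can be demonstrated with a toy logistic model trained so that its mean predicted signal fraction in each "mixed sample" matches the known fraction, with no event-level labels at all. This only illustrates the weak-supervision idea, not the paper's networks; the data generator, loss, and learning rate are arbitrary.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

def make_batch(n, frac):
    """Unlabeled sample whose signal fraction is known only in aggregate."""
    y = rng.random(n) < frac
    X = rng.normal(size=(n, 2))
    X[:, 0] += np.where(y, 1.0, -1.0)    # only feature 0 is discriminative
    return X

batches = [(make_batch(5000, 0.2), 0.2), (make_batch(5000, 0.7), 0.7)]

# Gradient descent on the proportion-matching loss sum_b (mean(p_b) - f_b)^2.
w, b = np.zeros(2), 0.0
for _ in range(5000):
    gw, gb = np.zeros(2), 0.0
    for X, frac in batches:
        p = sigmoid(X @ w + b)
        diff = 2.0 * (p.mean() - frac)
        s = p * (1 - p)                  # derivative of the sigmoid
        gw += diff * (s @ X) / len(X)
        gb += diff * s.mean()
    w, b = w - 1.0 * gw, b - 1.0 * gb

for X, frac in batches:
    print("predicted fraction %.2f vs true %.2f" % (sigmoid(X @ w + b).mean(), frac))
```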

  4. Robust Covariate-Adjusted Log-Rank Statistics and Corresponding Sample Size Formula for Recurrent Events Data

    PubMed Central

    Song, Rui; Kosorok, Michael R.; Cai, Jianwen

    2009-01-01

    Summary Recurrent events data are frequently encountered in clinical trials. This article develops robust covariate-adjusted log-rank statistics applied to recurrent events data with arbitrary numbers of events under independent censoring and the corresponding sample size formula. The proposed log-rank tests are robust with respect to different data-generating processes and are adjusted for predictive covariates. It reduces to the Kong and Slud (1997, Biometrika 84, 847–862) setting in the case of a single event. The sample size formula is derived based on the asymptotic normality of the covariate-adjusted log-rank statistics under certain local alternatives and a working model for baseline covariates in the recurrent event data context. When the effect size is small and the baseline covariates do not contain significant information about event times, it reduces to the same form as that of Schoenfeld (1983, Biometrics 39, 499–503) for cases of a single event or independent event times within a subject. We carry out simulations to study the control of type I error and the comparison of powers between several methods in finite samples. The proposed sample size formula is illustrated using data from an rhDNase study. PMID:18162107

  5. Using FDA reports to inform a classification for health information technology safety problems

    PubMed Central

    Ong, Mei-Sing; Runciman, William; Coiera, Enrico

    2011-01-01

    Objective To expand an emerging classification for problems with health information technology (HIT) using reports submitted to the US Food and Drug Administration Manufacturer and User Facility Device Experience (MAUDE) database. Design HIT events submitted to MAUDE were retrieved using a standardized search strategy. Using an emerging classification with 32 categories of HIT problems, a subset of relevant events were iteratively analyzed to identify new categories. Two coders then independently classified the remaining events into one or more categories. Free-text descriptions were analyzed to identify the consequences of events. Measurements Descriptive statistics by number of reported problems per category and by consequence; inter-rater reliability analysis using the κ statistic for the major categories and consequences. Results A search of 899 768 reports from January 2008 to July 2010 yielded 1100 reports about HIT. After removing duplicate and unrelated reports, 678 reports describing 436 events remained. The authors identified four new categories to describe problems with software functionality, system configuration, interface with devices, and network configuration; the authors' classification with 32 categories of HIT problems was expanded by the addition of these four categories. Examination of the 436 events revealed 712 problems, 96% were machine-related, and 4% were problems at the human–computer interface. Almost half (46%) of the events related to hazardous circumstances. Of the 46 events (11%) associated with patient harm, four deaths were linked to HIT problems (0.9% of 436 events). Conclusions Only 0.1% of the MAUDE reports searched were related to HIT. Nevertheless, Food and Drug Administration reports did prove to be a useful new source of information about the nature of software problems and their safety implications with potential to inform strategies for safe design and implementation. PMID:21903979

  6. CLustre: semi-automated lineament clustering for palaeo-glacial reconstruction

    NASA Astrophysics Data System (ADS)

    Smith, Mike; Anders, Niels; Keesstra, Saskia

    2016-04-01

    Palaeo-glacial reconstructions, or "inversions", using evidence from the palimpsest landscape are increasingly being undertaken with larger and larger databases. Predominant in landform evidence is the lineament (or drumlin), where the biggest datasets number in excess of 50,000 individual forms. One stage in the inversion process requires the identification of lineaments that are generically similar and then their subsequent interpretation into a coherent chronology of events. Here we present CLustre, a semi-automated algorithm that clusters lineaments using a locally adaptive, region-growing method. This is initially tested using 1,500 model runs on a synthetic dataset, before application to two case studies (where manual clustering has been undertaken by independent researchers): (1) Dubawnt Lake, Canada and (2) Victoria Island, Canada. Results using the synthetic data show that classifications are robust in most scenarios, although specific cases of cross-cutting lineaments may lead to incorrect clusters. Application to the case studies showed a very good match to existing published work, with differences related to limited numbers of unclassified lineaments and parallel cross-cutting lineaments. The value of CLustre comes from the semi-automated, objective application of a classification method that is repeatable. Once classified, summary statistics of lineament groups can be calculated and then used in the inversion.

  7. A nonlinear heartbeat dynamics model approach for personalized emotion recognition.

    PubMed

    Valenza, Gaetano; Citi, Luca; Lanatà, Antonio; Scilingo, Enzo Pasquale; Barbieri, Riccardo

    2013-01-01

    Emotion recognition based on autonomic nervous system signs is one of the ambitious goals of affective computing. It is well accepted that standard signal processing techniques require relatively long time series of multivariate records to ensure reliability and robustness of recognition and classification algorithms. In this work, we present a novel methodology able to assess cardiovascular dynamics during short-time (i.e. < 10 seconds) affective stimuli, thus overcoming some of the limitations of current emotion recognition approaches. We developed a personalized, fully parametric probabilistic framework based on point-process theory in which heartbeat events are modelled using a second-order nonlinear autoregressive integrative structure, in order to achieve effective performance in short-time affective assessment. Experimental results show a comprehensive emotional characterization of 4 subjects undergoing passive affective elicitation using a sequence of standardized images gathered from the International Affective Picture System (IAPS). Each picture was identified by the IAPS arousal and valence scores as well as by a self-reported emotional label associating a subjective positive or negative emotion. Results show a clear classification of two defined levels of arousal, valence and self-reported emotional state using features derived from the instantaneous spectrum and bispectrum of the considered RR intervals, reaching up to 90% recognition accuracy.

  8. Waveform classification and statistical analysis of seismic precursors to the July 2008 Vulcanian Eruption of Soufrière Hills Volcano, Montserrat

    NASA Astrophysics Data System (ADS)

    Rodgers, Mel; Smith, Patrick; Pyle, David; Mather, Tamsin

    2016-04-01

    Understanding the transition between quiescence and eruption at dome-forming volcanoes, such as Soufrière Hills Volcano (SHV), Montserrat, is important for monitoring volcanic activity during long-lived eruptions. Statistical analysis of seismic events (e.g. spectral analysis and identification of multiplets via cross-correlation) can be useful for characterising seismicity patterns and can be a powerful tool for analysing temporal changes in behaviour. Waveform classification is crucial for volcano monitoring, but consistent classification, both during real-time analysis and for retrospective analysis of previous volcanic activity, remains a challenge. Automated classification allows consistent re-classification of events. We present a machine learning (random forest) approach to rapidly classify waveforms that requires minimal training data. We analyse the seismic precursors to the July 2008 Vulcanian explosion at SHV and show systematic changes in frequency content and multiplet behaviour that had not previously been recognised. These precursory patterns of seismicity may be interpreted as changes in pressure conditions within the conduit during magma ascent and could be linked to magma flow rates. Frequency analysis of the different waveform classes supports the growing consensus that LP and Hybrid events should be considered end members of a continuum of low-frequency source processes. By using both supervised and unsupervised machine-learning methods we investigate the nature of waveform classification and assess current classification schemes.
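
    A minimal sketch of the approach described above, assuming band-limited spectral power as the waveform features and scikit-learn's random forest as the classifier; the frequency bands, class names and placeholder traces are illustrative, not the authors' configuration.

```python
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier

def spectral_features(waveform, fs=100.0,
                      bands=((0.5, 2), (2, 5), (5, 10), (10, 20))):
    """Fraction of spectral power in each band (hypothetical feature set)."""
    f, pxx = welch(waveform, fs=fs, nperseg=256)
    return [pxx[(f >= lo) & (f < hi)].sum() / pxx.sum() for lo, hi in bands]

# Placeholder labelled traces; real inputs would be event waveforms.
rng = np.random.default_rng(0)
X_train = [spectral_features(rng.standard_normal(3000)) for _ in range(20)]
y_train = ["LP", "Hybrid"] * 10

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
```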

  9. Weakly supervised classification in high energy physics

    DOE PAGES

    Dery, Lucio Mwinmaarong; Nachman, Benjamin; Rubbo, Francesco; ...

    2017-05-01

    As machine learning algorithms become increasingly sophisticated to exploit subtle features of the data, they often become more dependent on simulations. Here, this paper presents a new approach called weakly supervised classification in which class proportions are the only input into the machine learning algorithm. Using one of the most challenging binary classification tasks in high energy physics - quark versus gluon tagging - we show that weakly supervised classification can match the performance of fully supervised algorithms. Furthermore, by design, the new algorithm is insensitive to any mis-modeling of discriminating features in the data by the simulation. Weakly supervised classification is a general procedure that can be applied to a wide variety of learning problems to boost performance and robustness when detailed simulations are not reliable or not available.
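
    The central idea, training on batch-level class proportions instead of per-event labels, can be shown with a toy numpy version of learning from label proportions. The squared proportion-matching loss and the two-batch setup below are our illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_from_proportions(batches, proportions, dim, lr=0.5, epochs=500):
    """Fit a linear classifier from per-batch class fractions only
    (toy weakly supervised training)."""
    w = np.zeros(dim)
    for _ in range(epochs):
        for X, p in zip(batches, proportions):
            s = sigmoid(X @ w)
            resid = s.mean() - p                    # proportion mismatch
            w -= lr * 2 * resid * (s * (1 - s)) @ X / len(X)
    return w

rng = np.random.default_rng(1)
# Two mixed samples with known signal fractions (e.g. quark/gluon enriched).
X1 = np.vstack([rng.normal(+1, 1, (80, 2)), rng.normal(-1, 1, (20, 2))])
X2 = np.vstack([rng.normal(+1, 1, (30, 2)), rng.normal(-1, 1, (70, 2))])
w = fit_from_proportions([X1, X2], [0.8, 0.3], dim=2)
print((sigmoid(X1 @ w) > 0.5).mean())  # should land near the known 0.8
```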

  10. Weakly supervised classification in high energy physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dery, Lucio Mwinmaarong; Nachman, Benjamin; Rubbo, Francesco

    As machine learning algorithms become increasingly sophisticated to exploit subtle features of the data, they often become more dependent on simulations. Here, this paper presents a new approach called weakly supervised classification in which class proportions are the only input into the machine learning algorithm. Using one of the most challenging binary classification tasks in high energy physics - quark versus gluon tagging - we show that weakly supervised classification can match the performance of fully supervised algorithms. Furthermore, by design, the new algorithm is insensitive to any mis-modeling of discriminating features in the data by the simulation. Weakly supervised classification is a general procedure that can be applied to a wide variety of learning problems to boost performance and robustness when detailed simulations are not reliable or not available.

  11. Data Mining Research with the LSST

    NASA Astrophysics Data System (ADS)

    Borne, Kirk D.; Strauss, M. A.; Tyson, J. A.

    2007-12-01

    The LSST catalog database will exceed 10 petabytes, comprising several hundred attributes for 5 billion galaxies, 10 billion stars, and over 1 billion variable sources (optical variables, transients, or moving objects), extracted from over 20,000 square degrees of deep imaging in 5 passbands with thorough time domain coverage: 1000 visits over the 10-year LSST survey lifetime. The opportunities are enormous for novel scientific discoveries within this rich time-domain ultra-deep multi-band survey database. Data Mining, Machine Learning, and Knowledge Discovery research opportunities with the LSST are now under study, with the potential for new collaborations to contribute to these investigations. We will describe features of the LSST science database that are amenable to scientific data mining, object classification, outlier identification, anomaly detection, image quality assurance, and survey science validation. We also give some illustrative examples of current scientific data mining research in astronomy, and point out where new research is needed. In particular, the data mining research community will need to address several issues in the coming years as we prepare for the LSST data deluge. The data mining research agenda includes: scalability (at petabyte scales) of existing machine learning and data mining algorithms; development of grid-enabled parallel data mining algorithms; designing a robust system for brokering classifications from the LSST event pipeline (which may produce 10,000 or more event alerts per night); multi-resolution methods for exploration of petascale databases; visual data mining algorithms for visual exploration of the data; indexing of multi-attribute multi-dimensional astronomical databases (beyond RA-Dec spatial indexing) for rapid querying of petabyte databases; and more. Finally, we will identify opportunities for synergistic collaboration between the data mining research group and the LSST Data Management and Science Collaboration teams.

  12. The Influence of temporal sampling regime on the WFD classification of catchments within the Eden Demonstration Test Catchment Project

    NASA Astrophysics Data System (ADS)

    Jonczyk, Jennine; Haygarth, Phil; Quinn, Paul; Reaney, Sim

    2014-05-01

    A high temporal resolution data set from the Eden Demonstration Test Catchment (DTC) project is used to investigate the processes causing pollution and the influence of temporal sampling regime on the WFD classification of three catchments. These data highlight that WFD standards may not be fit for purpose. The Eden DTC project is part of a UK government-funded project designed to provide robust evidence regarding how diffuse pollution can be cost-effectively controlled to improve and maintain water quality in rural river catchments. The impact of multiple water quality parameters on ecosystems and sustainable food production is being studied at the catchment scale. Three focus catchments, approximately 10 km2 each, have been selected to represent the different farming practices and geophysical characteristics across the Eden catchment, Northern England. A field experimental programme has been designed to monitor the dynamics of agricultural diffuse pollution at multiple scales using state-of-the-art sensors providing continuous real-time data. The data set, which includes Total Phosphorus, Total Reactive Phosphorus, Nitrate, Ammonium, pH, Conductivity, Turbidity and Chlorophyll a, reveals the frequency and duration of nutrient concentration target exceedance, which arises from the prevalence of storm events of increasing magnitude. This data set is sub-sampled at different time intervals to explore how different sampling regimes affect our understanding of nutrient dynamics and the ramifications of the different regimes for WFD chemical status. This presentation seeks to identify an optimum temporal resolution of data for effective catchment management and to question the usefulness of the WFD status metric for determining the health of a system. Criteria based on high-frequency, short-duration events need to be accounted for.

  13. Adaptive neuro-fuzzy inference systems for semi-automatic discrimination between seismic events: a study in Tehran region

    NASA Astrophysics Data System (ADS)

    Vasheghani Farahani, Jamileh; Zare, Mehdi; Lucas, Caro

    2012-04-01

    This article presents an adaptive neuro-fuzzy inference system (ANFIS) for classification of low magnitude seismic events reported in Iran by the network of the Tehran Disaster Mitigation and Management Organization (TDMMO). ANFIS classifiers were used to detect seismic events using six inputs that defined the events. Neuro-fuzzy coding was applied using the six extracted features as ANFIS inputs. Two types of events were defined: weak earthquakes and mining blasts. The data comprised 748 events (6289 signals) ranging from magnitude 1.1 to 4.6 recorded at 13 seismic stations between 2004 and 2009; approximately 223 of these events were earthquakes with M ≤ 2.2. Data sets from the south, east, and southeast of the city of Tehran were used to evaluate the best short-period seismic discriminants, and features such as origin time of the event, source-to-station distance, latitude and longitude of the epicenter, magnitude, and spectral analysis (fc of the Pg wave) were used as inputs, increasing the rate of correct classification and decreasing the confusion rate between weak earthquakes and quarry blasts. The performance of the ANFIS model was evaluated for training and classification accuracy. The results confirmed that the proposed ANFIS model has good potential for discriminating seismic events.

  14. Extreme Sparse Multinomial Logistic Regression: A Fast and Robust Framework for Hyperspectral Image Classification

    NASA Astrophysics Data System (ADS)

    Cao, Faxian; Yang, Zhijing; Ren, Jinchang; Ling, Wing-Kuen; Zhao, Huimin; Marshall, Stephen

    2017-12-01

    Although sparse multinomial logistic regression (SMLR) has provided a useful tool for sparse classification, it suffers from inefficacy in dealing with high-dimensional features and from manually set initial regressor values. This has significantly constrained its applications for hyperspectral image (HSI) classification. In order to tackle these two drawbacks, an extreme sparse multinomial logistic regression (ESMLR) is proposed for effective classification of HSI. First, the HSI dataset is projected to a new feature space with randomly generated weights and biases. Second, an optimization model is established by the Lagrange multiplier method and the dual principle to automatically determine a good initial regressor for SMLR by minimizing the training error and the regressor value. Furthermore, extended multi-attribute profiles (EMAPs) are utilized for extracting both spectral and spatial features. A combinational linear multiple features learning (MFL) method is proposed to further enhance the features extracted by ESMLR and EMAPs. Finally, logistic regression via variable splitting and augmented Lagrangian (LORSAL) is adopted in the proposed framework to reduce the computational time. Experiments conducted on two well-known HSI datasets, namely the Indian Pines dataset and the Pavia University dataset, demonstrate the fast and robust performance of the proposed ESMLR framework.

  15. Robust multitask learning with three-dimensional empirical mode decomposition-based features for hyperspectral classification

    NASA Astrophysics Data System (ADS)

    He, Zhi; Liu, Lin

    2016-11-01

    Empirical mode decomposition (EMD) and its variants have recently been applied for hyperspectral image (HSI) classification due to their ability to extract useful features from the original HSI. However, it remains a challenging task to effectively exploit the spectral-spatial information by the traditional vector or image-based methods. In this paper, a three-dimensional (3D) extension of EMD (3D-EMD) is proposed to naturally treat the HSI as a cube and decompose the HSI into varying oscillations (i.e. 3D intrinsic mode functions (3D-IMFs)). To achieve fast 3D-EMD implementation, 3D Delaunay triangulation (3D-DT) is utilized to determine the distances of extrema, while separable filters are adopted to generate the envelopes. Taking the extracted 3D-IMFs as features of different tasks, robust multitask learning (RMTL) is further proposed for HSI classification. In RMTL, pairs of low-rank and sparse structures are formulated by trace-norm and l1,2 -norm to capture task relatedness and specificity, respectively. Moreover, the optimization problems of RMTL can be efficiently solved by the inexact augmented Lagrangian method (IALM). Compared with several state-of-the-art feature extraction and classification methods, the experimental results conducted on three benchmark data sets demonstrate the superiority of the proposed methods.

  16. Robust Classification and Segmentation of Planar and Linear Features for Construction Site Progress Monitoring and Structural Dimension Compliance Control

    NASA Astrophysics Data System (ADS)

    Maalek, R.; Lichti, D. D.; Ruwanpura, J.

    2015-08-01

    The application of terrestrial laser scanners (TLSs) on construction sites for automating construction progress monitoring and controlling structural dimension compliance is growing markedly. However, current research in construction management relies on the planned building information model (BIM) to assign the accumulated point clouds to their corresponding structural elements, which may not be reliable in cases where the dimensions of the as-built structure differ from those of the planned model and/or the planned model is not available with sufficient detail. In addition, outliers exist in construction site datasets due to data artefacts caused by moving objects, occlusions and dust. In order to overcome the aforementioned limitations, a novel method for robust classification and segmentation of planar and linear features is proposed to reduce the effects of outliers present in the LiDAR data collected from construction sites. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a robust clustering method. A method is also proposed to robustly extract the points belonging to the flat-slab floors and/or ceilings without performing the aforementioned stages, in order to preserve computational efficiency. The applicability of the proposed method is investigated in two scenarios, namely a laboratory with 30 million points and an actual construction site with over 150 million points. The results obtained from the two experiments validate the suitability of the proposed method for robust segmentation of planar and linear features in contaminated datasets, such as those collected from construction sites.
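
    For intuition, the planar/linear distinction can be read off the eigenvalues of a local covariance matrix. The sketch below uses plain (non-robust) PCA and hypothetical tolerance thresholds as a stand-in for the paper's robust classification stage.

```python
import numpy as np

def classify_neighborhood(points, planar_tol=0.05, linear_tol=0.05):
    """Label a point neighbourhood as planar, linear or other from the
    eigenvalues of its covariance (simple non-robust stand-in)."""
    centered = points - points.mean(axis=0)
    # eigvalsh returns eigenvalues in ascending order
    l1, l2, l3 = np.linalg.eigvalsh(centered.T @ centered / len(points))
    if l2 / l3 < linear_tol:
        return "linear"   # two negligible spread directions
    if l1 / l3 < planar_tol:
        return "planar"   # one negligible spread direction
    return "other"

rng = np.random.default_rng(2)
slab = np.column_stack([rng.uniform(0, 1, 500), rng.uniform(0, 1, 500),
                        rng.normal(0, 0.01, 500)])  # a thin planar patch
print(classify_neighborhood(slab))  # expected: planar
```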

  17. Polsar Land Cover Classification Based on Hidden Polarimetric Features in Rotation Domain and Svm Classifier

    NASA Astrophysics Data System (ADS)

    Tao, C.-S.; Chen, S.-W.; Li, Y.-Z.; Xiao, S.-P.

    2017-09-01

    Land cover classification is an important application for polarimetric synthetic aperture radar (PolSAR) data utilization. Roll-invariant polarimetric features such as H / Ani / α / Span are commonly adopted in PolSAR land cover classification. However, the target orientation diversity effect makes PolSAR image understanding and interpretation difficult. Using only the roll-invariant polarimetric features may introduce ambiguity in the interpretation of targets' scattering mechanisms and limit the subsequent classification accuracy. To address this problem, this work first focuses on hidden polarimetric feature mining in the rotation domain along the radar line of sight, using the recently reported uniform polarimetric matrix rotation theory and the visualization and characterization tool of the polarimetric coherence pattern. The former rotates the acquired polarimetric matrix along the radar line of sight and fully describes the rotation characteristics of each entry of the matrix; sets of new polarimetric features are derived to describe the hidden scattering information of the target in the rotation domain. The latter extends the traditional polarimetric coherence at a given rotation angle to the whole rotation domain for complete interpretation, and a visualization and characterization tool is established to derive new polarimetric features for hidden information exploration. A classification scheme is then developed combining the selected new hidden polarimetric features in the rotation domain and the commonly used roll-invariant polarimetric features with a support vector machine (SVM) classifier. Comparison experiments based on AIRSAR and multi-temporal UAVSAR data demonstrate that, compared with the conventional classification scheme that uses only the roll-invariant polarimetric features, the proposed scheme achieves both higher classification accuracy and better robustness. For AIRSAR data, the overall classification accuracy with the proposed scheme is 94.91%, versus 93.70% with the conventional scheme. Moreover, for multi-temporal UAVSAR data, the averaged overall classification accuracy with the proposed scheme is up to 97.08%, much higher than the 87.79% of the conventional scheme, and the proposed scheme also achieves better robustness. These comparisons clearly demonstrate that mining and utilizing hidden polarimetric features and information in the rotation domain can yield added benefits for PolSAR land cover classification and provide a new vision for PolSAR image interpretation and application.

  18. Toward semantic-based retrieval of visual information: a model-based approach

    NASA Astrophysics Data System (ADS)

    Park, Youngchoon; Golshani, Forouzan; Panchanathan, Sethuraman

    2002-07-01

    This paper centers on the problem of automated visual content classification. To enable classification-based image or visual object retrieval, we propose a new image representation scheme called the visual context descriptor (VCD): a multidimensional vector in which each element represents the frequency of a unique visual property of an image or a region. The VCD utilizes predetermined quality dimensions (i.e., types of features and quantization levels) and semantic model templates mined a priori. Not only observed visual cues but also contextually relevant visual features are proportionally incorporated in the VCD. The contextual relevance of a visual cue to a semantic class is determined by correlation analysis of ground truth samples. Such co-occurrence analysis of visual cues requires transformation of a real-valued visual feature vector (e.g., color histogram, Gabor texture, etc.) into a discrete event (e.g., terms in text). Good-features-to-track, the rule of thirds, iterative k-means clustering and TSVQ are involved in transforming feature vectors into unified symbolic representations called visual terms. Similarity-based visual cue frequency estimation is also proposed and used to ensure the correctness of model learning and matching, since the sparseness of sample data otherwise makes frequency estimation of visual cues unstable. The proposed method naturally allows integration of heterogeneous visual, temporal or spatial cues in a single classification or matching framework, and can be easily integrated into a semantic knowledge base such as a thesaurus or ontology. Robust semantic visual model template creation and object-based image retrieval are demonstrated based on the proposed content description scheme.

  19. Classifiers utilized to enhance acoustic based sensors to identify round types of artillery/mortar

    NASA Astrophysics Data System (ADS)

    Grasing, David; Desai, Sachi; Morcos, Amir

    2008-04-01

    Feature extraction methods based on statistical analysis of the change in event pressure levels over a period, and of the level of ambient pressure excitation, facilitate the development of a robust classification algorithm. The features reliably discriminate mortar and artillery variants via the acoustic signals produced during launch events. Acoustic sensors capture the sound waveform generated by the blast, and analysis of this waveform allows mortar and artillery variants to be identified by round type. Distinct characteristics arise among the different mortar/artillery variants because varying HE mortar payloads and related charges produce launch events of varying size. The waveform holds various harmonic properties distinct to a given mortar/artillery variant that, through advanced signal processing and data mining techniques, can be employed to classify a given type. Skewness and other statistical processing techniques are used to extract the predominant components from the acoustic signatures at ranges exceeding 3000 m. Exploiting these techniques helps develop a feature set that is highly independent of range, providing discrimination based on acoustic elements of the blast wave. Highly reliable discrimination is achieved with a feedforward neural network classifier trained on a feature space derived from the distribution of statistical coefficients, the frequency spectrum, and higher-frequency details found within different energy bands. The processes described herein extend current technologies that employ acoustic sensor systems to provide such situational awareness.
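
    A rough sketch of this pipeline: range-robust statistical descriptors computed from each launch transient, then a small feedforward network. The specific descriptors, the sampling rate and the round-type labels are our illustrative assumptions, not the paper's exact feature set.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.neural_network import MLPClassifier

def blast_features(signal, fs=4096):
    """Level- and range-independent descriptors of a launch transient."""
    spec = np.abs(np.fft.rfft(signal))
    spec /= spec.sum()                                # remove absolute level
    return [skew(signal), kurtosis(signal),
            spec.argmax() * fs / len(signal),         # dominant frequency (Hz)
            (spec ** 2).sum()]                        # spectral concentration

rng = np.random.default_rng(3)
X = [blast_features(rng.standard_normal(4096)) for _ in range(40)]  # placeholders
y = ["type_A", "type_B"] * 20                         # hypothetical round types
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)
```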

  20. International Classification of Impairments, Disabilities, and Handicaps: A Manual of Classification Relating to the Consequences of Disease.

    ERIC Educational Resources Information Center

    World Health Organization, Geneva (Switzerland).

    The manual contains three classifications (impairments, disabilities, and handicaps), each relating to a different plane of experience consequent upon disease. Section 1 attempts to clarify the nature of health related experiences by addressing response to acute and chronic illness; the unifying framework for classification (principle events in the…

  1. A multi-temporal fusion-based approach for land cover mapping in support of nuclear incident response

    NASA Astrophysics Data System (ADS)

    Sah, Shagan

    An increasingly important application of remote sensing is to provide decision support during emergency response and disaster management efforts. Land cover maps constitute one such useful application product during disaster events; if generated rapidly after any disaster, such map products can contribute to the efficacy of the response effort. In light of recent nuclear incidents, e.g., after the earthquake/tsunami in Japan (2011), our research focuses on constructing rapid and accurate land cover maps of the impacted area in case of an accidental nuclear release. The methodology involves integration of results from two different approaches, namely coarse spatial resolution multi-temporal and fine spatial resolution imagery, to increase classification accuracy. Although advanced methods have been developed for classification using high spatial or temporal resolution imagery, only a limited amount of work has been done on fusion of these two remote sensing approaches. The presented methodology thus involves integration of classification results from two different remote sensing modalities in order to improve classification accuracy. The data used included RapidEye and MODIS scenes over the Nine Mile Point Nuclear Power Station in Oswego (New York, USA). The first step in the process was the construction of land cover maps from freely available, high temporal resolution, low spatial resolution MODIS imagery using a time-series approach. We used the variability in the temporal signatures among different land cover classes for classification. The time series-specific features were defined by various physical properties of a pixel, such as variation in vegetation cover and water content over time. The pixels were classified into four land cover classes - forest, urban, water, and vegetation - using Euclidean and Mahalanobis distance metrics. On the other hand, a high spatial resolution commercial satellite, such as RapidEye, can be tasked to capture images over the affected area in the case of a nuclear event. This imagery served as a second source of data to augment results from the time series approach. The classifications from the two approaches were integrated using an a posteriori probability-based fusion approach. This was done by establishing a relationship between the classes, obtained after classification of the two data sources. Despite the coarse spatial resolution of MODIS pixels, acceptable accuracies were obtained using time series features. The overall accuracies using the fusion-based approach were in the neighborhood of 80%, when compared with GIS data sets from New York State. This fusion thus contributed to classification accuracy refinement, with a few additional advantages, such as correction for cloud cover and providing for an approach that is robust against point-in-time seasonal anomalies, due to the inclusion of multi-temporal data. We concluded that this approach is capable of generating land cover maps of acceptable accuracy and rapid turnaround, which in turn can yield reliable estimates of crop acreage of a region. The final algorithm is part of an automated software tool, which can be used by emergency response personnel to generate a nuclear ingestion pathway information product within a few hours of data collection.
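
    The distance-based classification step lends itself to a compact sketch: per-class means and covariances of the temporal-signature features, with each pixel assigned to the class of minimum Mahalanobis distance. The feature values and class names below are placeholders.

```python
import numpy as np

def fit_class_models(X, y):
    """Per-class mean and (pseudo-)inverse covariance for Mahalanobis rules."""
    return {c: (X[y == c].mean(axis=0),
                np.linalg.pinv(np.cov(X[y == c], rowvar=False)))
            for c in np.unique(y)}

def classify(x, models):
    """Assign the class whose temporal signature is nearest to x."""
    d2 = {c: (x - mu) @ icov @ (x - mu) for c, (mu, icov) in models.items()}
    return min(d2, key=d2.get)

rng = np.random.default_rng(4)
# Rows: per-pixel temporal features (e.g. seasonal variability measures).
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(3, 1, (50, 5))])
y = np.array(["water"] * 50 + ["forest"] * 50)
models = fit_class_models(X, y)
print(classify(X[0], models), classify(X[-1], models))  # expected: water forest
```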

  2. The association between previous and future severe exacerbations of chronic obstructive pulmonary disease: Updating the literature using robust statistical methodology.

    PubMed

    Sadatsafavi, Mohsen; Xie, Hui; Etminan, Mahyar; Johnson, Kate; FitzGerald, J Mark

    2018-01-01

    There is minimal evidence on the extent to which the occurrence of a severe acute exacerbation of COPD that results in hospitalization affects the subsequent disease course. Previous studies on this topic did not generate causally-interpretable estimates. Our aim was to use corrected methodology to update previously reported estimates of the associations between previous and future exacerbations in these patients. Using administrative health data in British Columbia, Canada (1997-2012), we constructed a cohort of patients with at least one severe exacerbation, defined as an episode of inpatient care with the main diagnosis of COPD based on international classification of diseases (ICD) codes. We applied a random-effects 'joint frailty' survival model that is particularly developed for the analysis of recurrent events in the presence of competing risk of death and heterogeneity among individuals in their rate of events. Previous severe exacerbations entered the model as dummy-coded time-dependent covariates, and the model was adjusted for several observable patient and disease characteristics. 35,994 individuals (mean age at baseline 73.7, 49.8% female, average follow-up 3.21 years) contributed 34,271 severe exacerbations during follow-up. The first event was associated with a hazard ratio (HR) of 1.75 (95%CI 1.69-1.82) for the risk of future severe exacerbations. This risk decreased to HR = 1.36 (95%CI 1.30-1.42) for the second event and to 1.18 (95%CI 1.12-1.25) for the third event. The first two severe exacerbations that occurred during follow-up were also significantly associated with increased risk of all-cause mortality. There was substantial heterogeneity in the individual-specific rate of severe exacerbations. Even after adjusting for observable characteristics, individuals in the 97.5th percentile of exacerbation rate had 5.6 times higher rate of severe exacerbations than those in the 2.5th percentile. Using robust statistical methodology that controlled for heterogeneity in exacerbation rates among individuals, we demonstrated potential causal associations among past and future severe exacerbations, albeit the magnitude of association was noticeably lower than previously reported. The prevention of severe exacerbations has the potential to modify the disease trajectory.

  3. Pulse shape discrimination and classification methods for continuous depth of interaction encoding PET detectors.

    PubMed

    Roncali, Emilie; Phipps, Jennifer E; Marcu, Laura; Cherry, Simon R

    2012-10-21

    In previous work we demonstrated the potential of positron emission tomography (PET) detectors with depth-of-interaction (DOI) encoding capability based on phosphor-coated crystals. A DOI resolution of 8 mm full-width at half-maximum was obtained for 20 mm long scintillator crystals using a delayed charge integration linear regression method (DCI-LR). Phosphor-coated crystals modify the pulse shape to allow continuous DOI information determination, but the relationship between pulse shape and DOI is complex. We are therefore interested in developing a sensitive and robust method to estimate the DOI. Here, linear discriminant analysis (LDA) was implemented to classify the events based on information extracted from the pulse shape. Pulses were acquired with 2 × 2 × 20 mm³ phosphor-coated crystals at five irradiation depths and characterized by their DCI values or Laguerre coefficients. These coefficients were obtained by expanding the pulses on a Laguerre basis set and constituted a unique signature for each pulse. The DOI of individual events was predicted using LDA based on Laguerre coefficients (Laguerre-LDA) or DCI values (DCI-LDA) as discriminant features. Predicted DOIs were compared to true irradiation depths. Laguerre-LDA showed higher sensitivity and accuracy than DCI-LDA and DCI-LR and was also more robust to predict the DOI of pulses with higher statistical noise due to low light levels (interaction depths further from the photodetector face). This indicates that Laguerre-LDA may be more suitable to DOI estimation in smaller crystals where lower collected light levels are expected. This novel approach is promising for calculating DOI using pulse shape discrimination in single-ended readout depth-encoding PET detectors.
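
    A rough sketch of the Laguerre-LDA idea: expand each pulse on a Laguerre basis and classify the coefficient vectors with linear discriminant analysis. The continuous Laguerre functions, the toy exponential pulses and the depth labels below are illustrative assumptions, not the authors' exact basis or data.

```python
import numpy as np
from scipy.special import eval_laguerre
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def laguerre_coeffs(pulse, n_coeffs=8, scale=10.0):
    """Least-squares projection of a pulse onto Laguerre functions."""
    t = np.arange(len(pulse)) / scale
    basis = np.column_stack([np.exp(-t / 2) * eval_laguerre(n, t)
                             for n in range(n_coeffs)])
    coef, *_ = np.linalg.lstsq(basis, pulse, rcond=None)
    return coef

rng = np.random.default_rng(5)
def toy_pulse(tau):        # decay constant standing in for interaction depth
    t = np.arange(200)
    return np.exp(-t / tau) + 0.02 * rng.standard_normal(200)

X = [laguerre_coeffs(toy_pulse(tau)) for tau in (20, 60) * 15]
y = ["deep", "shallow"] * 15
LinearDiscriminantAnalysis().fit(X, y)
```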

  4. Pulse Shape Discrimination and Classification Methods for Continuous Depth of Interaction Encoding PET Detectors

    PubMed Central

    Roncali, Emilie; Phipps, Jennifer E.; Marcu, Laura; Cherry, Simon R.

    2012-01-01

    In previous work we demonstrated the potential of positron emission tomography (PET) detectors with depth-of-interaction (DOI) encoding capability based on phosphor-coated crystals. A DOI resolution of 8 mm full-width at half-maximum was obtained for 20 mm long scintillator crystals using a delayed charge integration linear regression method (DCI-LR). Phosphor-coated crystals modify the pulse shape to allow continuous DOI information determination, but the relationship between pulse shape and DOI is complex. We are therefore interested in developing a sensitive and robust method to estimate the DOI. Here, linear discriminant analysis (LDA) was implemented to classify the events based on information extracted from the pulse shape. Pulses were acquired with 2 × 2 × 20 mm3 phosphor-coated crystals at five irradiation depths and characterized by their DCI values or Laguerre coefficients. These coefficients were obtained by expanding the pulses on a Laguerre basis set and constituted a unique signature for each pulse. The DOI of individual events was predicted using LDA based on Laguerre coefficients (Laguerre-LDA) or DCI values (DCI-LDA) as discriminant features. Predicted DOIs were compared to true irradiation depths. Laguerre-LDA showed higher sensitivity and accuracy than DCI-LDA and DCI-LR and was also more robust to predict the DOI of pulses with higher statistical noise due to low light levels (interaction depths further from the photodetector face). This indicates that Laguerre-LDA may be more suitable to DOI estimation in smaller crystals where lower collected light levels are expected. This novel approach is promising for calculating DOI using pulse shape discrimination in single-ended readout depth-encoding PET detectors. PMID:23010690

  5. Automated Classification of Power Signals

    DTIC Science & Technology

    2008-06-01

    determine when a transient occurs. The identification of this signal can then be determined by an expert classifier and a series of these...the manual identification and classification of system events. Once events were located, the characteristics were examined to determine if system... identification code, which varies depending on the system classifier that is specified. Figure 3-7 provides an example of a Linux directory containing

  6. Moisture source classification of heavy precipitation events in Switzerland in the last 130 years (1871-2011)

    NASA Astrophysics Data System (ADS)

    Aemisegger, Franziska; Piaget, Nicolas

    2017-04-01

    A new weather-system-oriented classification framework for extreme precipitation events leading to large-scale floods in Switzerland is presented on this poster. Thirty-six high-impact floods in the last 130 years are assigned to three representative categories of atmospheric moisture origin and transport patterns. The methodology underlying this moisture source classification combines information on the airmass history in the twenty days preceding the precipitation event with humidity variations along the large-scale atmospheric transport systems in a Lagrangian approach. The classification scheme is defined using the 33-year ERA-Interim reanalysis dataset (1979-2011) and is then applied to the Twentieth Century Reanalysis (1871-2011) extreme precipitation events as well as the 36 selected floods. The three defined categories are characterised by different dominant moisture uptake regions, including the North Atlantic, the Mediterranean and continental Europe. Furthermore, distinct anomalies in the large-scale atmospheric flow are associated with the different categories. The temporal variations in the relative importance of the three categories over the last 130 years provide new insights into the impact of changing climate conditions on the dynamical mechanisms leading to heavy precipitation in Switzerland.

  7. A Robust Deep Model for Improved Classification of AD/MCI Patients

    PubMed Central

    Li, Feng; Tran, Loc; Thung, Kim-Han; Ji, Shuiwang; Shen, Dinggang; Li, Jiang

    2015-01-01

    Accurate classification of Alzheimer’s Disease (AD) and its prodromal stage, Mild Cognitive Impairment (MCI), plays a critical role in possibly preventing progression of memory impairment and improving quality of life for AD patients. Among many research tasks, it is of particular interest to identify noninvasive imaging biomarkers for AD diagnosis. In this paper, we present a robust deep learning system to identify different progression stages of AD patients based on MRI and PET scans. We utilized the dropout technique to improve classical deep learning by preventing its weight co-adaptation, which is a typical cause of over-fitting in deep learning. In addition, we incorporated stability selection, an adaptive learning factor, and a multi-task learning strategy into the deep learning framework. We applied the proposed method to the ADNI data set and conducted experiments for AD and MCI conversion diagnosis. Experimental results showed that the dropout technique is very effective in AD diagnosis, improving the classification accuracies by 5.9% on average as compared to the classical deep learning methods. PMID:25955998
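
    A minimal PyTorch sketch of the dropout regularization the paper builds on; the layer sizes, feature count and class labels are hypothetical, and the full system's stability selection, adaptive learning factor and multi-task learning are not shown.

```python
import torch
import torch.nn as nn

class DropoutNet(nn.Module):
    """Small classifier with dropout after each hidden layer."""
    def __init__(self, n_features, n_classes, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(128, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

model = DropoutNet(n_features=93, n_classes=3)  # e.g. AD / MCI / normal
x = torch.randn(8, 93)                          # a batch of imaging features
model.train(); print(model(x).shape)            # dropout active while training
model.eval();  print(model(x).shape)            # dropout disabled at test time
```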

  8. Benign-malignant mass classification in mammogram using edge weighted local texture features

    NASA Astrophysics Data System (ADS)

    Rabidas, Rinku; Midya, Abhishek; Sadhu, Anup; Chakraborty, Jayasree

    2016-03-01

    This paper introduces the novel Discriminative Robust Local Binary Pattern (DRLBP) and Discriminative Robust Local Ternary Pattern (DRLTP) for the classification of mammographic masses as benign or malignant. Masses are a common but challenging sign of breast cancer in mammography, and their diagnosis is a difficult task. Since DRLBP and DRLTP overcome the drawbacks of the Local Binary Pattern (LBP) and Local Ternary Pattern (LTP) by discriminating a brighter object against a dark background and vice versa, in addition to preserving edge information along with texture information, several edge-preserving texture features are extracted from DRLBP and DRLTP in this study. Finally, a Fisher linear discriminant analysis method is applied to the discriminating features, selected by a stepwise logistic regression method, for the classification of benign and malignant masses. The performance of the DRLBP and DRLTP features is evaluated using a ten-fold cross-validation technique with 58 masses from the mini-MIAS database, and the best result is observed with DRLBP, having an area under the receiver operating characteristic curve of 0.982.
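
    The texture-histogram-plus-discriminant pipeline can be sketched with the plain uniform LBP from scikit-image standing in for DRLBP/DRLTP; the patch size, histogram settings and placeholder ROIs are illustrative.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def lbp_histogram(patch, P=8, R=1.0):
    """Uniform-LBP texture histogram of a mass ROI (stand-in for DRLBP)."""
    codes = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(6)
patches = [(rng.random((64, 64)) * 255).astype(np.uint8) for _ in range(20)]
labels = ["benign", "malignant"] * 10            # placeholder ground truth
X = [lbp_histogram(p) for p in patches]
clf = LinearDiscriminantAnalysis().fit(X, labels)
```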

  9. Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation.

    PubMed

    Sun, Rui; Zhang, Guanghai; Yan, Xiaoxing; Gao, Jun

    2016-08-16

    Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex backgrounds and occlusion, pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures, and a max pooling operation is used to enhance invariance to varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discrimination information embedded in the hierarchical local features, with a Gaussian weight function as the measure to effectively handle occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods.

  10. Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation

    PubMed Central

    Sun, Rui; Zhang, Guanghai; Yan, Xiaoxing; Gao, Jun

    2016-01-01

    Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex backgrounds and occlusion, pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures, and a max pooling operation is used to enhance invariance to varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discrimination information embedded in the hierarchical local features, with a Gaussian weight function as the measure to effectively handle occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods. PMID:27537888

  11. Certified Normal: Alzheimer’s Disease Biomarkers and Normative Estimates of Cognitive Functioning

    PubMed Central

    Hassenstab, Jason; Chasse, Rachel; Grabow, Perri; Benzinger, Tammie L.S.; Fagan, Anne M.; Xiong, Chengjie; Jasielec, Mateusz; Grant, Elizabeth; Morris, John C.

    2016-01-01

    Normative samples drawn from older populations may unintentionally include individuals with preclinical Alzheimer’s disease (AD) pathology, resulting in reduced means, increased variability, and overestimation of age-effects on cognitive performance. 264 cognitively normal (CDR=0) older adults were classified as biomarker-negative (“Robust Normal,” n=177) or biomarker-positive (“Preclinical Alzheimer’s Disease” (PCAD), n=87) based on amyloid imaging, cerebrospinal fluid biomarkers, and hippocampal volumes. PCAD participants performed worse than Robust Normals on nearly all cognitive measures. Removing PCAD participants from the normative sample yielded higher means and less variability on episodic memory, visuospatial ability, and executive functioning measures. These results were more pronounced in participants aged 75 and older. Notably, removing PCAD participants from the sample significantly reduced age effects across all cognitive domains. Applying norms from the Robust Normal sample to a separate cohort did not improve CDR classification when using standard deviation cutoff scores. Overall, removing individuals with biomarker evidence of preclinical AD improves normative sample quality and substantially reduces age-effects on cognitive performance, but provides no substantive benefit for diagnostic classifications. PMID:27255812

  12. Limb Position Tolerant Pattern Recognition for Myoelectric Prosthesis Control with Adaptive Sparse Representations From Extreme Learning.

    PubMed

    Betthauser, Joseph L; Hunt, Christopher L; Osborn, Luke E; Masters, Matthew R; Levay, Gyorgy; Kaliki, Rahul R; Thakor, Nitish V

    2018-04-01

    Myoelectric signals can be used to predict the intended movements of an amputee for prosthesis control. However, untrained effects like limb position changes influence myoelectric signal characteristics, hindering the ability of pattern recognition algorithms to discriminate among motion classes. Despite frequent and long training sessions, these deleterious conditional influences may result in poor performance and device abandonment. We present a robust sparsity-based adaptive classification method that is significantly less sensitive to signal deviations resulting from untrained conditions. We compare this approach in the offline and online contexts of untrained upper-limb positions for amputee and able-bodied subjects to demonstrate its robustness compared against other myoelectric classification methods. We report significant performance improvements in untrained limb positions across all subject groups. The robustness of our suggested approach helps to ensure better untrained condition performance from fewer training conditions. This method of prosthesis control has the potential to deliver real-world clinical benefits to amputees: better condition-tolerant performance, reduced training burden in terms of frequency and duration, and increased adoption of myoelectric prostheses.

  13. Less can be more: How to make operations more flexible and robust with fewer resources

    NASA Astrophysics Data System (ADS)

    Haksöz, Çağrı; Katsikopoulos, Konstantinos; Gigerenzer, Gerd

    2018-06-01

    We review empirical evidence from practice and general theoretical conditions, under which simple rules of thumb can help to make operations flexible and robust. An operation is flexible when it responds adaptively to adverse events such as natural disasters; an operation is robust when it is less affected by adverse events in the first place. We illustrate the relationship between flexibility and robustness in the context of supply chain risk. In addition to increasing flexibility and robustness, simple rules simultaneously reduce the need for resources such as time, money, information, and computation. We illustrate the simple-rules approach with an easy-to-use graphical aid for diagnosing and managing supply chain risk. More generally, we recommend a four-step process for determining the amount of resources that decision makers should invest in so as to increase flexibility and robustness.

  14. Ensemble methods with simple features for document zone classification

    NASA Astrophysics Data System (ADS)

    Obafemi-Ajayi, Tayo; Agam, Gady; Xie, Bingqing

    2012-01-01

    Document layout analysis is of fundamental importance for document image understanding and information retrieval. It requires the identification of blocks extracted from a document image via features extraction and block classification. In this paper, we focus on the classification of the extracted blocks into five classes: text (machine printed), handwriting, graphics, images, and noise. We propose a new set of features for efficient classifications of these blocks. We present a comparative evaluation of three ensemble based classification algorithms (boosting, bagging, and combined model trees) in addition to other known learning algorithms. Experimental results are demonstrated for a set of 36503 zones extracted from 416 document images which were randomly selected from the tobacco legacy document collection. The results obtained verify the robustness and effectiveness of the proposed set of features in comparison to the commonly used Ocropus recognition features. When used in conjunction with the Ocropus feature set, we further improve the performance of the block classification system to obtain a classification accuracy of 99.21%.

  15. Dimensional Representation and Gradient Boosting for Seismic Event Classification

    NASA Astrophysics Data System (ADS)

    Semmelmayer, F. C.; Kappedal, R. D.; Magana-Zook, S. A.

    2017-12-01

    In this research, we conducted experiments on representational structures for 5009 seismic signals, with the intent of finding a method to classify signals as either an explosion or an earthquake in an automated fashion. We also applied a gradient boosted classifier. While perfect classification was not attained (our best model reached approximately 88%), in many cases events can be filtered out as having a very high probability of being explosions or earthquakes, diminishing subject-matter experts' (SME) workload for first-stage analysis. It is our hope that these methods can be refined, further increasing the classification probability.
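
    A brief sketch of that workflow with scikit-learn's gradient boosting: fit, score by cross-validation, and use the predicted probabilities to route only ambiguous events to analysts. The feature matrix, labels and probability thresholds are placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
X = rng.standard_normal((500, 12))   # one row of waveform features per event
y = rng.integers(0, 2, 500)          # 1 = explosion, 0 = earthquake (placeholder)

clf = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05, max_depth=3)
print(cross_val_score(clf, X, y, cv=5).mean())   # held-out accuracy

clf.fit(X, y)
p_explosion = clf.predict_proba(X)[:, 1]
# Only events the model is unsure about go to the SME review queue.
review_queue = np.where((p_explosion > 0.1) & (p_explosion < 0.9))[0]
```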

  16. Multi-stage classification method oriented to aerial image based on low-rank recovery and multi-feature fusion sparse representation.

    PubMed

    Ma, Xu; Cheng, Yongmei; Hao, Shuai

    2016-12-10

    Automatic classification of terrain surfaces from an aerial image is essential for an autonomous unmanned aerial vehicle (UAV) landing at an unprepared site by using vision. Diverse terrain surfaces may show similar spectral properties due to the illumination and noise that easily cause poor classification performance. To address this issue, a multi-stage classification algorithm based on low-rank recovery and multi-feature fusion sparse representation is proposed. First, color moments and Gabor texture feature are extracted from training data and stacked as column vectors of a dictionary. Then we perform low-rank matrix recovery for the dictionary by using augmented Lagrange multipliers and construct a multi-stage terrain classifier. Experimental results on an aerial map database that we prepared verify the classification accuracy and robustness of the proposed method.

  17. Classification of the Regional Ionospheric Disturbance Based on Machine Learning Techniques

    NASA Astrophysics Data System (ADS)

    Terzi, Merve Begum; Arikan, Orhan; Karatay, Secil; Arikan, Feza; Gulyaeva, Tamara

    2016-08-01

    In this study, Total Electron Content (TEC) estimated from GPS receivers is used to model the regional and local variability that differs from global activity, along with solar and geomagnetic indices. For the automated classification of regional disturbances, a classification technique based on the Support Vector Machine (SVM), a robust machine learning technique that has found widespread use, is proposed. The performance of the developed classification technique is demonstrated for the midlatitude ionosphere over Anatolia using TEC estimates generated from GPS data provided by the Turkish National Permanent GPS Network (TNPGN-Active) for the solar maximum year of 2011. By applying the developed classification technique to Global Ionospheric Map (GIM) TEC data provided by the NASA Jet Propulsion Laboratory (JPL), it is shown that SVM can be a suitable learning method to detect anomalies in TEC variations.
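
    A compact sketch of the classification step, assuming standardized TEC-derived features and an RBF-kernel SVM; the feature layout, labels and hyperparameters are illustrative, not those tuned in the study.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(8)
# Rows: daily TEC deviations from a quiet-time reference plus solar and
# geomagnetic indices (placeholder values).
X = rng.standard_normal((365, 6))
y = rng.integers(0, 2, 365)          # 1 = disturbed day, 0 = quiet day

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X, y)
print(clf.predict(X[:5]))
```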

  18. Automatic Classification of Extensive Aftershock Sequences Using Empirical Matched Field Processing

    NASA Astrophysics Data System (ADS)

    Gibbons, Steven J.; Harris, David B.; Kværna, Tormod; Dodge, Douglas A.

    2013-04-01

    The aftershock sequences that follow large earthquakes create considerable problems for data centers attempting to produce comprehensive event bulletins in near real-time. The greatly increased number of events that require processing can overwhelm analyst resources and reduce the capacity for analyzing events of monitoring interest. The problem is exacerbated by a potentially reduced detection capability at key stations, due to the noise generated by the sequence, and by a deterioration in the quality of the fully automatic preliminary event bulletins caused by the difficulty of associating the vast numbers of closely spaced arrivals across the network. Considerable success has been enjoyed by waveform correlation methods for the automatic identification of groups of events belonging to the same geographical source region, facilitating the more time-efficient analysis of event ensembles as opposed to individual events. There are, however, formidable challenges associated with the automation of correlation procedures. The signal generated by a very large earthquake seldom correlates well enough with the signals generated by far smaller aftershocks for a correlation detector to produce statistically significant triggers at the correct times. Correlation between events within clusters of aftershocks is significantly better, although the issues of when and how to initiate new pattern detectors are still being investigated. Empirical Matched Field Processing (EMFP) is a highly promising method for detecting event waveforms suitable as templates for correlation detectors. EMFP is a quasi-frequency-domain technique that calibrates the spatial structure of a wavefront crossing a seismic array in a collection of narrow frequency bands. The amplitude and phase weights that result are applied in a frequency-domain beamforming operation that compensates for scattering and refraction effects not properly modeled by plane-wave beams. It has been demonstrated to outperform waveform correlation as a classifier of ripple-fired mining blasts, since the narrowband procedure is insensitive to differences in the source-time functions. For sequences in which the spectral content and time-histories of the signals from the main shock and aftershocks vary greatly, the spatial structure calibrated by EMFP is an invariant that permits reliable detection of events in the specific source region. Examples from the 2005 Kashmir and 2011 Van earthquakes demonstrate how EMFP templates from the main events detect arrivals from the aftershock sequences with high sensitivity and exceptionally low false alarm rates. Classical waveform correlation detectors are demonstrated to fail for these examples. Even arrivals with SNR below unity can produce significant EMFP triggers as the spatial pattern of the incoming wavefront is identified, leading to robust detections at a greater number of stations and potentially more reliable automatic bulletins. False EMFP triggers are readily screened by scanning a space of phase shifts relative to the imposed template. EMFP has the potential to produce a rapid and robust overview of the evolving aftershock sequence such that correlation and subspace detectors can be applied semi-autonomously, with well-chosen parameter specifications, to identify and classify clusters of very closely spaced aftershocks.
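
    The per-band spatial calibration at the heart of EMFP can be caricatured in a few lines of numpy: unit-norm cross-sensor Fourier coefficients from a calibration event act as narrowband matched-field weights, and the detection statistic averages the matched power over bands. This is a toy single-window illustration under simplifying assumptions, not the operational algorithm.

```python
import numpy as np

def emfp_template(calib):
    """Unit-norm array weights per frequency band from a calibration window
    (channels x samples). A toy stand-in for EMFP calibration."""
    spec = np.fft.rfft(calib * np.hanning(calib.shape[1]), axis=1)
    return spec / np.linalg.norm(spec, axis=0)

def emfp_statistic(window, template):
    """Matched-field power averaged over bands; 1.0 for a perfect match."""
    spec = np.fft.rfft(window * np.hanning(window.shape[1]), axis=1)
    spec = spec / np.linalg.norm(spec, axis=0)
    return float(np.mean(np.abs((template.conj() * spec).sum(axis=0)) ** 2))

rng = np.random.default_rng(9)
calib = rng.standard_normal((9, 256))          # 9 channels, one calibration window
tmpl = emfp_template(calib)
print(emfp_statistic(calib, tmpl))             # 1.0 on the calibration event
print(emfp_statistic(rng.standard_normal((9, 256)), tmpl))  # near 1/9 on noise
```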

  19. 32 CFR 2700.13 - Duration of original classification.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... STATUS NEGOTIATIONS SECURITY INFORMATION REGULATIONS Original Classification § 2700.13 Duration of original classification. (a) Information or material which is classified after December 1, 1978, shall be... declassification. This date or event shall be as early as national security permits and shall be no more than six...

  20. Fusion of multiple quadratic penalty function support vector machines (QPFSVM) for automated sea mine detection and classification

    NASA Astrophysics Data System (ADS)

    Dobeck, Gerald J.; Cobb, J. Tory

    2002-08-01

    The high-resolution sonar is one of the principal sensors used by the Navy to detect and classify sea mines in minehunting operations. For such sonar systems, substantial effort has been devoted to the development of automated detection and classification (D/C) algorithms. These have been spurred by several factors including (1) aids for operators to reduce work overload, (2) more optimal use of all available data, and (3) the introduction of unmanned minehunting systems. The environments where sea mines are typically laid (harbor areas, shipping lanes, and the littorals) give rise to many false alarms caused by natural, biologic, and man-made clutter. The objective of the automated D/C algorithms is to eliminate most of these false alarms while still maintaining a very high probability of mine detection and classification (PdPc). In recent years, the benefits of fusing the outputs of multiple D/C algorithms have been studied. We refer to this as Algorithm Fusion. The results have been remarkable, including reliable robustness to new environments. The Quadratic Penalty Function Support Vector Machine (QPFSVM) algorithm to aid in the automated detection and classification of sea mines is introduced in this paper. The QPFSVM algorithm is easy to train, simple to implement, and robust to feature space dimension. Outputs of successive SVM algorithms are cascaded in stages (fused) to improve the Probability of Classification (Pc) and reduce the number of false alarms. Even though our experience has been gained in the area of sea mine detection and classification, the principles described herein are general and can be applied to fusion of any D/C problem (e.g., automated medical diagnosis or automatic target recognition for ballistic missile defense).

  1. A robust probabilistic collaborative representation based classification for multimodal biometrics

    NASA Astrophysics Data System (ADS)

    Zhang, Jing; Liu, Huanxi; Ding, Derui; Xiao, Jianli

    2018-04-01

    Most traditional biometric recognition systems perform recognition with a single biometric indicator. These systems suffer from noisy data, interclass variations, unacceptable error rates, forged identities, and so on. Because of these inherent problems, attempts to enhance the performance of unimodal biometric systems based on a single feature have limited prospects, so multimodal biometrics is investigated to reduce some of these defects. This paper proposes a new multimodal biometric recognition approach that fuses faces and fingerprints. To obtain more recognizable features, the proposed method extracts block local binary pattern features for all modalities and then combines them into a single framework. For better classification, it employs the robust probabilistic collaborative representation based classifier to recognize individuals. Experimental results indicate that the proposed method improves recognition accuracy compared to unimodal biometrics.

  2. Classifying Adverse Events in the Dental Office.

    PubMed

    Kalenderian, Elsbeth; Obadan-Udoh, Enihomo; Maramaldi, Peter; Etolue, Jini; Yansane, Alfa; Stewart, Denice; White, Joel; Vaderhobli, Ram; Kent, Karla; Hebballi, Nutan B; Delattre, Veronique; Kahn, Maria; Tokede, Oluwabunmi; Ramoni, Rachel B; Walji, Muhammad F

    2017-06-30

    Dentists strive to provide safe and effective oral healthcare. However, some patients may encounter an adverse event (AE) defined as "unnecessary harm due to dental treatment." In this research, we propose and evaluate two systems for categorizing the type and severity of AEs encountered at the dental office. Several existing medical AE type and severity classification systems were reviewed and adapted for dentistry. Using data collected in previous work, two initial dental AE type and severity classification systems were developed. Eight independent reviewers performed focused chart reviews, and AEs identified were used to evaluate and modify these newly developed classifications. A total of 958 charts were independently reviewed. Among the reviewed charts, 118 prospective AEs were found and 101 (85.6%) were verified as AEs through a consensus process. At the end of the study, a final AE type classification comprising 12 categories, and an AE severity classification comprising 7 categories emerged. Pain and infection were the most common AE types representing 73% of the cases reviewed (56% and 17%, respectively) and 88% were found to cause temporary, moderate to severe harm to the patient. Adverse events found during the chart review process were successfully classified using the novel dental AE type and severity classifications. Understanding the type of AEs and their severity are important steps if we are to learn from and prevent patient harm in the dental office.

  3. Classification of Nortes in the Gulf of Mexico derived from wave energy maps

    NASA Astrophysics Data System (ADS)

    Appendini, C. M.; Hernández-Lasheras, J.

    2016-02-01

    Extreme wave climate in the Gulf of Mexico is determined by tropical cyclones and winds from the Central American Cold Surges, locally referred to as Nortes. While hurricanes can have catastrophic effects, extreme waves and storm surge from Nortes occur several times a year, and thus have greater impacts on human activities along the Mexican coast of the Gulf of Mexico. Despite the constant impacts from Nortes, there is no available classification that relates their characteristics (e.g. pressure gradients, wind speed) to the associated coastal impacts. This work presents a first approximation to characterize and classify Nortes, based on the assumption that the derived wave energy synthesizes the information (i.e. wind intensity, direction and duration) of individual Norte events as they pass through the Gulf of Mexico. First, we developed an index to identify Nortes based on the surface pressure difference between two locations. To validate the methodology we compared the events identified with other studies and available Norte logs. Afterwards, we detected Nortes from the 1986/1987, 2008/2009 and 2009/2010 seasons and used their corresponding wind fields to derive wave energy maps using a numerical wave model. We used the energy maps to classify the events into groups using manual (visual) and automatic classifications (principal component analysis and k-means). The manual classification identified 3 types of Nortes and the automatic classification identified 5, although 3 of them had a high degree of similarity. The principal component analysis indicated that all events have similar characteristics, as few components are necessary to explain almost all of the variance. The k-means classification indicated that 81% of the analyzed Nortes affect the southeastern Gulf of Mexico, while a smaller percentage affects the northern Gulf of Mexico and even fewer affect the western Caribbean.
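
    The automatic branch of such a classification reduces each event's wave energy map to a few principal components and clusters the resulting scores. A minimal sketch of that pipeline with scikit-learn; the input file name and the component/cluster counts are illustrative assumptions:

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.decomposition import PCA

        # energy_maps: (n_events, n_grid) array, one flattened wave-energy map per
        # detected Norte event (file name and shapes are hypothetical).
        energy_maps = np.load("norte_energy_maps.npy")

        # Few principal components explain almost all variance, as reported above.
        pca = PCA(n_components=3)
        scores = pca.fit_transform(energy_maps)
        print("cumulative explained variance:", pca.explained_variance_ratio_.cumsum())

        # Group the events into 5 classes, mirroring the automatic classification.
        labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(scores)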

  4. Classification of single-trial auditory events using dry-wireless EEG during real and motion simulated flight.

    PubMed

    Callan, Daniel E; Durantin, Gautier; Terzibas, Cengiz

    2015-01-01

    Application of neuro-augmentation technology based on dry-wireless EEG may be considerably beneficial for aviation and space operations because of the inherent dangers involved. In this study we evaluate the classification performance of perceptual events using a dry-wireless EEG system during motion-platform-based flight simulation and actual flight in an open cockpit biplane, to determine if the system can be used in the presence of considerable environmental and physiological artifacts. A passive task involving 200 random auditory presentations of a chirp sound was used for evaluation. The advantage of this auditory task is that it does not interfere with the perceptual motor processes involved with piloting the plane. Classification was based on identifying the presentation of a chirp sound vs. silent periods. The use of independent component analysis (ICA) and Kalman filtering to enhance classification performance, by separating brain activity related to the auditory event from non-task-related brain activity and artifacts, was also assessed. The results of permutation testing revealed that single-trial classification of the presence or absence of an auditory event was significantly above chance for all conditions on a novel test set. The best performance was achieved with both ICA and Kalman filtering relative to no processing: Platform Off (83.4% vs. 78.3%), Platform On (73.1% vs. 71.6%), Biplane Engine Off (81.1% vs. 77.4%), and Biplane Engine On (79.2% vs. 66.1%). This experiment demonstrates that dry-wireless EEG can be used in environments with considerable vibration, wind, acoustic noise, and physiological artifacts, and can achieve the good single-trial classification performance necessary for future successful application of neuro-augmentation technology based on brain-machine interfaces.

  5. Lauren classification and individualized chemotherapy in gastric cancer.

    PubMed

    Ma, Junli; Shen, Hong; Kapesa, Linda; Zeng, Shan

    2016-05-01

    Gastric cancer is one of the most common malignancies worldwide. During the last 50 years, the histological classification of gastric carcinoma has been largely based on Lauren's criteria, in which gastric cancer is classified into two major histological subtypes, namely intestinal type and diffuse type adenocarcinoma. This classification was introduced in 1965, and remains currently widely accepted and employed, since it constitutes a simple and robust classification approach. The two histological subtypes of gastric cancer proposed by the Lauren classification exhibit a number of distinct clinical and molecular characteristics, including histogenesis, cell differentiation, epidemiology, etiology, carcinogenesis, biological behaviors and prognosis. Gastric cancer exhibits varied sensitivity to chemotherapy drugs and significant heterogeneity; therefore, the disease may be a target for individualized therapy. The Lauren classification may provide the basis for individualized treatment for advanced gastric cancer, which is increasingly gaining attention in the scientific field. However, few studies have investigated individualized treatment that is guided by pathological classification. The aim of the current review is to analyze the two major histological subtypes of gastric cancer, as proposed by the Lauren classification, and to discuss the implications of this for personalized chemotherapy.

  6. Hyperspectral Image Classification via Multitask Joint Sparse Representation and Stepwise MRF Optimization.

    PubMed

    Yuan, Yuan; Lin, Jianzhe; Wang, Qi

    2016-12-01

    Hyperspectral image (HSI) classification is a crucial issue in remote sensing. Accurate classification benefits a large number of applications such as land use analysis and marine resource utilization. However, high data correlation makes reliable classification difficult, especially for HSI with abundant spectral information. Furthermore, traditional methods often fail to account for the spatial coherency of HSI, which also limits classification performance. To address these inherent obstacles, a novel spectral-spatial classification scheme is proposed in this paper. The proposed method rests on multitask joint sparse representation (MJSR) and a stepwise Markov random field framework, which constitute the two main contributions of this work. First, the MJSR not only reduces spectral redundancy, but also retains the necessary correlation in the spectral field during classification. Second, the stepwise optimization further exploits spatial correlation, which significantly enhances classification accuracy and robustness. As far as several universal quality evaluation indexes are concerned, the experimental results on the Indian Pines and Pavia University datasets demonstrate the superiority of our method compared with state-of-the-art competitors.

  7. Real-time classification of signals from three-component seismic sensors using neural nets

    NASA Astrophysics Data System (ADS)

    Bowman, B. C.; Dowla, F.

    1992-05-01

    Adaptive seismic data acquisition systems with capabilities for signal discrimination and event classification are important in treaty monitoring, proliferation monitoring, and earthquake early detection systems. Potential applications include monitoring underground chemical explosions, as well as other military, cultural, and natural activities where the characteristics of signals change rapidly and without warning. In these applications, the ability to detect and interpret events rapidly without falling behind the influx of data is critical. We developed a system for real-time data acquisition, analysis, learning, and classification of recorded events employing some of the latest technology in computer hardware, software, and artificial neural network methods. The system is able to train dynamically and updates its knowledge based on new data. The software is modular and hardware-independent; i.e., the front-end instrumentation is transparent to the analysis system. The software is designed to take advantage of the multiprocessing environment of the Unix operating system; the Unix System V shared memory and static RAM protocols for data access and the semaphore mechanism for interprocess communications were used. As the three-component sensor detects a seismic signal, it is displayed graphically on a color monitor using X11/Xlib graphics with interactive screening capabilities. For interesting events, the triaxial signal polarization is computed, a fast Fourier transform (FFT) algorithm is applied, and the normalized power spectrum is transmitted to a backpropagation neural network for event classification. The system is currently capable of handling three data channels with a sampling rate of 500 Hz, which covers the bandwidth of most seismic events. The system has been tested in a laboratory setting with artificial events generated in the vicinity of a three-component sensor.
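
    The spectral front end described above, normalized FFT power spectra feeding a backpropagation network, is easy to reproduce in outline. A minimal sketch; the array shapes, network size, and scikit-learn's MLPClassifier (which trains by backpropagation) are stand-ins for the paper's custom implementation:

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        def spectrum_features(traces, n_fft=512):
            """Normalized power-spectrum features (sketch).

            traces: (n_events, 3, n_samples) windowed three-component events
            sampled at 500 Hz; one feature vector is returned per event.
            """
            power = np.abs(np.fft.rfft(traces, n=n_fft, axis=-1)) ** 2
            feats = power.reshape(len(traces), -1)
            return feats / (feats.sum(axis=1, keepdims=True) + 1e-12)

        # X (windowed events) and y (analyst labels) are hypothetical:
        # clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
        # clf.fit(spectrum_features(X), y)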

  8. Automatic plankton image classification combining multiple view features via multiple kernel learning.

    PubMed

    Zheng, Haiyong; Wang, Ruchen; Yu, Zhibin; Wang, Nan; Gu, Zhaorui; Zheng, Bing

    2017-12-28

    Plankton, including phytoplankton and zooplankton, are the main source of food for organisms in the ocean and form the base of the marine food chain. As fundamental components of marine ecosystems, plankton are very sensitive to environmental changes, and the study of plankton abundance and distribution is crucial in order to understand environmental changes and protect marine ecosystems. This study was carried out to develop a widely applicable plankton classification system with high accuracy for the increasing number of various imaging devices. The literature shows that most plankton image classification systems have been limited to one specific imaging device and a relatively narrow taxonomic scope, and a truly practical system for automatic plankton classification does not yet exist; this study partly fills that gap. Informed by the literature and the development of the technology, we focused on the requirements of practical application and propose an automatic system for plankton image classification combining multiple-view features via multiple kernel learning (MKL). First, in order to describe the biomorphic characteristics of plankton more completely and comprehensively, we combined general features with robust features, in particular by adding features such as the Inner-Distance Shape Context for morphological representation. Second, we divided all the features into different types from multiple views and fed them to multiple classifiers, instead of only one, by optimally combining the kernel matrices computed from the different feature types via multiple kernel learning. Moreover, we applied a feature selection method to choose optimal feature subsets from redundant features to suit the different datasets from different imaging devices. We implemented our proposed classification system on three different datasets covering more than 20 categories from phytoplankton to zooplankton. The experimental results validated that our system outperforms state-of-the-art plankton image classification systems in terms of accuracy and robustness. This study demonstrates an automatic plankton image classification system combining multiple-view features using multiple kernel learning. The results indicate that multiple-view features combined by NLMKL using three kernel functions (linear, polynomial and Gaussian) can better describe and exploit feature information, and thus achieve higher classification accuracy.
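
    The essence of the kernel-combination step is a weighted sum of per-view kernel matrices fed to a kernel classifier. The sketch below is a simplified stand-in for MKL (the paper learns the combination weights; here they are taken as given), with hypothetical view slices and weights:

        import numpy as np
        from sklearn.metrics.pairwise import linear_kernel, polynomial_kernel, rbf_kernel
        from sklearn.svm import SVC

        KERNELS = [linear_kernel, polynomial_kernel, rbf_kernel]

        def combined_kernel(Xa, Xb, views, weights):
            """Weighted sum of per-view kernels; views is a list of
            (feature_column_slice, kernel_index) pairs and weights sum to one."""
            K = np.zeros((len(Xa), len(Xb)))
            for (cols, k_id), w in zip(views, weights):
                K += w * KERNELS[k_id](Xa[:, cols], Xb[:, cols])
            return K

        # Usage with a precomputed-kernel SVM (X_*, y_train, views, weights are
        # hypothetical):
        # clf = SVC(kernel="precomputed")
        # clf.fit(combined_kernel(X_train, X_train, views, weights), y_train)
        # y_pred = clf.predict(combined_kernel(X_test, X_train, views, weights))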

  9. Towards an Automated Classification of Transient Events in Synoptic Sky Surveys

    NASA Technical Reports Server (NTRS)

    Djorgovski, S. G.; Donalek, C.; Mahabal, A. A.; Moghaddam, B.; Turmon, M.; Graham, M. J.; Drake, A. J.; Sharma, N.; Chen, Y.

    2011-01-01

    We describe the development of a system for an automated, iterative, real-time classification of transient events discovered in synoptic sky surveys. The system under development incorporates a number of machine learning techniques, mostly using Bayesian approaches, due to the sparse nature, heterogeneity, and variable incompleteness of the available data. The classifications are improved iteratively as new measurements are obtained. One novel feature is the development of an automated follow-up recommendation engine that suggests those measurements that would be most advantageous in terms of resolving classification ambiguities and/or characterizing the astrophysically most interesting objects, given a set of available follow-up assets and their cost functions. This illustrates the symbiotic relationship of astronomy and applied computer science through the emerging discipline of AstroInformatics.

  10. Electrocardiogram ST-Segment Morphology Delineation Method Using Orthogonal Transformations

    PubMed Central

    2016-01-01

    Differentiation between ischaemic and non-ischaemic transient ST segment events of long term ambulatory electrocardiograms is a persisting weakness in present ischaemia detection systems. Traditional ST segment level measuring is not a sufficiently precise technique due to the single point of measurement and severe noise which is often present. We developed a robust noise resistant orthogonal-transformation based delineation method, which allows tracing the shape of transient ST segment morphology changes from the entire ST segment in terms of diagnostic and morphologic feature-vector time series, and also allows further analysis. For these purposes, we developed a new Legendre Polynomials based Transformation (LPT) of ST segment. Its basis functions have similar shapes to typical transient changes of ST segment morphology categories during myocardial ischaemia (level, slope and scooping), thus providing direct insight into the types of time domain morphology changes through the LPT feature-vector space. We also generated new Karhunen-Loève Transformation (KLT) ST segment basis functions using a robust covariance matrix constructed from the ST segment pattern vectors derived from the Long Term ST Database (LTST DB). As for the delineation of significant transient ischaemic and non-ischaemic ST segment episodes, we present a study on the representation of transient ST segment morphology categories, and an evaluation study on the classification power of the KLT- and LPT-based feature vectors to classify between ischaemic and non-ischaemic ST segment episodes of the LTST DB. Classification accuracy using the KLT and LPT feature vectors was 90% and 82%, respectively, when using the k-Nearest Neighbors (k = 3) classifier and 10-fold cross-validation. New sets of feature-vector time series for both transformations were derived for the records of the LTST DB which is freely available on the PhysioNet website and were contributed to the LTST DB. The KLT and LPT present new possibilities for human-expert diagnostics, and for automated ischaemia detection. PMID:26863140
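
    The LPT step amounts to projecting each ST segment onto low-order Legendre polynomials, whose leading coefficients track level, slope, and scooping-type shape changes. A minimal sketch with NumPy; the window handling and the polynomial order are illustrative assumptions:

        import numpy as np

        def lpt_features(st_segment, order=3):
            """Legendre-polynomial features of one ST-segment window (sketch)."""
            t = np.linspace(-1.0, 1.0, len(st_segment))   # map the window to [-1, 1]
            return np.polynomial.legendre.legfit(t, st_segment, deg=order)

        # The resulting feature vectors can feed a k-NN classifier (k = 3 in the
        # study) to separate ischaemic from non-ischaemic episodes.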

  11. Computational Sensing and in vitro Classification of GMOs and Biomolecular Events

    DTIC Science & Technology

    2008-12-01

    COMPUTATIONAL SENSING AND IN VITRO CLASSIFICATION OF GMOs AND BIOMOLECULAR EVENTS. Elebeoba May, Miler T. Lee, Patricia Dolan, Paul Crozier... modified organisms (GMOs) in the presence of non-lethal agents. Using an information and coding-theoretic framework we develop a de novo method for... high throughput screening, distinguishing genetically modified organisms (GMOs), molecular computing, differentiating biological markers

  12. FHR patterns that become significant in connection with ST waveform changes and metabolic acidosis at birth.

    PubMed

    Rosén, Karl G; Norén, Håkan; Carlsson, Ann

    2018-04-18

    Recent developments have produced new CTG classification systems, and the question is to what extent these may affect the model of FHR + ST interpretation. The two new systems (FIGO2015 and SSOG2017) classify FHR + ST events differently from the current CTG classification system used in the STAN interpretation algorithm (STAN2007). The objectives were to identify the predominant FHR patterns in connection with ST events in cases of cord artery metabolic acidosis missed by the different CTG classification systems, to indicate to what extent STAN clinical guidelines could be modified to enhance sensitivity, and to provide a pathophysiological rationale. Forty-four cases with umbilical cord artery metabolic acidosis were retrieved from a European multicenter database. Significant FHR + ST events were evaluated post hoc in consensus by an expert panel. Eighteen cases were not identified as in need of intervention and were regarded as negative in the sensitivity analysis. In 12 cases, ST changes occurred but the CTG was regarded as reassuring. Visual analysis of the FHR + ST tracings revealed specific FHR patterns. Conclusion: these findings indicate that FHR + ST analysis may be undertaken regardless of CTG classification system, provided there is a more physiologically oriented approach to FHR assessment in connection with an ST event.

  13. Event Classification and Identification Based on the Characteristic Ellipsoid of Phasor Measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Jian; Diao, Ruisheng; Makarov, Yuri V.

    2011-09-23

    In this paper, a method to classify and identify power system events based on the characteristic ellipsoid of phasor measurements is presented. The decision tree technique is used to perform the event classification and identification. Event types, event locations and clearance times are identified by decision trees based on the indices of the characteristic ellipsoid. A sufficiently large number of transient events was simulated on the New England 10-machine 39-bus system under different system configurations. Transient simulations accounting for different event types, clearance times and various locations were conducted to simulate phasor measurements. Bus voltage magnitudes and recorded reactive and active power flows are used to build the characteristic ellipsoid. The volume, eccentricity, center and projection of the longest axis in the parameter space coordinates of the characteristic ellipsoids are used to classify and identify events. Results demonstrate that the characteristic ellipsoid and the decision tree are capable of detecting the event type, location, and clearance time with very high accuracy.
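
    One way to picture the ellipsoid indices is as properties of the covariance ellipsoid of a sliding window of phasor measurements. The sketch below makes that illustrative assumption; the exact construction in the paper may differ:

        import numpy as np

        def ellipsoid_indices(window):
            """Indices of the covariance ellipsoid of one measurement window.

            window: (n_times, n_measurements) bus voltage magnitudes and
            active/reactive power flows from phasor measurements.
            """
            center = window.mean(axis=0)
            evals, evecs = np.linalg.eigh(np.cov(window, rowvar=False))
            semi_axes = np.sqrt(np.clip(evals, 0.0, None))
            volume = np.prod(semi_axes)                  # up to a dimensional constant
            ecc = np.sqrt(1.0 - (semi_axes.min() / (semi_axes.max() + 1e-12)) ** 2)
            longest_axis = evecs[:, np.argmax(evals)]    # projection direction
            return center, volume, ecc, longest_axis

        # These indices become the inputs of a decision tree classifier, e.g.
        # sklearn.tree.DecisionTreeClassifier, labeling event type/location/clearance.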

  14. Traffic sign classification with dataset augmentation and convolutional neural network

    NASA Astrophysics Data System (ADS)

    Tang, Qing; Kurnianggoro, Laksono; Jo, Kang-Hyun

    2018-04-01

    This paper presents a method for traffic sign classification using a convolutional neural network (CNN). In this method, we first convert the color image to grayscale and then normalize it to the range (-1, 1) as the preprocessing step. To increase the robustness of the classification model, we apply a dataset augmentation algorithm and create new images to train the model. To avoid overfitting, we utilize a dropout module before the last fully connected layer. To assess the performance of the proposed method, the German traffic sign recognition benchmark (GTSRB) dataset is utilized. Experimental results show that the method is effective in classifying traffic signs.
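
    The preprocessing step is simple to state precisely. A minimal sketch in NumPy/SciPy; the augmentation shown (small random rotations and shifts) is a plausible stand-in, as the record does not spell out the paper's augmentation algorithm:

        import numpy as np
        from scipy.ndimage import rotate, shift

        def preprocess(images):
            """Grayscale conversion and (-1, 1) normalization, as described above.

            images: (n, H, W, 3) uint8 RGB traffic-sign crops (hypothetical shape).
            """
            gray = images.astype(np.float32) @ np.array([0.299, 0.587, 0.114],
                                                        dtype=np.float32)
            return gray / 127.5 - 1.0                 # map [0, 255] into (-1, 1)

        def augment(image, rng):
            """One augmented copy: small random rotation and shift (illustrative)."""
            img = rotate(image, rng.uniform(-10.0, 10.0), reshape=False, mode="nearest")
            return shift(img, rng.uniform(-2.0, 2.0, size=2), mode="nearest")

        # rng = np.random.default_rng(0)
        # extra = [augment(im, rng) for im in preprocess(train_images)]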

  15. SNR-adaptive stream weighting for audio-MES ASR.

    PubMed

    Lee, Ki-Seung

    2008-08-01

    Myoelectric signals (MESs) from the speaker's mouth region have been shown to improve the noise robustness of automatic speech recognizers (ASRs), promising to extend their usability in noise-robust ASR. In the recognition system presented herein, extracted audio and facial MES features were integrated by a decision fusion method, where the likelihood score of the audio-MES observation vector was given by a linear combination of the class-conditional observation log-likelihoods of the two classifiers, using appropriate weights. We developed a weighting process adaptive to SNR. The main objective of the paper is to determine the optimal SNR classification boundaries and to construct a set of optimum stream weights for each SNR class; both parameters were determined by a method based on a maximum mutual information criterion. Acoustic and facial MES data were collected from five subjects using a 60-word vocabulary. Four types of acoustic noise, including babble, car, aircraft, and white noise, were acoustically added to clean speech signals at SNRs ranging from -14 to 31 dB. The classification accuracy of the audio-only ASR was as low as 25.5%, whereas that of the MES ASR was 85.2%. The classification accuracy could be further improved by employing the proposed audio-MES weighting method, reaching 89.4% in the case of babble noise. Similar results were also found for the other types of noise.
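
    The fusion rule itself is a one-liner once the SNR class is known. A minimal sketch; the boundary and weight values below are illustrative placeholders, since in the paper both are learned from data via the maximum mutual information criterion:

        import numpy as np

        def fused_scores(ll_audio, ll_mes, snr_db, boundaries, audio_weights):
            """SNR-adaptive decision fusion (sketch).

            ll_audio, ll_mes : per-class log-likelihoods from the two classifiers
            boundaries       : sorted SNR class boundaries in dB (illustrative)
            audio_weights    : audio-stream weight per SNR class (illustrative)
            """
            k = int(np.searchsorted(boundaries, snr_db))   # pick the SNR class
            w = audio_weights[k]
            return w * ll_audio + (1.0 - w) * ll_mes       # fused per-class scores

        # word = np.argmax(fused_scores(ll_audio, ll_mes, snr_db=3.0,
        #                               boundaries=[-5.0, 10.0],
        #                               audio_weights=[0.2, 0.5, 0.8]))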

  16. Using Knowledge Base for Event-Driven Scheduling of Web Monitoring Systems

    NASA Astrophysics Data System (ADS)

    Kim, Yang Sok; Kang, Sung Won; Kang, Byeong Ho; Compton, Paul

    Web monitoring systems report any changes to their target web pages by revisiting them frequently. As they operate under significant resource constraints, it is essential to minimize revisits while ensuring minimal delay and maximum coverage. Various statistical scheduling methods have been proposed to resolve this problem; however, they are static and cannot easily cope with events in the real world. This paper proposes a new scheduling method that manages unpredictable events. An MCRDR (Multiple Classification Ripple-Down Rules) document classification knowledge base was reused to detect events and to initiate a prompt web monitoring process independent of a static monitoring schedule. Our experiment demonstrates that the approach improves monitoring efficiency significantly.

  17. Single-trial classification of auditory event-related potentials elicited by stimuli from different spatial directions.

    PubMed

    Cabrera, Alvaro Fuentes; Hoffmann, Pablo Faundez

    2010-01-01

    This study focuses on the single-trial classification of auditory event-related potentials elicited by sound stimuli from different spatial directions. Five naïve subjects were asked to localize a sound stimulus reproduced over one of 8 loudspeakers placed in a circular array, equally spaced by 45°. The subject was seated in the center of the circular array. Due to the complexity of an eight-class classification, our approach consisted of feeding our classifier with two classes, or spatial directions, at a time. The seven chosen pairs paired 0°, the loudspeaker directly in front of the subject, with each of the other seven directions. The discrete wavelet transform was used to extract features in the time-frequency domain, and a support vector machine performed the classification. The average accuracy over all subjects and all pairs of spatial directions was 76.5% (σ = 3.6). The results of this study provide evidence that the direction of a sound is encoded in single-trial auditory event-related potentials.
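
    The time-frequency front end pairs naturally with a binary SVM. A minimal sketch using the PyWavelets package as a stand-in; the record does not name the wavelet family, so db4 and the decomposition level are assumptions:

        import numpy as np
        import pywt
        from sklearn.svm import SVC

        def dwt_features(trials, wavelet="db4", level=4):
            """Concatenated DWT coefficients as time-frequency features (sketch).

            trials: (n_trials, n_samples) single-channel ERP epochs.
            """
            return np.asarray([np.concatenate(pywt.wavedec(x, wavelet, level=level))
                               for x in trials])

        # Pairwise problem (0° vs. one other direction), as in the study:
        # clf = SVC(kernel="rbf").fit(dwt_features(X_train), y_train)
        # accuracy = clf.score(dwt_features(X_test), y_test)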

  18. Flying insect detection and classification with inexpensive sensors.

    PubMed

    Chen, Yanping; Why, Adena; Batista, Gustavo; Mafra-Neto, Agenor; Keogh, Eamonn

    2014-10-15

    An inexpensive, noninvasive system that could accurately classify flying insects would have important implications for entomological research, and would allow for the development of many useful applications in vector and pest control for both medical and agricultural entomology. Given this, the last sixty years have seen many research efforts devoted to this task. To date, however, none of this research has had a lasting impact. In this work, we show that pseudo-acoustic optical sensors can produce superior data; that additional features, both intrinsic and extrinsic to the insect's flight behavior, can be exploited to improve insect classification; that a Bayesian classification approach allows classification models to be learned efficiently that are very robust to over-fitting; and that a general classification framework easily accommodates an arbitrary number of features. We demonstrate these findings with large-scale experiments that dwarf all previous works combined, as measured by the number of insects and the number of species considered.

  19. New feature extraction method for classification of agricultural products from x-ray images

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.; Lee, Ha-Woon; Keagy, Pamela M.; Schatzki, Thomas F.

    1999-01-01

    Classification of real-time x-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items. This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k-nearest-neighbor classifier. In this work the MRDF is applied to standard features. The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC data.

  20. Flying Insect Detection and Classification with Inexpensive Sensors

    PubMed Central

    Chen, Yanping; Why, Adena; Batista, Gustavo; Mafra-Neto, Agenor; Keogh, Eamonn

    2014-01-01

    An inexpensive, noninvasive system that could accurately classify flying insects would have important implications for entomological research, and would allow for the development of many useful applications in vector and pest control for both medical and agricultural entomology. Given this, the last sixty years have seen many research efforts devoted to this task. To date, however, none of this research has had a lasting impact. In this work, we show that pseudo-acoustic optical sensors can produce superior data; that additional features, both intrinsic and extrinsic to the insect’s flight behavior, can be exploited to improve insect classification; that a Bayesian classification approach allows classification models to be learned efficiently that are very robust to over-fitting; and that a general classification framework easily accommodates an arbitrary number of features. We demonstrate these findings with large-scale experiments that dwarf all previous works combined, as measured by the number of insects and the number of species considered. PMID:25350921

  1. Accelerometer and Camera-Based Strategy for Improved Human Fall Detection.

    PubMed

    Zerrouki, Nabil; Harrou, Fouzi; Sun, Ying; Houacine, Amrane

    2016-12-01

    In this paper, we address the problem of detecting human falls using anomaly detection. Detection and classification of falls are based on accelerometric data and variations in human silhouette shape. First, we use the exponentially weighted moving average (EWMA) monitoring scheme to detect a potential fall in the accelerometric data. The EWMA is used to identify features that correspond to a particular type of fall, allowing us to classify falls; only features corresponding to detected falls were used in the classification phase. A benefit of using a subset of the original data to design the classification models is that it minimizes training time and simplifies the models. Based on the features corresponding to detected falls, we used the support vector machine (SVM) algorithm to distinguish between true falls and fall-like events. We apply this strategy to the publicly available fall detection databases from the University of Rzeszów. Results indicated that our strategy accurately detected and classified fall events, suggesting its potential application to early alert mechanisms in fall situations. Comparison of the classification results of the EWMA-based SVM classifier with those achieved using three commonly used machine learning classifiers (neural network, K-nearest neighbor, and naïve Bayes) proved our model superior.
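
    The EWMA stage can be written down in a few lines. A minimal sketch; the smoothing constant, the control-limit width, and the use of a global mean and standard deviation (in practice estimated from a calibration period) are illustrative assumptions:

        import numpy as np

        def ewma_alarms(signal, lam=0.2, L=3.0):
            """EWMA control chart over an accelerometric feature stream (sketch).

            Alarms fire when the EWMA statistic leaves the control limits
            mu +/- L * sigma * sqrt(lam / (2 - lam)).
            """
            mu, sigma = signal.mean(), signal.std()
            limit = L * sigma * np.sqrt(lam / (2.0 - lam))
            z = mu
            alarms = np.zeros(len(signal), dtype=bool)
            for t in range(1, len(signal)):
                z = lam * signal[t] + (1.0 - lam) * z
                alarms[t] = abs(z - mu) > limit      # candidate fall at sample t
            return alarms                            # passed on to the SVM stage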

  2. Feature selection and classification of multiparametric medical images using bagging and SVM

    NASA Astrophysics Data System (ADS)

    Fan, Yong; Resnick, Susan M.; Davatzikos, Christos

    2008-03-01

    This paper presents a framework for brain classification based on multi-parametric medical images. This method takes advantage of multi-parametric imaging to provide a set of discriminative features for classifier construction by using a regional feature extraction method which takes into account joint correlations among different image parameters; in the experiments herein, MRI and PET images of the brain are used. Support vector machine classifiers are then trained based on the most discriminative features selected from the feature set. To facilitate robust classification and optimal selection of parameters involved in classification, in view of the well-known "curse of dimensionality", base classifiers are constructed in a bagging (bootstrap aggregating) framework for building an ensemble classifier and the classification parameters of these base classifiers are optimized by means of maximizing the area under the ROC (receiver operating characteristic) curve estimated from their prediction performance on left-out samples of bootstrap sampling. This classification system is tested on a sex classification problem, where it yields over 90% classification rates for unseen subjects. The proposed classification method is also compared with other commonly used classification algorithms, with favorable results. These results illustrate that the methods built upon information jointly extracted from multi-parametric images have the potential to perform individual classification with high sensitivity and specificity.

  3. An objective and parsimonious approach for classifying natural flow regimes at a continental scale

    NASA Astrophysics Data System (ADS)

    Archfield, S. A.; Kennen, J.; Carlisle, D.; Wolock, D.

    2013-12-01

    Hydroecological stream classification--the process of grouping streams by similar hydrologic responses and, thereby, similar aquatic habitat--has been widely accepted and is often one of the first steps towards developing ecological flow targets. Despite its importance, the last national classification of streamgauges was completed about 20 years ago. A new classification of 1,534 streamgauges in the contiguous United States is presented using a novel and parsimonious approach to understand similarity in ecological streamflow response. This new classification approach uses seven fundamental daily streamflow statistics (FDSS) rather than winnowing down an uncorrelated subset from 200 or more ecologically relevant streamflow statistics (ERSS) commonly used in hydroecological classification studies. The results of this investigation demonstrate that the distributions of 33 tested ERSS are consistently different among the classes derived from the seven FDSS. It is further shown that classification based solely on the 33 ERSS generally does a poorer job in grouping similar streamgauges than the classification based on the seven FDSS. This new classification approach has the additional advantages of overcoming some of the subjectivity associated with the selection of the classification variables and provides a set of robust continental-scale classes of US streamgauges.

  4. Learning Robust and Discriminative Subspace With Low-Rank Constraints.

    PubMed

    Li, Sheng; Fu, Yun

    2016-11-01

    In this paper, we aim at learning robust and discriminative subspaces from noisy data. Subspace learning is widely used in extracting discriminative features for classification. However, when data are contaminated with severe noise, the performance of most existing subspace learning methods would be limited. Recent advances in low-rank modeling provide effective solutions for removing noise or outliers contained in sample sets, which motivates us to take advantage of low-rank constraints in order to exploit robust and discriminative subspace for classification. In particular, we present a discriminative subspace learning method called the supervised regularization-based robust subspace (SRRS) approach, by incorporating the low-rank constraint. SRRS seeks low-rank representations from the noisy data, and learns a discriminative subspace from the recovered clean data jointly. A supervised regularization function is designed to make use of the class label information, and therefore to enhance the discriminability of subspace. Our approach is formulated as a constrained rank-minimization problem. We design an inexact augmented Lagrange multiplier optimization algorithm to solve it. Unlike the existing sparse representation and low-rank learning methods, our approach learns a low-dimensional subspace from recovered data, and explicitly incorporates the supervised information. Our approach and some baselines are evaluated on the COIL-100, ALOI, Extended YaleB, FERET, AR, and KinFace databases. The experimental results demonstrate the effectiveness of our approach, especially when the data contain considerable noise or variations.

  5. The Effect of Normalization in Violence Video Classification Performance

    NASA Astrophysics Data System (ADS)

    Ali, Ashikin; Senan, Norhalina

    2017-08-01

    Data pre-processing is an important part of data mining, and normalization is a pre-processing stage for many types of problem, especially in video classification. Video classification is challenging because of heterogeneous content, large variations in video quality, and the complex semantic meanings of the concepts involved. A thorough pre-processing stage that includes normalization therefore aids the robustness of classification performance. The process scales all numeric variables into a certain range to make them more meaningful for the later phases of the data mining pipeline. This paper examines the effect of two normalization techniques, min-max normalization and Z-score, on the classification rate of violence video classification using a multi-layer perceptron (MLP) classifier. Using min-max normalization to the range [0,1], the accuracy is almost 98%; with min-max normalization to the range [-1,1] the accuracy is 59%, and with Z-score the accuracy is 50%.
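
    The two techniques compared above are standard transformations. A minimal NumPy sketch of both, including the [-1, 1] variant of min-max scaling:

        import numpy as np

        def min_max(x, lo=0.0, hi=1.0):
            """Min-max normalization into [lo, hi]; use lo=-1.0, hi=1.0 for [-1, 1]."""
            x = np.asarray(x, dtype=float)
            return lo + (x - x.min()) * (hi - lo) / (x.max() - x.min())

        def z_score(x):
            """Z-score standardization: zero mean, unit standard deviation."""
            x = np.asarray(x, dtype=float)
            return (x - x.mean()) / x.std()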

  6. Classifying environmentally significant urban land uses with satellite imagery.

    PubMed

    Park, Mi-Hyun; Stenstrom, Michael K

    2008-01-01

    We investigated Bayesian networks to classify urban land use from satellite imagery. Landsat Enhanced Thematic Mapper Plus (ETM(+)) images were used for the classification in two study areas: (1) Marina del Rey and its vicinity in the Santa Monica Bay Watershed, CA and (2) drainage basins adjacent to the Sweetwater Reservoir in San Diego, CA. Bayesian networks provided 80-95% classification accuracy for urban land use using four different classification systems. The classifications were robust with small training data sets with normal and reduced radiometric resolution. The networks needed only 5% of the total data (i.e., 1500 pixels) for sample size and only 5- or 6-bit information for accurate classification. The network explicitly showed the relationship among variables from its structure and was also capable of utilizing information from non-spectral data. The classification can be used to provide timely and inexpensive land use information over large areas for environmental purposes such as estimating stormwater pollutant loads.

  7. An efficient classification method based on principal component and sparse representation.

    PubMed

    Zhai, Lin; Fu, Shujun; Zhang, Caiming; Liu, Yunxian; Wang, Lu; Liu, Guohua; Yang, Mingqiang

    2016-01-01

    As an important application in optical imaging, palmprint recognition is affected by many unfavorable factors. An effective fusion of blockwise bi-directional two-dimensional principal component analysis and grouping sparse classification is presented. Dimension reduction and normalization are implemented by the blockwise bi-directional two-dimensional principal component analysis of palmprint images to extract feature matrices, which are assembled into an overcomplete dictionary for sparse classification. A subspace orthogonal matching pursuit algorithm is designed to solve the grouping sparse representation. Finally, the classification result is obtained by comparing the residuals between testing and reconstructed images. Experiments are carried out on a palmprint database, and the results show that this method is more robust against position and illumination changes of palmprint images and achieves a higher palmprint recognition rate.

  8. Epigenetic Biomarker to Support Classification into Pluripotent and Non-Pluripotent Cells

    NASA Astrophysics Data System (ADS)

    Lenz, Michael; Goetzke, Roman; Schenk, Arne; Schubert, Claudia; Veeck, Jürgen; Hemeda, Hatim; Koschmieder, Steffen; Zenke, Martin; Schuppert, Andreas; Wagner, Wolfgang

    2015-03-01

    Quality control of human induced pluripotent stem cells (iPSCs) can be performed by several methods. These methods are usually relatively labor-intensive, difficult to standardize, or they do not facilitate reliable quantification. Here, we describe a biomarker to distinguish between pluripotent and non-pluripotent cells based on DNA methylation (DNAm) levels at only three specific CpG sites. Two of these CpG sites were selected by their discriminatory power in 258 DNAm profiles - they were either methylated in pluripotent or non-pluripotent cells. The difference between these two β-values provides an Epi-Pluri-Score that was validated on independent DNAm-datasets (264 pluripotent and 1,951 non-pluripotent samples) with 99.9% specificity and 98.9% sensitivity. This score was complemented by a third CpG within the gene POU5F1 (OCT4), which better demarcates early differentiation events. We established pyrosequencing assays for the three relevant CpG sites and thereby correctly classified DNA of 12 pluripotent cell lines and 31 non-pluripotent cell lines. Furthermore, DNAm changes at these three CpGs were tracked in the course of differentiation of iPSCs towards mesenchymal stromal cells. The Epi-Pluri-Score does not give information on lineage-specific differentiation potential, but it provides a simple, reliable, and robust biomarker to support high-throughput classification into either pluripotent or non-pluripotent cells.
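
    Because the score is just the difference between two β-values, the classifier reduces to a few lines. A minimal sketch; the sign-based decision rule is an illustrative reading of the description above, not the published cutoff:

        def epi_pluri_score(beta_pluri_cpg, beta_non_pluri_cpg):
            """Epi-Pluri-Score (sketch): difference between the DNA-methylation
            beta-values at the two discriminatory CpG sites."""
            return beta_pluri_cpg - beta_non_pluri_cpg

        # A positive score suggests a pluripotent sample and a negative score a
        # non-pluripotent one (illustrative sign rule); the third CpG in POU5F1
        # (OCT4) then flags early differentiation events.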

  9. Robust Feature Selection Technique using Rank Aggregation.

    PubMed

    Sarkar, Chandrima; Cooley, Sarah; Srivastava, Jaideep

    2014-01-01

    Although feature selection is a well-developed research area, there is an ongoing need to develop methods to make classifiers more efficient. One important challenge is the lack of a universal feature selection technique that produces similar outcomes with all types of classifiers. This is because all feature selection techniques have individual statistical biases, while classifiers exploit different statistical properties of data for evaluation. In numerous situations this can put researchers into a dilemma as to which feature selection method and classifier to choose from a vast range of options. In this paper, we propose a technique that aggregates the consensus properties of various feature selection methods to develop a more optimal solution. The ensemble nature of our technique makes it more robust across various classifiers; in other words, it is stable towards achieving similar and ideally higher classification accuracy across a wide variety of classifiers. We quantify this concept of robustness with a measure known as the Robustness Index (RI). We perform an extensive empirical evaluation of our technique on eight data sets with different dimensions, including Arrythmia, Lung Cancer, Madelon, mfeat-fourier, internet-ads, Leukemia-3c and Embryonal Tumor, and a real-world data set, namely Acute Myeloid Leukemia (AML). We demonstrate not only that our algorithm is more robust, but also that, compared to other techniques, it improves classification accuracy by approximately 3-4% (on data sets with fewer than 500 features) and by more than 5% (on data sets with more than 500 features), across a wide range of classifiers.
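
    Rank aggregation can be illustrated with the simplest consensus rule, a Borda-style sum of ranks; the paper's aggregation scheme may differ, so treat this as a sketch:

        import numpy as np

        def borda_aggregate(rankings):
            """Consensus feature ordering by summed ranks (Borda-style sketch).

            rankings: (n_methods, n_features) array; rankings[m, f] is the rank
            of feature f under selection method m (0 = best).
            """
            return np.argsort(rankings.sum(axis=0))   # lower total rank = better

        # Three hypothetical selectors ranking five features:
        # ranks = np.array([[0, 1, 2, 3, 4],
        #                   [1, 0, 3, 2, 4],
        #                   [0, 2, 1, 4, 3]])
        # borda_aggregate(ranks)  # -> consensus order of feature indices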

  10. Automatic classification of apnea/hypopnea events through sleep/wake states and severity of SDB from a pulse oximeter.

    PubMed

    Park, Jong-Uk; Lee, Hyo-Ki; Lee, Junghun; Urtnasan, Erdenebayar; Kim, Hojoong; Lee, Kyoung-Joung

    2015-09-01

    This study proposes a method of automatically classifying sleep apnea/hypopnea events based on sleep states and the severity of sleep-disordered breathing (SDB) using photoplethysmogram (PPG) and oxygen saturation (SpO2) signals acquired from a pulse oximeter. The PPG was used to classify sleep state, while the severity of SDB was estimated by detecting events of SpO2 oxygen desaturation. Furthermore, we classified sleep apnea/hypopnea events by applying different categorisations according to the severity of SDB based on a support vector machine. The classification results showed sensitivity performances and positivity predictive values of 74.2% and 87.5% for apnea, 87.5% and 63.4% for hypopnea, and 92.4% and 92.8% for apnea + hypopnea, respectively. These results represent better or comparable outcomes compared to those of previous studies. In addition, our classification method reliably detected sleep apnea/hypopnea events in all patient groups without bias in particular patient groups when our algorithm was applied to a variety of patient groups. Therefore, this method has the potential to diagnose SDB more reliably and conveniently using a pulse oximeter.

  11. Spacecraft Robustness to Orbital Debris: Guidelines & Recommendations

    NASA Astrophysics Data System (ADS)

    Heinrich, S.; Legloire, D.; Tromba, A.; Tholot, M.; Nold, O.

    2013-09-01

    The ever-increasing number of orbital debris has already led the space community to implement guidelines and requirements for "cleaner" and "safer" space operations, such as non-debris-generating missions and end-of-mission disposal, in order to keep the useful orbits free of space junk. It is nowadays well known that man-made orbital debris impacts are a greater threat than natural micro-meteoroids, and that recent events have intentionally or accidentally generated so many new debris that they may initiate a cascading chain effect, known as "the Kessler Syndrome", potentially jeopardizing the useful orbits. The main recommendation on satellite design is to demonstrate an acceptable Probability of Non-Penetration (PNP) with regard to the small (<5 cm) population of MMOD (Micro-Meteoroids and Orbital Debris). Compliance implies thinking about spacecraft robustness in terms of redundancies, segregations and shielding devices (as implemented in crewed missions, but within a more complex mass - cost - criticality trade-off). Consequently, the need is not only to demonstrate compliance with the PNP requirement, but also to establish the PNF (Probability of Non-Failure) per impact location on all parts of the vehicle and to investigate the probabilities of the different fatal scenarios: loss of mission, loss of spacecraft (critical for the space environment) and spacecraft fragmentation (catastrophic for the space environment). The recent THALES experience on ESA Sentinel-3 of an increasing need for robustness has led the ALTRAN company to initiate an internal innovative working group on these topics, whose conclusions may be attractive to its prime-manufacturer customers. The intention of this paper is to present the status of this study: (1) regulations, requirements and tools available; (2) detailed FMECA studies dedicated specifically to MMOD risks, with the introduction of new probability and criticality classification scales; (3) examples of design risk assessment with regard to the specific MMOD impact risks; (4) lessons learnt on the robustness and survivability of systems (materials, shieldings, rules) from other industrial domains (automotive, military vehicles); and (5) guidelines and recommendations implementable on satellite systems and mechanical architecture.

  12. Unsupervised learning of discriminative edge measures for vehicle matching between nonoverlapping cameras.

    PubMed

    Shan, Ying; Sawhney, Harpreet S; Kumar, Rakesh

    2008-04-01

    This paper proposes a novel unsupervised algorithm for learning discriminative features in the context of matching road vehicles between two non-overlapping cameras. The matching problem is formulated as a same-different classification problem, which aims to compute the probability of vehicle images from two distinct cameras being from the same vehicle or different vehicle(s). We employ a novel measurement vector that consists of three independent edge-based measures and their associated robust measures computed from a pair of aligned vehicle edge maps. The weight of each measure is determined by an unsupervised learning algorithm that optimally separates the same-different classes in the combined measurement space. This is achieved with a weak classification algorithm that automatically collects representative samples from the same-different classes, followed by a more discriminative classifier based on Fisher's Linear Discriminants and Gibbs sampling. The robustness of the match measures and the use of unsupervised discriminant analysis in the classification ensure that the proposed method performs consistently in the presence of missing/false features, temporally and spatially changing illumination conditions, and systematic misalignment caused by different camera configurations. Extensive experiments based on real data of over 200 vehicles at different times of day demonstrate promising results.

  13. Update on the integrated histopathological and genetic classification of medulloblastoma – a practical diagnostic guideline

    PubMed Central

    Pietsch, Torsten; Haberler, Christine

    2016-01-01

    The revised WHO classification of tumors of the CNS 2016 has introduced the concept of the integrated diagnosis. The definition of medulloblastoma entities now requires a combination of the traditional histological information with additional molecular/genetic features. For definition of the histopathological component of the medulloblastoma diagnosis, the tumors should be assigned to one of the four entities classic, desmoplastic/nodular (DNMB), extensive nodular (MBEN), or large cell/anaplastic (LC/A) medulloblastoma. The genetically defined component comprises the four entities WNT-activated, SHH-activated and TP53 wildtype, SHH-activated and TP53 mutant, or non-WNT/non-SHH medulloblastoma. Robust and validated methods are available to allow a precise diagnosis of these medulloblastoma entities according to the updated WHO classification, and for differential diagnostic purposes. A combination of immunohistochemical markers including β-catenin, Yap1, p75-NGFR, Otx2, and p53, in combination with targeted sequencing and copy number assessment such as FISH analysis for MYC genes allows a precise assignment of patients for risk-adapted stratification. It also allows comparison to results of study cohorts in the past and provides a robust basis for further treatment refinement. PMID:27781424

  14. Update on the integrated histopathological and genetic classification of medulloblastoma - a practical diagnostic guideline.

    PubMed

    Pietsch, Torsten; Haberler, Christine

    The revised WHO classification of tumors of the CNS 2016 has introduced the concept of the integrated diagnosis. The definition of medulloblastoma entities now requires a combination of the traditional histological information with additional molecular/genetic features. For definition of the histopathological component of the medulloblastoma diagnosis, the tumors should be assigned to one of the four entities classic, desmoplastic/nodular (DNMB), extensive nodular (MBEN), or large cell/anaplastic (LC/A) medulloblastoma. The genetically defined component comprises the four entities WNT-activated, SHH-activated and TP53 wildtype, SHH-activated and TP53 mutant, or non-WNT/non-SHH medulloblastoma. Robust and validated methods are available to allow a precise diagnosis of these medulloblastoma entities according to the updated WHO classification, and for differential diagnostic purposes. A combination of immunohistochemical markers including β-catenin, Yap1, p75-NGFR, Otx2, and p53, in combination with targeted sequencing and copy number assessment such as FISH analysis for MYC genes allows a precise assignment of patients for risk-adapted stratification. It also allows comparison to results of study cohorts in the past and provides a robust basis for further treatment refinement.

  15. Designing a robust activity recognition framework for health and exergaming using wearable sensors.

    PubMed

    Alshurafa, Nabil; Xu, Wenyao; Liu, Jason J; Huang, Ming-Chun; Mortazavi, Bobak; Roberts, Christian K; Sarrafzadeh, Majid

    2014-09-01

    Detecting human activity independent of intensity is essential in many applications, primarily in calculating metabolic equivalent rates and extracting human context awareness. Many classifiers that train on an activity at a subset of intensity levels fail to recognize the same activity at other intensity levels. This demonstrates weakness in the underlying classification method. Training a classifier for an activity at every intensity level is also not practical. In this paper, we tackle a novel intensity-independent activity recognition problem where the class labels exhibit large variability, the data are of high dimensionality, and clustering algorithms are necessary. We propose a new robust stochastic approximation framework for enhanced classification of such data. Experiments are reported using two clustering techniques, K-Means and Gaussian Mixture Models. The stochastic approximation algorithm consistently outperforms other well-known classification schemes which validate the use of our proposed clustered data representation. We verify the motivation of our framework in two applications that benefit from intensity-independent activity recognition. The first application shows how our framework can be used to enhance energy expenditure calculations. The second application is a novel exergaming environment aimed at using games to reward physical activity performed throughout the day, to encourage a healthy lifestyle.

  16. Transfer Learning for Class Imbalance Problems with Inadequate Data.

    PubMed

    Al-Stouhi, Samir; Reddy, Chandan K

    2016-07-01

    A fundamental problem in data mining is to effectively build robust classifiers in the presence of skewed data distributions. Class imbalance classifiers are trained specifically for skewed distribution datasets. Existing methods assume an ample supply of training examples as a fundamental prerequisite for constructing an effective classifier. However, when sufficient data is not readily available, the development of a representative classification algorithm becomes even more difficult due to the unequal distribution between classes. We provide a unified framework that will potentially take advantage of auxiliary data using a transfer learning mechanism and simultaneously build a robust classifier to tackle this imbalance issue in the presence of few training samples in a particular target domain of interest. Transfer learning methods use auxiliary data to augment learning when training examples are not sufficient and in this paper we will develop a method that is optimized to simultaneously augment the training data and induce balance into skewed datasets. We propose a novel boosting based instance-transfer classifier with a label-dependent update mechanism that simultaneously compensates for class imbalance and incorporates samples from an auxiliary domain to improve classification. We provide theoretical and empirical validation of our method and apply to healthcare and text classification applications.

  17. Classification of Traffic Related Short Texts to Analyse Road Problems in Urban Areas

    NASA Astrophysics Data System (ADS)

    Saldana-Perez, A. M. M.; Moreno-Ibarra, M.; Tores-Ruiz, M.

    2017-09-01

    Volunteer Geographic Information (VGI) can be used to understand urban dynamics. Here, a VGI data analysis is performed over social media publications in order to classify traffic events in big cities that modify the movement of vehicles and people through the roads, such as car accidents, heavy traffic, and closures. The classification of traffic events described in short texts is done by applying a supervised machine learning algorithm. In this approach, users are considered as sensors that describe their surroundings and provide their geographic position on the social network. The posts are treated by a text mining process and classified into five groups. Finally, the classified events are grouped in a data corpus and geo-visualized in the study area to detect the places with the most vehicular problems.
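
    A short-text classifier of this kind is commonly built as a text-mining pipeline of TF-IDF features and a supervised model. A minimal sketch; the record does not name the paper's exact algorithm, so logistic regression and the toy posts and labels below are stand-ins:

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Toy posts and labels standing in for the five event groups of the study.
        texts = ["accident blocking two lanes at the interchange",
                 "heavy traffic on the ring road this morning",
                 "street closed downtown for a parade"]
        labels = ["accident", "congestion", "closure"]

        clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                            LogisticRegression(max_iter=1000))
        clf.fit(texts, labels)
        print(clf.predict(["crash near the stadium exit"]))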

  18. The Australian experience in dental classification.

    PubMed

    Mahoney, Greg

    2008-01-01

    The Australian Defence Health Service uses a disease-risk management strategy to achieve two goals: first, to identify Australian Defence Force (ADF) members who are at high risk of developing an adverse health event, and second, to deliver intervention strategies efficiently so that maximum benefits for health within the ADF are achieved with the least cost. The present dental classification system utilized by the ADF, while an excellent dental triage tool, has been found not to be predictive of an ADF member having an adverse dental event in the following 12-month period. Clearly, there is a need for further research to establish a predictive risk-based dental classification system. This risk assessment must be sensitive enough to accurately estimate the probability that an ADF member will experience dental pain, dysfunction, or other adverse dental events within a forthcoming period, typically 12 months. Furthermore, there needs to be better epidemiological data collected in the field to assist in the research.

  19. TwitterSensing: An Event-Based Approach for Wireless Sensor Networks Optimization Exploiting Social Media in Smart City Applications

    PubMed Central

    2018-01-01

    Modern cities are subject to periodic or unexpected critical events, which may bring economic losses or even put people in danger. When some monitoring systems based on wireless sensor networks are deployed, sensing and transmission configurations of sensor nodes may be adjusted exploiting the relevance of the considered events, but efficient detection and classification of events of interest may be hard to achieve. In Smart City environments, several people spontaneously post information in social media about some event that is being observed and such information may be mined and processed for detection and classification of critical events. This article proposes an integrated approach to detect and classify events of interest posted in social media, notably in Twitter, and the assignment of sensing priorities to source nodes. By doing so, wireless sensor networks deployed in Smart City scenarios can be optimized for higher efficiency when monitoring areas under the influence of the detected events. PMID:29614060

  20. TwitterSensing: An Event-Based Approach for Wireless Sensor Networks Optimization Exploiting Social Media in Smart City Applications.

    PubMed

    Costa, Daniel G; Duran-Faundez, Cristian; Andrade, Daniel C; Rocha-Junior, João B; Peixoto, João Paulo Just

    2018-04-03

    Modern cities are subject to periodic or unexpected critical events, which may bring economic losses or even put people in danger. When some monitoring systems based on wireless sensor networks are deployed, sensing and transmission configurations of sensor nodes may be adjusted exploiting the relevance of the considered events, but efficient detection and classification of events of interest may be hard to achieve. In Smart City environments, several people spontaneously post information in social media about some event that is being observed and such information may be mined and processed for detection and classification of critical events. This article proposes an integrated approach to detect and classify events of interest posted in social media, notably in Twitter, and the assignment of sensing priorities to source nodes. By doing so, wireless sensor networks deployed in Smart City scenarios can be optimized for higher efficiency when monitoring areas under the influence of the detected events.
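
    A hedged sketch of the final stage described above: classified critical events are mapped to sensing priorities for sensor nodes inside each event's influence area. The event classes, priority values, and the crude distance rule are illustrative assumptions, not the paper's optimization scheme.

      import math

      PRIORITY = {"fire": 3, "flood": 3, "protest": 2, "traffic": 1}

      def assign_priorities(nodes, events, radius_km=2.0):
          """nodes: {id: (lat, lon)}; events: [(class, lat, lon)] from the classifier."""
          rates = {nid: 0 for nid in nodes}          # 0 = default sensing rate
          for cls, elat, elon in events:
              for nid, (nlat, nlon) in nodes.items():
                  d = math.hypot(elat - nlat, elon - nlon) * 111.0  # rough deg -> km
                  if d <= radius_km:
                      rates[nid] = max(rates[nid], PRIORITY.get(cls, 0))
          return rates

      nodes = {"n1": (40.00, -8.00), "n2": (40.01, -8.01), "n3": (41.00, -8.50)}
      print(assign_priorities(nodes, [("fire", 40.0, -8.0)]))   # n1 and n2 raised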

  1. Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines.

    PubMed

    Neftci, Emre O; Pedroni, Bruno U; Joshi, Siddharth; Al-Shedivat, Maruan; Cauwenberghs, Gert

    2016-01-01

    Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines (S2Ms), a class of neural network models that uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling, and a regularizer during learning akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an on-line fashion. S2Ms perform equally well using discrete-timed artificial units (as in Hopfield networks) or continuous-timed leaky integrate and fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and synapse pruning: removal of more than 75% of the weakest connections followed by cursory re-learning causes a negligible performance loss on benchmark classification tasks. The spiking neuron-based S2Ms outperform existing spike-based unsupervised learners, while potentially offering substantial advantages in terms of power and complexity, and are thus promising models for on-line learning in brain-inspired hardware.

  2. Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines

    PubMed Central

    Neftci, Emre O.; Pedroni, Bruno U.; Joshi, Siddharth; Al-Shedivat, Maruan; Cauwenberghs, Gert

    2016-01-01

    Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines (S2Ms), a class of neural network models that uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling, and a regularizer during learning akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an on-line fashion. S2Ms perform equally well using discrete-timed artificial units (as in Hopfield networks) or continuous-timed leaky integrate and fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and synapse pruning: removal of more than 75% of the weakest connections followed by cursory re-learning causes a negligible performance loss on benchmark classification tasks. The spiking neuron-based S2Ms outperform existing spike-based unsupervised learners, while potentially offering substantial advantages in terms of power and complexity, and are thus promising models for on-line learning in brain-inspired hardware. PMID:27445650
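
    A minimal sketch of the core mechanism with rate-based sigmoid units: each forward pass draws a fresh Bernoulli mask over the weights (DropConnect-like), so repeated passes differ and the unreliability itself performs Monte Carlo sampling. The sizes and the transmission probability p are illustrative assumptions, not the spiking S2M model.

      import numpy as np

      rng = np.random.default_rng(0)
      W = rng.normal(scale=0.1, size=(784, 200))   # visible-to-hidden weights
      p = 0.5                                      # synaptic transmission probability

      def stochastic_forward(x, W, p):
          mask = rng.random(W.shape) < p           # per-synapse Bernoulli gating
          return 1.0 / (1.0 + np.exp(-x @ (mask * W) / p))  # rescaled sigmoid units

      x = rng.random(784)
      h1 = stochastic_forward(x, W, p)
      h2 = stochastic_forward(x, W, p)
      print(np.abs(h1 - h2).mean())                # repeated passes differ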

  3. Skyalert: a Platform for Event Understanding and Dissemination

    NASA Astrophysics Data System (ADS)

    Williams, Roy; Drake, A. J.; Djorgovski, S. G.; Donalek, C.; Graham, M. J.; Mahabal, A.

    2010-01-01

    Skyalert.org is an event repository, web interface, and event-oriented workflow architecture that can be used in many different ways for handling astronomical events encoded as VOEvent. It can be used as a remote application (events in the cloud) or installed locally. Some applications are: dissemination of events with sophisticated discrimination (trigger), using email, instant message, RSS, twitter, etc.; an authoring interface for survey-generated events, follow-up observations, and other event types; event streams that can be put into the skyalert.org repository, either public or private, or into a local installation of Skyalert; event-driven software components to fetch archival data, for data-mining and classification of events; a human interface to events through wiki, comments, and circulars; use of the "notices and circulars" model, where machines make the notices in real time and people write the interpretation later; building trusted, automated decisions for automated follow-up observation, and the information infrastructure for automated follow-up with DC3 and HTN telescope schedulers; citizen science projects such as artifact detection and classification; query capability for past events, including correlations between different streams and correlations with existing source catalogs; and event metadata structures and connection to the global registry of the virtual observatory.

  4. Yarn-dyed fabric defect classification based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Jing, Junfeng; Dong, Amei; Li, Pengfei; Zhang, Kaibing

    2017-09-01

    Considering that manual inspection of yarn-dyed fabric can be time consuming and inefficient, we propose a yarn-dyed fabric defect classification method using a convolutional neural network (CNN) based on a modified AlexNet. A CNN shows a powerful ability to perform feature extraction and fusion by simulating the learning mechanism of the human brain. The local response normalization layers in AlexNet are replaced by batch normalization layers, which enhances both computational efficiency and classification accuracy. During training, the characteristics of the defect are extracted step by step, and the essential features of the image are obtained from the fusion of edge details over several convolution operations. Max-pooling layers, dropout layers, and fully connected layers are then employed in the classification model to reduce computation cost and extract more precise features of the defective fabric. Finally, the defect class is predicted by the softmax function. The experimental results show promising performance, with an acceptable average classification rate and strong robustness on yarn-dyed fabric defect classification.
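
    A hedged sketch of the modification described above: an AlexNet-style network whose local response normalization layers are replaced with batch normalization, written in PyTorch. Channel counts follow AlexNet; the five-class head and the 227x227 input size are assumptions, not the paper's exact configuration.

      import torch.nn as nn

      class FabricDefectNet(nn.Module):
          def __init__(self, num_classes=5):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.BatchNorm2d(96),
                  nn.ReLU(inplace=True), nn.MaxPool2d(3, stride=2),
                  nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.BatchNorm2d(256),
                  nn.ReLU(inplace=True), nn.MaxPool2d(3, stride=2),
                  nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
                  nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
                  nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
                  nn.MaxPool2d(3, stride=2),
              )
              self.classifier = nn.Sequential(
                  nn.Dropout(), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),
                  nn.Dropout(), nn.Linear(4096, 4096), nn.ReLU(inplace=True),
                  nn.Linear(4096, num_classes),   # softmax applied via the loss
              )

          def forward(self, x):                   # x: (N, 3, 227, 227)
              x = self.features(x)
              return self.classifier(x.flatten(1))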

  5. Multi-spectral brain tissue segmentation using automatically trained k-Nearest-Neighbor classification.

    PubMed

    Vrooman, Henri A; Cocosco, Chris A; van der Lijn, Fedde; Stokking, Rik; Ikram, M Arfan; Vernooij, Meike W; Breteler, Monique M B; Niessen, Wiro J

    2007-08-01

    Conventional k-Nearest-Neighbor (kNN) classification, which has been successfully applied to classify brain tissue in MR data, requires training on manually labeled subjects. This manual labeling is a laborious and time-consuming procedure. In this work, a new fully automated brain tissue classification procedure is presented, in which kNN training is automated. This is achieved by non-rigidly registering the MR data with a tissue probability atlas to automatically select training samples, followed by a post-processing step to keep the most reliable samples. The accuracy of the new method was compared to rigid registration-based training and to conventional kNN-based segmentation using training on manually labeled subjects for segmenting gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) in 12 data sets. Furthermore, for all classification methods, the performance was assessed when varying the free parameters. Finally, the robustness of the fully automated procedure was evaluated on 59 subjects. The automated training method using non-rigid registration with a tissue probability atlas was significantly more accurate than rigid registration. For both automated training using non-rigid registration and for the manually trained kNN classifier, the difference with the manual labeling by observers was not significantly larger than inter-observer variability for all tissue types. From the robustness study, it was clear that, given an appropriate brain atlas and optimal parameters, our new fully automated, non-rigid registration-based method gives accurate and robust segmentation results. A similarity index was used for comparison with manually trained kNN. The similarity indices were 0.93, 0.92 and 0.92, for CSF, GM and WM, respectively. It can be concluded that our fully automated method using non-rigid registration may replace manual segmentation, and thus that automated brain tissue segmentation without laborious manual training is feasible.
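
    A minimal sketch of the automated-training idea, with synthetic stand-ins for registered data: training voxels are drawn where a registered tissue probability atlas is most confident, then a kNN classifier labels every voxel. The 0.9 confidence cut and k = 15 are illustrative, not the paper's tuned values.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      rng = np.random.default_rng(1)
      intensities = rng.random((10000, 2))         # e.g., T1 and PD values per voxel
      atlas_prob = rng.dirichlet([1, 1, 1], 10000) # P(CSF), P(GM), P(WM) after registration

      # Keep only the most reliable training samples: voxels where one tissue dominates.
      confident = atlas_prob.max(axis=1) > 0.9
      X_train = intensities[confident]
      y_train = atlas_prob[confident].argmax(axis=1)

      knn = KNeighborsClassifier(n_neighbors=15).fit(X_train, y_train)
      segmentation = knn.predict(intensities)      # label every voxel: 0=CSF, 1=GM, 2=WM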

  6. Certified normal: Alzheimer's disease biomarkers and normative estimates of cognitive functioning.

    PubMed

    Hassenstab, Jason; Chasse, Rachel; Grabow, Perri; Benzinger, Tammie L S; Fagan, Anne M; Xiong, Chengjie; Jasielec, Mateusz; Grant, Elizabeth; Morris, John C

    2016-07-01

    Normative samples drawn from older populations may unintentionally include individuals with preclinical Alzheimer's disease (AD) pathology, resulting in reduced means, increased variability, and overestimation of age effects on cognitive performance. A total of 264 cognitively normal (Clinical Dementia Rating = 0) older adults were classified as biomarker negative ("Robust Normal," n = 177) or biomarker positive ("Preclinical Alzheimer's Disease" [PCAD], n = 87) based on amyloid imaging, cerebrospinal fluid biomarkers, and hippocampal volumes. PCAD participants performed worse than robust normals on nearly all cognitive measures. Removing PCAD participants from the normative sample yielded higher means and less variability on episodic memory, visuospatial ability, and executive functioning measures. These results were more pronounced in participants aged 75 years and older. Notably, removing PCAD participants from the sample significantly reduced age effects across all cognitive domains. Applying norms from the robust normal sample to a separate cohort did not improve Clinical Dementia Rating classification when using standard deviation cutoff scores. Overall, removing individuals with biomarker evidence of preclinical AD improves normative sample quality and substantially reduces age effects on cognitive performance but provides no substantive benefit for diagnostic classifications. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. A robust sparse-modeling framework for estimating schizophrenia biomarkers from fMRI.

    PubMed

    Dillon, Keith; Calhoun, Vince; Wang, Yu-Ping

    2017-01-30

    Our goal is to identify the brain regions most relevant to mental illness using neuroimaging. State of the art machine learning methods commonly suffer from repeatability difficulties in this application, particularly when using large and heterogeneous populations for samples. We revisit both dimensionality reduction and sparse modeling, and recast them in a common optimization-based framework. This allows us to combine the benefits of both types of methods in an approach which we call unambiguous components. We use this to estimate the image component with a constrained variability, which is best correlated with the unknown disease mechanism. We apply the method to the estimation of neuroimaging biomarkers for schizophrenia, using task fMRI data from a large multi-site study. The proposed approach yields an improvement in both robustness of the estimate and classification accuracy. We find that unambiguous components incorporate roughly two thirds of the same brain regions as sparsity-based methods LASSO and elastic net, while roughly one third of the selected regions differ. Further, unambiguous components achieve superior classification accuracy in differentiating cases from controls. Unambiguous components provide a robust way to estimate important regions of imaging data. Copyright © 2016 Elsevier B.V. All rights reserved.
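
    A hedged sketch of the sparsity-based methods the paper compares against: an elastic net logistic regression whose non-zero weights select brain regions, which can be refit on resamples to gauge repeatability. The fMRI feature matrix is synthetic; this is not the unambiguous-components method itself.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 100))              # subjects x region features
      y = rng.integers(0, 2, size=300)             # case/control labels

      enet = LogisticRegression(penalty="elasticnet", solver="saga",
                                l1_ratio=0.5, C=0.1, max_iter=5000).fit(X, y)
      selected = np.flatnonzero(enet.coef_.ravel())
      print(len(selected), "regions selected")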

  8. Constrained Low-Rank Learning Using Least Squares-Based Regularization.

    PubMed

    Li, Ping; Yu, Jun; Wang, Meng; Zhang, Luming; Cai, Deng; Li, Xuelong

    2017-12-01

    Low-rank learning has attracted much attention recently due to its efficacy in a rich variety of real-world tasks, e.g., subspace segmentation and image categorization. Most low-rank methods are incapable of capturing low-dimensional subspace for supervised learning tasks, e.g., classification and regression. This paper aims to learn both the discriminant low-rank representation (LRR) and the robust projecting subspace in a supervised manner. To achieve this goal, we cast the problem into a constrained rank minimization framework by adopting the least squares regularization. Naturally, the data label structure tends to resemble that of the corresponding low-dimensional representation, which is derived from the robust subspace projection of clean data by low-rank learning. Moreover, the low-dimensional representation of original data can be paired with some informative structure by imposing an appropriate constraint, e.g., Laplacian regularizer. Therefore, we propose a novel constrained LRR method. The objective function is formulated as a constrained nuclear norm minimization problem, which can be solved by the inexact augmented Lagrange multiplier algorithm. Extensive experiments on image classification, human pose estimation, and robust face recovery have confirmed the superiority of our method.
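
    The abstract names the optimization but not its exact form; a generic constrained nuclear norm program consistent with that description (the symbols below are assumptions, not the paper's notation) is

      \min_{Z,\,P}\ \|Z\|_{*} \;+\; \frac{\lambda}{2}\,\|Y - W Z\|_{F}^{2}
      \quad \text{s.t.} \quad P^{\top}X = P^{\top}X\,Z

    where \|Z\|_{*} (the nuclear norm, i.e., the sum of singular values of Z) is the convex surrogate for rank, the least-squares term fits the label matrix Y, and the constraint ties the low-rank representation Z to the projected data P^{\top}X. As the abstract notes, programs of this shape are typically solved with the inexact augmented Lagrange multiplier algorithm.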

  9. Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines

    PubMed Central

    Neftci, Emre O.; Augustine, Charles; Paul, Somnath; Detorakis, Georgios

    2017-01-01

    An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient-descent-based Back Propagation (BP) rule, often relies on the immediate availability of network-wide information stored with high-precision memory during learning, and on precise operations that are difficult to realize in neuromorphic hardware. Remarkably, recent work showed that exact backpropagated gradients are not essential for learning deep representations. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses an error-modulated synaptic plasticity for learning deep representations. Using a two-compartment Leaky Integrate & Fire (I&F) neuron, the rule requires only one addition and two comparisons for each synaptic weight, making it very suitable for implementation in digital or mixed-signal neuromorphic hardware. Our results show that using eRBP, deep representations are rapidly learned, achieving classification accuracies on permutation-invariant datasets comparable to those obtained in artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning. PMID:28680387

  10. Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines.

    PubMed

    Neftci, Emre O; Augustine, Charles; Paul, Somnath; Detorakis, Georgios

    2017-01-01

    An ongoing challenge in neuromorphic computing is to devise general and computationally efficient models of inference and learning which are compatible with the spatial and temporal constraints of the brain. One increasingly popular and successful approach is to take inspiration from inference and learning algorithms used in deep neural networks. However, the workhorse of deep learning, the gradient-descent-based Back Propagation (BP) rule, often relies on the immediate availability of network-wide information stored with high-precision memory during learning, and on precise operations that are difficult to realize in neuromorphic hardware. Remarkably, recent work showed that exact backpropagated gradients are not essential for learning deep representations. Building on these results, we demonstrate an event-driven random BP (eRBP) rule that uses an error-modulated synaptic plasticity for learning deep representations. Using a two-compartment Leaky Integrate & Fire (I&F) neuron, the rule requires only one addition and two comparisons for each synaptic weight, making it very suitable for implementation in digital or mixed-signal neuromorphic hardware. Our results show that using eRBP, deep representations are rapidly learned, achieving classification accuracies on permutation-invariant datasets comparable to those obtained in artificial neural network simulations on GPUs, while being robust to neural and synaptic state quantizations during learning.
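
    A hedged, rate-based sketch of the idea behind eRBP: the output error is fed back through fixed random weights (feedback alignment) rather than the transpose of the forward weights, so each weight update needs only cheap local operations. Spiking dynamics are omitted; the layer sizes and learning rate are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      W1 = rng.normal(0, 0.1, (100, 784))          # input-to-hidden weights
      W2 = rng.normal(0, 0.1, (10, 100))           # hidden-to-output weights
      B = rng.normal(0, 0.1, (100, 10))            # fixed random feedback weights
      lr = 0.01

      def step(x, target):
          global W1, W2
          h = np.maximum(0, W1 @ x)                # hidden activity (ReLU)
          y = W2 @ h
          e = y - target                           # output error signal
          d = (B @ e) * (h > 0)                    # error via random feedback, gated
          W2 -= lr * np.outer(e, h)
          W1 -= lr * np.outer(d, x)

      x, t = rng.random(784), np.eye(10)[3]        # one toy input and one-hot target
      for _ in range(10):
          step(x, t)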

  11. Classifying Volcanic Activity Using an Empirical Decision Making Algorithm

    NASA Astrophysics Data System (ADS)

    Junek, W. N.; Jones, W. L.; Woods, M. T.

    2012-12-01

    Detection and classification of developing volcanic activity is vital to eruption forecasting. Timely information regarding an impending eruption would aid civil authorities in determining the proper response to a developing crisis. In this presentation, volcanic activity is characterized using an event tree classifier and a suite of empirical statistical models derived through logistic regression. Forecasts are reported in terms of the United States Geological Survey (USGS) volcano alert level system. The algorithm employs multidisciplinary data (e.g., seismic, GPS, InSAR) acquired by various volcano monitoring systems, together with source modeling information, to forecast the likelihood that an eruption with a volcanic explosivity index (VEI) > 1 will occur within a quantitatively constrained area. Logistic models are constructed from a sparse and geographically diverse dataset assembled from a collection of historic volcanic unrest episodes. Bootstrapping techniques are applied to the training data to allow estimation of robust logistic model coefficients. Cross validation produced a series of receiver operating characteristic (ROC) curves with areas ranging between 0.78 and 0.81, which indicates that the algorithm has good predictive capability. The ROC curves also allowed determination of a false-positive rate and an optimum detection point for each stage of the algorithm. Forecasts for historic volcanic unrest episodes in North America and Iceland were computed and are consistent with the actual outcomes of the events.
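
    A minimal sketch of the model-building step, assuming invented monitoring features: logistic regression fit on bootstrap resamples of the training data to obtain robust coefficient estimates, summarized alongside a cross-validated ROC AUC in the spirit of the reported 0.78-0.81 range.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(7)
      X = rng.normal(size=(120, 2))               # e.g., seismicity rate, deformation
      y = (X @ [1.5, 1.0] + rng.normal(size=120) > 0).astype(int)  # eruption or not

      coefs = []
      for _ in range(1000):                       # bootstrap the training data
          idx = rng.integers(0, len(y), len(y))
          coefs.append(LogisticRegression().fit(X[idx], y[idx]).coef_.ravel())
      lo, hi = np.percentile(coefs, [2.5, 97.5], axis=0)
      print("95% CIs per coefficient:", list(zip(lo, hi)))

      auc = cross_val_score(LogisticRegression(), X, y, cv=5, scoring="roc_auc")
      print("ROC AUC:", auc.mean())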

  12. Identification of Terrestrial Reflectance From Remote Sensing

    NASA Technical Reports Server (NTRS)

    Alter-Gartenberg, Rachel; Nolf, Scott R.; Stacy, Kathryn (Technical Monitor)

    2000-01-01

    Correcting for atmospheric effects is an essential part of surface-reflectance recovery from radiance measurements. Model-based atmospheric correction techniques enable an accurate identification and classification of terrestrial reflectances from multi-spectral imagery. Successful and efficient removal of atmospheric effects from remote-sensing data is a key factor in the success of Earth observation missions. This report assesses the performance, robustness and sensitivity of two atmospheric-correction and reflectance-recovery techniques as part of an end-to-end simulation of hyper-spectral acquisition, identification and classification.

  13. Intelligible machine learning with malibu.

    PubMed

    Langlois, Robert E; Lu, Hui

    2008-01-01

    malibu is an open-source machine learning workbench developed in C/C++ for high-performance real-world applications, namely bioinformatics and medical informatics. It leverages third-party machine learning implementations for more robust, bug-free software. The workbench handles several well-studied supervised machine learning problems, including classification, regression, importance-weighted classification, and multiple-instance learning. The malibu interface was designed to create reproducible experiments, ideally run in a remote and/or command-line environment. The software can be found at: http://proteomics.bioengr.uic.edu/malibu/index.html.

  14. Immunophenotype Discovery, Hierarchical Organization, and Template-Based Classification of Flow Cytometry Samples

    DOE PAGES

    Azad, Ariful; Rajwa, Bartek; Pothen, Alex

    2016-08-31

    We describe algorithms for discovering immunophenotypes from large collections of flow cytometry samples and using them to organize the samples into a hierarchy based on phenotypic similarity. The hierarchical organization is helpful for effective and robust cytometry data mining, including the creation of collections of cell populations characteristic of different classes of samples, robust classification, and anomaly detection. We summarize a set of samples belonging to a biological class or category with a statistically derived template for the class. Whereas individual samples are represented in terms of their cell populations (clusters), a template consists of generic meta-populations (a group of homogeneous cell populations obtained from the samples in a class) that describe key phenotypes shared among all those samples. We organize an FC data collection in a hierarchical data structure that supports the identification of immunophenotypes relevant to clinical diagnosis. A robust template-based classification scheme is also developed, but our primary focus is the discovery of phenotypic signatures and inter-sample relationships in an FC data collection. This collective analysis approach is more efficient and robust, since templates describe phenotypic signatures common to cell populations in several samples while ignoring noise and small sample-specific variations. We have applied the template-based scheme to analyze several datasets, including one representing a healthy immune system and one of acute myeloid leukemia (AML) samples. The last task is challenging due to the phenotypic heterogeneity of the several subtypes of AML. However, we identified thirteen immunophenotypes corresponding to subtypes of AML and were able to distinguish acute promyelocytic leukemia (APL) samples with the markers provided. Clinically, this is helpful, since APL has a different treatment regimen from other subtypes of AML. Core algorithms used in our data analysis are available in the flowMatch package at www.bioconductor.org. It has been downloaded nearly 6,000 times since 2014.

  15. Immunophenotype Discovery, Hierarchical Organization, and Template-Based Classification of Flow Cytometry Samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azad, Ariful; Rajwa, Bartek; Pothen, Alex

    We describe algorithms for discovering immunophenotypes from large collections of flow cytometry samples and using them to organize the samples into a hierarchy based on phenotypic similarity. The hierarchical organization is helpful for effective and robust cytometry data mining, including the creation of collections of cell populations characteristic of different classes of samples, robust classification, and anomaly detection. We summarize a set of samples belonging to a biological class or category with a statistically derived template for the class. Whereas individual samples are represented in terms of their cell populations (clusters), a template consists of generic meta-populations (a group of homogeneous cell populations obtained from the samples in a class) that describe key phenotypes shared among all those samples. We organize an FC data collection in a hierarchical data structure that supports the identification of immunophenotypes relevant to clinical diagnosis. A robust template-based classification scheme is also developed, but our primary focus is the discovery of phenotypic signatures and inter-sample relationships in an FC data collection. This collective analysis approach is more efficient and robust, since templates describe phenotypic signatures common to cell populations in several samples while ignoring noise and small sample-specific variations. We have applied the template-based scheme to analyze several datasets, including one representing a healthy immune system and one of acute myeloid leukemia (AML) samples. The last task is challenging due to the phenotypic heterogeneity of the several subtypes of AML. However, we identified thirteen immunophenotypes corresponding to subtypes of AML and were able to distinguish acute promyelocytic leukemia (APL) samples with the markers provided. Clinically, this is helpful, since APL has a different treatment regimen from other subtypes of AML. Core algorithms used in our data analysis are available in the flowMatch package at www.bioconductor.org. It has been downloaded nearly 6,000 times since 2014.
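
    A hedged sketch of the hierarchical-organization idea under strong simplifications: each sample's cell populations are summarized by cluster centroids, and samples are linked into a dendrogram by the distance between summaries. Real templates match meta-populations rather than flattened centroids; the data here are synthetic.

      import numpy as np
      from scipy.cluster.hierarchy import linkage, dendrogram
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      samples = [rng.normal(loc=rng.normal(size=4), size=(500, 4)) for _ in range(8)]

      # Crude per-sample summary: sorted centroids of 3 cell-population clusters.
      summaries = [np.sort(KMeans(3, n_init=5).fit(s).cluster_centers_, axis=0).ravel()
                   for s in samples]
      Z = linkage(np.vstack(summaries), method="average")
      print(dendrogram(Z, no_plot=True)["ivl"])    # sample order in the hierarchy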

  16. Distinguishing body mass and activity level from the lower limb: can entheses diagnose obesity?

    PubMed

    Godde, Kanya; Taylor, Rebecca Wilson

    2013-03-10

    The ability to estimate body size from the skeleton has broad applications, but is especially important to the forensic community when identifying unknown skeletal remains. This research investigates the utility of entheses/muscle skeletal markers of the lower limb for estimating body size and classifying individuals into average, obese, and active categories, using a biomechanical approach to interpret the results. Eighteen muscle attachment sites of the lower limb, known to be involved in the sit-to-stand transition, were scored for robusticity and stress in 105 white males (aged 31-81 years) from the William M. Bass Donated Skeletal Collection. Both logistic regression and log-linear models were applied to the data to (1) test the utility of entheses as an indicator of body weight and activity level, and (2) generate classification percentages that speak to the accuracy of the method. Thirteen robusticity scores differed significantly between the groups, but classification percentages were only slightly greater than chance. However, clear differences could be seen between the average and obese groups and between the average and active groups. Stress scores showed no value in discriminating between groups. These results were interpreted in relation to biomechanical forces at the microscopic and macroscopic levels. Even though robusticity alone is not able to classify individuals well, it may show greater value when incorporated into a model with multiple skeletal indicators. Further research needs to evaluate a larger sample and incorporate several lines of evidence to improve classification rates. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  17. Automatic Processing and Interpretation of Long Records of Endogenous Micro-Seismicity: the Case of the Super-Sauze Soft-Rock Landslide.

    NASA Astrophysics Data System (ADS)

    Provost, F.; Malet, J. P.; Hibert, C.; Doubre, C.

    2017-12-01

    The Super-Sauze landslide is a clay-rich landslide located in the Southern French Alps. The landslide exhibits a complex pattern of deformation: a large number of rockfalls are observed in the 100-m-high main scarp, while deformation of the upper part of the accumulated material is mainly driven by shearing along stable in-situ crests. Several fissures are observed locally. The shallowest layer of the accumulated material tends to behave in a brittle manner but may undergo fluidization and/or rapid acceleration. Previous studies have demonstrated the presence of rich endogenous micro-seismicity associated with the deformation of the landslide. However, the lack of long-term seismic records and suitable processing chains has prevented a full interpretation of the links between external forcings, deformation, and the recorded seismic signals. Since 2013, two permanent seismic arrays have been installed in the upper part of the landslide. We here present the methodology adopted to process this dataset. The processing chain consists of a set of automated methods for robust detection, classification, and location of the recorded seismicity. Thousands of events are detected and automatically classified. The classification method describes each signal through attributes (e.g., waveform and spectral-content properties), which are used as inputs to a Random Forest machine-learning algorithm that assigns one of four classes: endogenous micro-quakes, rockfalls, regional earthquakes, and natural/anthropogenic noise. The endogenous landslide sources (i.e., micro-quakes and rockfalls) are then located, with the location method adapted to the type of event: micro-quakes are located with a 3D velocity model derived from a seismic tomography campaign and an optimization of first-arrival picking with the inter-trace correlation of the P-wave arrivals, while rockfalls are located by optimizing the inter-trace correlation of the whole signal. We analyze the temporal relationships of the endogenous seismic events with rainfall and landslide displacements. Sub-families of landslide micro-quakes are also identified, and an interpretation of their source mechanism is proposed from their signal properties and spatial locations.
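
    A hedged sketch of the classification stage, assuming a synthetic attribute table: waveform and spectral descriptors feed a Random Forest that separates the four classes named above, with feature importances as a by-product. Attribute names are illustrative, not the study's full descriptor set.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(3)
      classes = ["micro-quake", "rockfall", "earthquake", "noise"]
      X = rng.normal(size=(400, 6))  # e.g., duration, kurtosis, dominant frequency, ...
      y = rng.integers(0, 4, size=400)

      rf = RandomForestClassifier(n_estimators=300, random_state=0)
      print(cross_val_score(rf, X, y, cv=5).mean())  # chance-level: labels are random
      rf.fit(X, y)
      print(dict(zip(["dur", "kurt", "fdom", "bw", "rms", "entropy"],
                     rf.feature_importances_.round(3))))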

  18. Risk stratification personalised model for prediction of life-threatening ventricular tachyarrhythmias in patients with chronic heart failure.

    PubMed

    Frolov, Alexander Vladimirovich; Vaikhanskaya, Tatjana Gennadjevna; Melnikova, Olga Petrovna; Vorobiev, Anatoly Pavlovich; Guel, Ludmila Michajlovna

    2017-01-01

    The identification of prognostic factors for life-threatening ventricular tachyarrhythmias (VTA) and sudden cardiac death (SCD) remains a priority in cardiology. Developing a method of personalised prognosis based on multifactorial analysis of the risk factors associated with life-threatening heart rhythm disturbances is considered a key research and clinical task. The aim was to design a prognostic mathematical model to define personalised risk of life-threatening VTA in patients with chronic heart failure (CHF). The study included 240 patients with CHF (mean age 50.5 ± 12.1 years; left ventricular ejection fraction 32.8 ± 10.9%; follow-up period 36.8 ± 5.7 months). The participants received basic therapy for heart failure. The electrocardiogram (ECG) markers of myocardial electrical instability were assessed, including microvolt T-wave alternans, heart rate turbulence, heart rate deceleration, and QT dispersion. Additionally, echocardiography and Holter monitoring (HM) were performed. Cardiovascular events were considered as primary endpoints, including SCD, paroxysmal ventricular tachycardia/ventricular fibrillation (VT/VF) based on HM-ECG data, data obtained from implantable device interrogation (CRT-D, ICD), and appropriate shocks. During the follow-up period, 66 (27.5%) subjects with CHF showed adverse arrhythmic events, including nine SCD events and 57 VTAs. Data from a stepwise discriminant analysis of cumulative ECG markers of myocardial electrical instability were used to build a mathematical model for preliminary VTA risk stratification. Uni- and multivariate Cox regression analyses were performed to define an individualised risk stratification model for SCD/VTA. A binary logistic regression model demonstrated high prognostic significance of the discriminant function, with a classification sensitivity of 80.8% and specificity of 99.1% (F = 31.2; χ² = 143.2; p < 0.0001). The method of personalised risk stratification using Cox regression allows correct classification of more than 93.9% of CHF cases. A robust body of evidence concerning the prognostic significance of logistic regression for defining VTA risk supports the inclusion of this method in the algorithm of subsequent control and selection of the optimal treatment modality for patients with CHF.

  19. Intelligent Interoperable Agent Toolkit (I2AT)

    DTIC Science & Technology

    2005-02-01

    Keywords: Agents, Agent Infrastructure, Intelligent Agents. ...those that occur while the submarine is submerged. Using CoABS Grid/Jini service discovery events backed up with a small amount of internal bookkeeping...

  20. Adverse events following cervical manipulative therapy: consensus on classification among Dutch medical specialists, manual therapists, and patients.

    PubMed

    Kranenburg, Hendrikus A; Lakke, Sandra E; Schmitt, Maarten A; Van der Schans, Cees P

    2017-12-01

    To obtain consensus-based agreement on a classification system for adverse events (AE) following cervical spinal manipulation. The classification system should comprise clear definitions, include patients' and clinicians' perspectives, and have an acceptable number of categories. Design: a three-round Delphi study. Participants: thirty Dutch participants (medical specialists, manual therapists, and patients) completed an online survey. Procedure: participants inventoried AE and were asked about their preferences for either a three- or a four-category classification system. The identified AE were classified by two analysts following the International Classification of Functioning, Disability and Health (ICF) and the International Classification of Diseases and Related Health Problems (ICD-10). Participants were asked to classify the severity of all AE in relation to the time duration. Consensus was reached on a three-category classification system. There was strong consensus for 16 AE across all severities (no, minor, and major AE) and all three time durations (hours, days, weeks). The 16 AE included anxiety, flushing, skin rash, fainting, dizziness, coma, altered sensation, muscle tenderness, pain, increased pain during movement, radiating pain, dislocation, fracture, transient ischemic attack, stroke, and death. Mild to strong consensus was reached for 13 AE. A consensus-based classification system of AE is established that includes patients' and clinicians' perspectives and has three categories. The classification comprises a precise description of potential AE in accordance with internationally accepted classifications. After international validation, clinicians and researchers may use this AE classification system to report AE in clinical practice and research.

  1. Video Traffic Analysis for Abnormal Event Detection

    DOT National Transportation Integrated Search

    2010-01-01

    We propose the use of video imaging sensors for the detection and classification of abnormal events to be used primarily for mitigation of traffic congestion. Successful detection of such events will allow for new road guidelines; for rapid deploymen...

  2. Video traffic analysis for abnormal event detection.

    DOT National Transportation Integrated Search

    2010-01-01

    We propose the use of video imaging sensors for the detection and classification of abnormal events to be used primarily for mitigation of traffic congestion. Successful detection of such events will allow for new road guidelines; for rapid deplo...

  3. Spatial Mutual Information Based Hyperspectral Band Selection for Classification

    PubMed Central

    2015-01-01

    The amount of information involved in hyperspectral imaging is large. Hyperspectral band selection is a popular method for reducing dimensionality. Several information based measures such as mutual information have been proposed to reduce information redundancy among spectral bands. Unfortunately, mutual information does not take into account the spatial dependency between adjacent pixels in images thus reducing its robustness as a similarity measure. In this paper, we propose a new band selection method based on spatial mutual information. As validation criteria, a supervised classification method using support vector machine (SVM) is used. Experimental results of the classification of hyperspectral datasets show that the proposed method can achieve more accurate results. PMID:25918742
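
    A hedged sketch of information-based band selection in the mRMR spirit: greedily keep bands with high mutual information to the class labels and low redundancy with already-selected bands. Plain (non-spatial) mutual information is used here, i.e., the baseline the paper improves on; the hyperspectral cube is synthetic.

      import numpy as np
      from sklearn.feature_selection import mutual_info_classif
      from sklearn.metrics import mutual_info_score

      rng = np.random.default_rng(0)
      X = rng.random((2000, 50))                   # pixels x spectral bands
      y = rng.integers(0, 4, size=2000)            # class labels per pixel

      relevance = mutual_info_classif(X, y, random_state=0)
      selected = [int(np.argmax(relevance))]
      while len(selected) < 10:
          def score(b):
              # Redundancy: mean MI between band b and the chosen bands (quantile-binned).
              red = np.mean([mutual_info_score(
                  np.digitize(X[:, b], np.quantile(X[:, b], [.25, .5, .75])),
                  np.digitize(X[:, s], np.quantile(X[:, s], [.25, .5, .75])))
                  for s in selected])
              return relevance[b] - red
          remaining = [b for b in range(X.shape[1]) if b not in selected]
          selected.append(max(remaining, key=score))
      print("selected bands:", sorted(selected))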

  4. Logistic regression applied to natural hazards: rare event logistic regression with replications

    NASA Astrophysics Data System (ADS)

    Guns, M.; Vanacker, V.

    2012-06-01

    Statistical analysis of natural hazards needs particular attention, as most of these phenomena are rare events. This study shows that ordinary rare event logistic regression, as it is now commonly used in geomorphologic studies, does not always lead to a robust detection of controlling factors, as the results can be strongly sample-dependent. In this paper, we introduce concepts from Monte Carlo simulation into rare event logistic regression. This technique, termed rare event logistic regression with replications, combines the strengths of probabilistic and statistical methods and overcomes some of the limitations of previous developments through robust variable selection. The technique was developed here for the analysis of landslide controlling factors, but the concept is widely applicable to statistical analyses of natural hazards.
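
    A minimal sketch of the replication idea, assuming synthetic predictors and a roughly 2% event rate: the rare-event logistic model is refit on many resamples that pair all events with fresh draws of non-events, and a predictor is retained only if it is significant across most replications (the 90% threshold and 5:1 sampling ratio are assumptions).

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      X = rng.normal(size=(5000, 3))                   # terrain predictors
      y = (rng.random(5000) < 0.02).astype(int)        # rare events, e.g., landslides

      events, nonevents = np.where(y == 1)[0], np.where(y == 0)[0]
      keep_counts = np.zeros(X.shape[1])
      n_rep = 200
      for _ in range(n_rep):
          sub = np.concatenate([events,
                                rng.choice(nonevents, 5 * len(events), replace=False)])
          fit = sm.Logit(y[sub], sm.add_constant(X[sub])).fit(disp=0)
          keep_counts += (fit.pvalues[1:] < 0.05)      # count significant predictors

      robust = keep_counts / n_rep > 0.9               # kept in >90% of replications
      print("robust predictors:", np.where(robust)[0])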

  5. 5 CFR 1312.8 - Standard identification and markings.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... CLASSIFICATION, DOWNGRADING, DECLASSIFICATION AND SAFEGUARDING OF NATIONAL SECURITY INFORMATION Classification and Declassification of National Security Information § 1312.8 Standard identification and markings... or event for declassification that corresponds to the lapse of the information's national security...

  6. 5 CFR 1312.8 - Standard identification and markings.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... CLASSIFICATION, DOWNGRADING, DECLASSIFICATION AND SAFEGUARDING OF NATIONAL SECURITY INFORMATION Classification and Declassification of National Security Information § 1312.8 Standard identification and markings... or event for declassification that corresponds to the lapse of the information's national security...

  7. 49 CFR 806.3 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... designated. One of the following classifications will be shown: (1) Top secret means information, the... expected to cause serious damage to national security. (3) Confidential means information, the unauthorized... an event which would eliminate the need for continued classification. ...

  8. 49 CFR 806.3 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... designated. One of the following classifications will be shown: (1) Top secret means information, the... expected to cause serious damage to national security. (3) Confidential means information, the unauthorized... an event which would eliminate the need for continued classification. ...

  9. 49 CFR 806.3 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... designated. One of the following classifications will be shown: (1) Top secret means information, the... expected to cause serious damage to national security. (3) Confidential means information, the unauthorized... an event which would eliminate the need for continued classification. ...

  10. 49 CFR 806.3 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... designated. One of the following classifications will be shown: (1) Top secret means information, the... expected to cause serious damage to national security. (3) Confidential means information, the unauthorized... an event which would eliminate the need for continued classification. ...

  11. Recognition and characterization of unstructured environmental sounds

    NASA Astrophysics Data System (ADS)

    Chu, Selina

    2011-12-01

    Environmental sounds are what we hear every day or, more generally, the ambient or background audio that surrounds us. Humans use both vision and hearing to respond to their surroundings, a capability still quite limited in machine processing. The first step toward achieving multimodal input applications is the ability to process unstructured audio and recognize audio scenes (or environments). Such an ability would have applications in content analysis and mining of multimedia data, and in improving robustness in context-aware applications through multi-modality, such as assistive robotics, surveillance, or mobile device-based services. The goal of this thesis is the characterization of unstructured environmental sounds for understanding and predicting the context surrounding an agent or device. Most research on audio recognition has focused primarily on speech and music; less attention has been paid to the challenges and opportunities of using audio to characterize unstructured environments. My research investigates the issues in characterizing unstructured environmental audio and develops novel algorithms for modeling the variations of the environment.

    The first step in building a recognition system for unstructured auditory environments was to investigate techniques and audio features suited to such data. An initial study explored suitable features and the feasibility of designing an automatic environment recognition system using audio information. It showed that traditional recognition and feature extraction techniques are not suitable for environmental sound: unlike speech and music, which contain formantic and harmonic structure, environmental sounds lack any such structure, dispelling the notion that traditional speech and music recognition techniques can simply be reused for realistic environmental sound. Natural unstructured environments contain a large variety of sounds that are in fact noise-like and are not effectively modeled by Mel-frequency cepstral coefficients (MFCCs) or other commonly used audio features (e.g., energy, zero-crossing rate). Given this lack of appropriate features, I proposed a specialized feature extraction algorithm for environmental sounds that uses the matching pursuit (MP) algorithm to learn the inherent structure of each type of sound; we call the result MP-features. MP-features capture and represent sounds from different sources and ranges where frequency-domain features (e.g., MFCCs) fail, and they are advantageous when combined with MFCCs to improve overall performance.

    The third component is modeling and detecting the background audio. One goal of this research is to characterize an environment; since many events blend into the background, a general model of the background for any particular environment is needed. Once the background is known, foreground events can be identified even if they have not been seen before. The next step was therefore to learn an audio background model for each environment type, despite the occurrence of different foreground events. I present a framework for robust audio background modeling, which includes learning models for prediction, data knowledge, and persistent characteristics of the environment. This approach can model the background and detect foreground events, and it can verify whether the predicted background is indeed background or a foreground event that persists for a longer period of time. I also investigated the use of a semi-supervised learning technique to exploit and label new unlabeled audio data.

    The final components of the thesis involve learning sound structures for generalization and applying the proposed ideas to context-aware applications. Environmental sound is inherently noisy and contains relatively large overlap between different environments; it shows large variance even within a single environment type, and frequently there are no divisible or clear boundaries between some types. Traditional classification methods are generally not robust enough to handle classes with such overlap, so this audio requires representation by complex models. A deep learning architecture provides a generative, model-based method for classification. Specifically, I considered Deep Belief Networks (DBNs) to model environmental audio and investigated their applicability to noisy data for improved robustness and generalization. A framework was proposed using composite DBNs to discover high-level representations and to learn a hierarchical structure for different acoustic environments in a data-driven fashion. Experimental results on real data sets demonstrate its effectiveness over traditional methods, with over 90% accuracy on recognition for a high number of environmental sound types.
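
    A minimal sketch of the MP-feature idea: greedy matching pursuit decomposes a signal frame over a dictionary, and the first few selected atoms (indices and weights) serve as features. A random unit-norm dictionary stands in for the Gabor-type dictionaries typically used; all sizes are illustrative assumptions.

      import numpy as np

      def matching_pursuit(signal, dictionary, n_atoms=5):
          """Greedily select atoms; returns (atom index, coefficient) pairs."""
          residual, picks = signal.astype(float), []
          for _ in range(n_atoms):
              corr = dictionary @ residual             # correlation with each atom
              k = int(np.argmax(np.abs(corr)))
              picks.append((k, corr[k]))
              residual = residual - corr[k] * dictionary[k]
          return picks

      rng = np.random.default_rng(0)
      atoms = rng.normal(size=(256, 512))
      atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)  # unit-norm dictionary
      frame = rng.normal(size=512)                           # one audio frame
      print(matching_pursuit(frame, atoms))                  # MP-feature candidates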

  12. GA(M)E-QSAR: a novel, fully automatic genetic-algorithm-(meta)-ensembles approach for binary classification in ligand-based drug design.

    PubMed

    Pérez-Castillo, Yunierkis; Lazar, Cosmin; Taminau, Jonatan; Froeyen, Mathy; Cabrera-Pérez, Miguel Ángel; Nowé, Ann

    2012-09-24

    Computer-aided drug design has become an important component of the drug discovery process. Despite advances in this field, no single modeling approach can be successfully applied to the whole range of problems faced during QSAR modeling. Feature selection and ensemble modeling are active areas of research in ligand-based drug design. Here we introduce the GA(M)E-QSAR algorithm, which combines the search and optimization capabilities of Genetic Algorithms with the simplicity of the Adaboost ensemble-based classification algorithm to solve binary classification problems. We also explore the usefulness of Meta-Ensembles trained with Adaboost and Voting schemes to further improve the accuracy, generalization, and robustness of the optimal Adaboost Single Ensemble derived from the Genetic Algorithm optimization. We evaluated the performance of our algorithm using five data sets from the literature and found that it yields classification results similar to or better than those previously reported for these data sets, with a higher enrichment of active compounds relative to the whole actives subset when only the most active chemicals are considered. More importantly, we compared our methodology with state-of-the-art feature selection and classification approaches and found that it can provide highly accurate, robust, and generalizable models. In the case of the Adaboost Ensembles derived from the Genetic Algorithm search, the final models are quite simple, since they consist of a weighted sum of the outputs of single-feature classifiers. Furthermore, the Adaboost scores can be used as a ranking criterion to prioritize chemicals for synthesis and biological evaluation after virtual screening experiments.
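
    A hedged sketch pairing a genetic algorithm over binary feature masks with AdaBoost as the fitness classifier, echoing the combination described above; the population size, mutation rate, and synthetic data are assumptions, not the published configuration.

      import numpy as np
      from sklearn.ensemble import AdaBoostClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 30))
      y = (X[:, 0] + X[:, 3] - X[:, 7] > 0).astype(int)  # only 3 informative features

      def fitness(mask):
          if mask.sum() == 0:
              return 0.0
          clf = AdaBoostClassifier(n_estimators=50, random_state=0)
          return cross_val_score(clf, X[:, mask], y, cv=3).mean()

      pop = rng.random((20, X.shape[1])) < 0.3           # initial random feature masks
      for gen in range(15):
          scores = np.array([fitness(m) for m in pop])
          parents = pop[np.argsort(scores)[-10:]]        # truncation selection
          cross = rng.random(parents.shape) < 0.5
          children = np.where(cross, parents, parents[::-1])  # uniform crossover
          children ^= rng.random(children.shape) < 0.02       # bit-flip mutation
          pop = np.vstack([parents, children])
      best = pop[np.argmax([fitness(m) for m in pop])]
      print("selected features:", np.where(best)[0])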

  13. Comparison of seven protocols to identify fecal contamination sources using Escherichia coli

    USGS Publications Warehouse

    Stoeckel, D.M.; Mathes, M.V.; Hyer, K.E.; Hagedorn, C.; Kator, H.; Lukasik, J.; O'Brien, T. L.; Fenger, T.W.; Samadpour, M.; Strickler, K.M.; Wiggins, B.A.

    2004-01-01

    Microbial source tracking (MST) uses various approaches to classify fecal-indicator microorganisms to source hosts. Reproducibility, accuracy, and robustness of seven phenotypic and genotypic MST protocols were evaluated by use of Escherichia coli from an eight-host library of known-source isolates and a separate, blinded challenge library. In reproducibility tests, measuring each protocol's ability to reclassify blinded replicates, only one (pulsed-field gel electrophoresis; PFGE) correctly classified all test replicates to host species; three protocols classified 48-62% correctly, and the remaining three classified fewer than 25% correctly. In accuracy tests, measuring each protocol's ability to correctly classify new isolates, ribotyping with EcoRI and PvuII approached 100% correct classification but only 6% of isolates were classified; four of the other six protocols (antibiotic resistance analysis, PFGE, and two repetitive-element PCR protocols) achieved better than random accuracy rates when 30-100% of challenge isolates were classified. In robustness tests, measuring each protocol's ability to recognize isolates from nonlibrary hosts, three protocols correctly classified 33-100% of isolates as "unknown origin," whereas four protocols classified all isolates to a source category. A relevance test, summarizing interpretations for a hypothetical water sample containing 30 challenge isolates, indicated that false-positive classifications would hinder interpretations for most protocols. Study results indicate that more representation in known-source libraries and better classification accuracy would be needed before field application. Thorough reliability assessment of classification results is crucial before and during application of MST protocols.

  14. Topological data analyses and machine learning for detection, classification and characterization of atmospheric rivers

    NASA Astrophysics Data System (ADS)

    Muszynski, G.; Kashinath, K.; Wehner, M. F.; Prabhat, M.; Kurlin, V.

    2017-12-01

    We investigate novel approaches to detecting, classifying, and characterizing extreme weather events, such as atmospheric rivers (ARs), in large high-dimensional climate datasets. ARs are narrow filaments of concentrated water vapour in the atmosphere that bring much of the precipitation in many mid-latitude regions. The precipitation associated with ARs is also responsible for major flooding events in many coastal regions of the world, including the west coast of the United States and western Europe. In this study we combine ideas from Topological Data Analysis (TDA) with Machine Learning (ML) for detecting, classifying, and characterizing such events. TDA is a young field at the interface of topology and computer science that studies "shape" (hidden topological structure) in raw data; it has been applied successfully in many areas of applied science, including complex networks, signal processing, and image recognition. Using TDA, we give ARs a shape characteristic that serves as a new feature descriptor for the AR classification task. In particular, we track the change in topology of precipitable water (integrated water vapour) fields using the Union-Find algorithm. We use the generated feature descriptors with ML classifiers to establish the reliability and classification performance of our approach, and we utilize the parallel toolkit for extreme climate event analysis (TECA: Petascale Pattern Recognition for Climate Science, Prabhat et al., Computer Analysis of Images and Patterns, 2015) for comparison, with events identified by TECA taken as ground truth. Preliminary results indicate that our approach brings new insight into the study of ARs and provides quantitative information about the relevance of topological feature descriptors in analyses of large climate datasets; further, our method outperforms existing methods on detection and classification of ARs. We illustrate the method on climate model output and NCEP reanalysis datasets. This work shows that TDA combined with ML may provide a uniquely powerful approach for the detection, classification, and characterization of extreme weather phenomena.
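
    A minimal sketch of the Union-Find step, assuming a synthetic field and an illustrative 20 kg/m^2 threshold: super-threshold grid cells are merged into 4-connected components whose geometry could feed topological feature descriptors.

      import numpy as np

      def find(parent, i):
          while parent[i] != i:
              parent[i] = parent[parent[i]]        # path halving
              i = parent[i]
          return i

      rng = np.random.default_rng(0)
      field = rng.random((60, 120)) * 40           # fake precipitable water (kg/m^2)
      mask = field > 20.0
      H, W = mask.shape
      parent = {i: i for i in range(H * W) if mask.flat[i]}

      for r in range(H):
          for c in range(W):
              if not mask[r, c]:
                  continue
              for dr, dc in ((1, 0), (0, 1)):      # union 4-connected neighbours
                  rr, cc = r + dr, c + dc
                  if rr < H and cc < W and mask[rr, cc]:
                      ra, rb = find(parent, r * W + c), find(parent, rr * W + cc)
                      parent[ra] = rb

      roots = {find(parent, i) for i in parent}
      print("connected components above threshold:", len(roots))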

  15. An objective and parsimonious approach for classifying natural flow regimes at a continental scale

    USGS Publications Warehouse

    Archfield, Stacey A.; Kennen, Jonathan G.; Carlisle, Daren M.; Wolock, David M.

    2014-01-01

    Hydro-ecological stream classification (the process of grouping streams by similar hydrologic responses and, by extension, similar aquatic habitat) has been widely accepted and is considered by some to be one of the first steps towards developing ecological flow targets. A new classification of 1543 streamgauges in the contiguous USA is presented, using a novel and parsimonious approach to understand similarity in ecological streamflow response. This classification approach uses seven fundamental daily streamflow statistics (FDSS) rather than winnowing down an uncorrelated subset from 200 or more ecologically relevant streamflow statistics (ERSS) commonly used in hydro-ecological classification studies. The results of this investigation demonstrate that the distributions of 33 tested ERSS are consistently different among the classification groups derived from the seven FDSS. It is further shown that classification based solely on the 33 ERSS generally does a poorer job of grouping similar streamgauges than the classification based on the seven FDSS. The new approach has the additional advantages of overcoming some of the subjectivity associated with the selection of classification variables and of providing a set of robust continental-scale classes of US streamgauges. Published 2013. This article is a U.S. Government work and is in the public domain in the USA.

  16. A Classification of Mediterranean Cyclones Based on Global Analyses

    NASA Technical Reports Server (NTRS)

    Reale, Oreste; Atlas, Robert

    2003-01-01

    The Mediterranean Sea region is dominated by baroclinic and orographic cyclogenesis. However, previous work has demonstrated the existence of rare but intense subsynoptic-scale cyclones displaying remarkable similarities to tropical cyclones and polar lows, including, but not limited to, an eye-like feature in the satellite imagery. The terms polar low and tropical cyclone have often been used interchangeably when referring to small-scale, convective Mediterranean vortices, and no definitive statement has been made so far on their nature, be it sub-tropical or polar. Moreover, most classifications of Mediterranean cyclones have neglected the small-scale convective vortices, focusing only on the larger-scale and far more common baroclinic cyclones. A classification of all Mediterranean cyclones based on operational global analyses is proposed. The classification is based on normalized horizontal shear, vertical shear, scale, low- versus mid-level vorticity, low-level temperature gradients, and sea surface temperatures. In the classification system there is a continuum of possible events, according to the increasing role of barotropic instability and decreasing role of baroclinic instability. One of the main results is that the Mediterranean tropical cyclone-like vortices and the Mediterranean polar lows appear to be different types of events, in spite of the apparent similarity of their satellite imagery. A consistent terminology is adopted, stating that tropical cyclone-like vortices are the least baroclinic of all, followed by polar lows, cold small-scale cyclones and finally baroclinic lee cyclones. The classification is based on all the cyclones which occurred in a four-year period (1996-1999). Four cyclones, selected from among those that developed during this time frame, are analyzed. In particular, the classification makes it possible to discriminate between two cyclones (which occurred in October 1996 and in March 1999) that both display a very well-defined eye-like feature in the satellite imagery. According to our classification system, the two events are dynamically different and can be categorized as a tropical cyclone-like vortex and a well-developed polar low, respectively.

  17. Proposal of a New Adverse Event Classification by the Society of Interventional Radiology Standards of Practice Committee.

    PubMed

    Khalilzadeh, Omid; Baerlocher, Mark O; Shyn, Paul B; Connolly, Bairbre L; Devane, A Michael; Morris, Christopher S; Cohen, Alan M; Midia, Mehran; Thornton, Raymond H; Gross, Kathleen; Caplin, Drew M; Aeron, Gunjan; Misra, Sanjay; Patel, Nilesh H; Walker, T Gregory; Martinez-Salazar, Gloria; Silberzweig, James E; Nikolic, Boris

    2017-10-01

    To develop a new adverse event (AE) classification for interventional radiology (IR) procedures and evaluate its clinical, research, and educational value compared with the existing Society of Interventional Radiology (SIR) classification via an SIR member survey. A new AE classification was developed by members of the Standards of Practice Committee of the SIR. Subsequently, a survey was created by a group of 18 members from the SIR Standards of Practice Committee and Service Lines. Twelve clinical AE case scenarios were generated that encompassed a broad spectrum of IR procedures and potential AEs. Survey questions were designed to evaluate the following domains: educational and research values, accountability for intraprocedural challenges, consistency of AE reporting, unambiguity, and potential for incorporation into the existing quality-assurance framework. For each AE scenario, the survey participants were instructed to answer questions about the proposed and existing SIR classifications. SIR members were invited via online survey links, and 68 of the 140 members surveyed participated. Answers on the new and existing classifications were evaluated and compared statistically. Overall comparison between the two surveys was performed by generalized linear modeling. The proposed AE classification received superior evaluations in terms of consistency of reporting (P < .05) and potential for incorporation into the existing quality-assurance framework (P < .05). Respondents gave a higher overall rating to the educational and research value of the new classification compared with the existing one (P < .05). This study proposed an AE classification system that outperformed the existing SIR classification in the studied domains. Copyright © 2017 SIR. Published by Elsevier Inc. All rights reserved.

  18. Evaluating multimedia chemical persistence: Classification and regression tree analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, D.H.; McKone, T.E.; Kastenberg, W.E.

    2000-04-01

    For the thousands of chemicals continuously released into the environment, it is desirable to make prospective assessments of those likely to be persistent. Widely distributed persistent chemicals are impossible to remove from the environment and remediation by natural processes may take decades, which is problematic if adverse health or ecological effects are discovered after prolonged release into the environment. A tiered approach using a classification scheme and a multimedia model for determining persistence is presented. Using specific criteria for persistence, a classification tree is developed to classify a chemical as persistent or nonpersistent based on the chemical properties. In this approach, the classification is derived from the results of a standardized unit world multimedia model. Thus, the classifications are more robust for multimedia pollutants than classifications using a single medium half-life. The method can be readily implemented and provides insight without requiring extensive and often unavailable data. This method can be used to classify chemicals when only a few properties are known and can be used to direct further data collection. Case studies are presented to demonstrate the advantages of the approach.

  19. Using genetically modified tomato crop plants with purple leaves for absolute weed/crop classification.

    PubMed

    Lati, Ran N; Filin, Sagi; Aly, Radi; Lande, Tal; Levin, Ilan; Eizenberg, Hanan

    2014-07-01

    Weed/crop classification is considered the main problem in developing precise weed-management methodologies, because both crops and weeds share similar hues. Great effort has been invested in the development of classification models, most based on expensive sensors and complicated algorithms. However, satisfactory results are not consistently obtained due to imaging conditions in the field. We report on an innovative approach that combines advances in genetic engineering and robust image-processing methods to detect weeds and distinguish them from crop plants by manipulating the crop's leaf color. We demonstrate this on genetically modified tomato (germplasm AN-113) which expresses a purple leaf color. An autonomous weed/crop classification is performed using an invariant-hue transformation that is applied to images acquired by a standard consumer camera (visible wavelength) and handles variations in illumination intensities. The integration of these methodologies is simple and effective, and classification results were accurate and stable under a wide range of imaging conditions. Using this approach, we simplify the most complicated stage in image-based weed/crop classification models. © 2013 Society of Chemical Industry.

  20. Quantum Cascade Laser-Based Infrared Microscopy for Label-Free and Automated Cancer Classification in Tissue Sections.

    PubMed

    Kuepper, Claus; Kallenbach-Thieltges, Angela; Juette, Hendrik; Tannapfel, Andrea; Großerueschkamp, Frederik; Gerwert, Klaus

    2018-05-16

    A feasibility study using a quantum cascade laser-based infrared microscope for the rapid and label-free classification of colorectal cancer tissues is presented. Infrared imaging is a reliable, robust, automated, and operator-independent tissue classification method that has been used for differential classification of tissue thin sections, identifying tumorous regions. However, the long acquisition times of the FT-IR-based microscopes used so far have hampered the clinical translation of this technique. Here, the quantum cascade laser-based microscope provides infrared images for precise tissue classification within a few minutes. We analyzed 110 patients with UICC stage II and III colorectal cancer, showing 96% sensitivity and 100% specificity of this label-free method as compared with histopathology, the gold standard in routine clinical diagnostics. The main hurdle for the clinical translation of IR imaging is now overcome by the short acquisition time for high-quality diagnostic images, which is in the same time range as frozen sections prepared by pathologists.

  1. Adaptive phase k-means algorithm for waveform classification

    NASA Astrophysics Data System (ADS)

    Song, Chengyun; Liu, Zhining; Wang, Yaojun; Xu, Feng; Li, Xingming; Hu, Guangmin

    2018-01-01

    Waveform classification is a powerful technique for seismic facies analysis that describes the heterogeneity and compartments within a reservoir. Horizon interpretation is a critical step in waveform classification. However, the horizon often produces inconsistent waveform phase, and thus results in an unsatisfactory classification. To alleviate this problem, an adaptive phase waveform classification method called the adaptive phase k-means is introduced in this paper. Our method improves the traditional k-means algorithm by using an adaptive phase distance as the waveform similarity measure. The proposed distance allows the phase to vary as it moves from sample to sample along the traces. Model traces are also updated with the best phase interference in the iterative process. Therefore, our method is robust to phase variations caused by the interpretation horizon. We tested the effectiveness of our algorithm by applying it to synthetic and real data. The satisfactory results reveal that the proposed method tolerates a certain amount of waveform phase variation and is a good tool for seismic facies analysis.
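
    As a rough illustration of the phase-tolerant waveform distance described in the record above, the hedged sketch below rotates a model trace through candidate constant phases via the analytic signal and keeps the smallest L2 distance. The paper's sample-varying phase is simplified here to a single global phase per comparison, and all names are hypothetical.

        import numpy as np
        from scipy.signal import hilbert

        def phase_rotate(trace, phi):
            # Shift the waveform phase by phi radians using the analytic signal
            return np.real(hilbert(trace) * np.exp(1j * phi))

        def phase_adaptive_distance(trace, model, n_phases=36):
            # Minimum L2 distance over candidate phase rotations of the model trace
            phis = np.linspace(0.0, 2.0 * np.pi, n_phases, endpoint=False)
            return min(np.linalg.norm(trace - phase_rotate(model, p)) for p in phis)

    Inside a k-means loop, this distance would replace the Euclidean assignment step, so that each trace joins the cluster whose model trace it matches best up to phase.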

  2. Pattern Recognition of Momentary Mental Workload Based on Multi-Channel Electrophysiological Data and Ensemble Convolutional Neural Networks.

    PubMed

    Zhang, Jianhua; Li, Sunan; Wang, Rubin

    2017-01-01

    In this paper, we deal with the Mental Workload (MWL) classification problem based on measured physiological data. First we discussed the optimal depth (i.e., the number of hidden layers) and parameter optimization algorithms for the Convolutional Neural Networks (CNN). The base CNNs designed were tested according to five classification performance indices, namely Accuracy, Precision, F-measure, G-mean, and required training time. Then we developed an Ensemble Convolutional Neural Network (ECNN) to enhance the accuracy and robustness of the individual CNN models. For the ECNN design, three model aggregation approaches (weighted averaging, majority voting and stacking) were examined, and a resampling strategy was used to enhance the diversity of the individual CNN models. The results of the MWL classification performance comparison indicated that the proposed ECNN framework can effectively improve MWL classification performance and features entirely automatic feature extraction and MWL classification, when compared with traditional machine learning methods.
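
    Two of the aggregation schemes named above, weighted averaging and majority voting, reduce to a few lines over stacked per-model class probabilities. The sketch below is a generic illustration under assumed array shapes, not the authors' implementation.

        import numpy as np

        def weighted_average(probs, weights):
            # probs: (n_models, n_samples, n_classes) class probabilities
            w = np.asarray(weights, dtype=float)
            w /= w.sum()
            return np.tensordot(w, probs, axes=1).argmax(axis=-1)

        def majority_vote(probs):
            votes = probs.argmax(axis=-1)            # (n_models, n_samples)
            n_classes = probs.shape[-1]
            counts = np.apply_along_axis(np.bincount, 0, votes, minlength=n_classes)
            return counts.argmax(axis=0)             # per-sample winning class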

  3. Training echo state networks for rotation-invariant bone marrow cell classification.

    PubMed

    Kainz, Philipp; Burgsteiner, Harald; Asslaber, Martin; Ahammer, Helmut

    2017-01-01

    The main principle of diagnostic pathology is the reliable interpretation of individual cells in the context of the tissue architecture. In particular, confident examination of bone marrow specimens depends on a valid classification of myeloid cells. In this work, we propose a novel rotation-invariant learning scheme for multi-class echo state networks (ESNs), which achieves very high performance in automated bone marrow cell classification. Based on representing static images as temporal sequences of rotations, we show how ESNs robustly recognize cells of arbitrary rotations by taking advantage of their short-term memory capacity. The performance of our approach is compared to that of a classification random forest that learns rotation-invariance in a conventional way by exhaustively training on multiple rotations of individual samples. The methods were evaluated on a human bone marrow image database consisting of granulopoietic and erythropoietic cells in different maturation stages. Our ESN approach to cell classification does not rely on segmentation of cells or manual feature extraction and can therefore be applied directly to image data.

  4. Performance of fusion algorithms for computer-aided detection and classification of mines in very shallow water obtained from testing in navy Fleet Battle Exercise-Hotel 2000

    NASA Astrophysics Data System (ADS)

    Ciany, Charles M.; Zurawski, William; Kerfoot, Ian

    2001-10-01

    The performance of Computer Aided Detection/Computer Aided Classification (CAD/CAC) fusion algorithms on side-scan sonar images was evaluated using data taken at the Navy's Fleet Battle Exercise-Hotel held in Panama City, Florida, in August 2000. A 2-of-3 binary fusion algorithm is shown to provide robust performance. The algorithm accepts the classification decisions and associated contact locations from three different CAD/CAC algorithms, clusters the contacts based on Euclidean distance, and then declares a valid target when a clustered contact is declared by at least 2 of the 3 individual algorithms. This simple binary fusion provided a 96 percent probability of correct classification at a false alarm rate of 0.14 false alarms per image per side. The performance represented a 3.8:1 reduction in false alarms over the best performing single CAD/CAC algorithm, with no loss in probability of correct classification.
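
    The 2-of-3 fusion rule above is simple enough to sketch directly: pair up contacts from different algorithms that fall within a clustering radius and declare a target for each such agreement. The radius value and data layout below are illustrative assumptions.

        import numpy as np
        from itertools import combinations

        def fuse_2_of_3(contact_lists, radius=25.0):
            # contact_lists: three lists of (x, y) contacts, one per CAD/CAC algorithm
            targets = []
            for i, j in combinations(range(3), 2):
                for p in contact_lists[i]:
                    for q in contact_lists[j]:
                        if np.hypot(p[0] - q[0], p[1] - q[1]) <= radius:
                            targets.append(((p[0] + q[0]) / 2, (p[1] + q[1]) / 2))
            return targets  # near-duplicate entries indicate 3-way agreement

        detections = fuse_2_of_3([[(10, 12)], [(12, 13), (80, 40)], [(11, 14)]])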

  5. Towards a robust framework for catchment classification

    NASA Astrophysics Data System (ADS)

    Deshmukh, A.; Samal, A.; Singh, R.

    2017-12-01

    Classification of catchments based on various measures of similarity has emerged as an important technique for understanding regional-scale hydrologic behavior. Classification of catchment characteristics and/or streamflow response has been used to reveal which characteristics are more likely to explain the observed variability of hydrologic response. However, numerous algorithms for supervised or unsupervised classification are available, making it hard to identify the algorithm most suitable for the dataset at hand. Consequently, existing catchment classification studies vary significantly in the classification algorithms employed, with no previous attempt at understanding the degree of uncertainty in classification due to this algorithmic choice. This hinders the generalizability of interpretations related to hydrologic behavior. Our goal is to develop a protocol that can be followed while classifying hydrologic datasets. We focus on a framework for unsupervised classification and provide a step-by-step classification procedure. The steps include testing the clusterability of the original dataset prior to classification, feature selection, validation of clustered data, and quantification of the similarity of two clusterings. We test several commonly available methods within this framework to understand the level of similarity of classification results across algorithms. We apply the proposed framework to recently developed datasets for India to analyze to what extent catchment properties can explain observed catchment response. Our testing dataset includes characteristics for over 200 watersheds, comprising both natural (physio-climatic) and socio-economic characteristics. This framework allows us to understand the controls on observed hydrologic variability across India.
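
    Several steps of such a protocol (a clusterability check, and quantifying the similarity of two clusterings produced by different algorithms) have standard off-the-shelf analogues. The sketch below illustrates them with scikit-learn on synthetic stand-in data; it is a loose companion to the framework described above, not the authors' code.

        import numpy as np
        from sklearn.cluster import AgglomerativeClustering, KMeans
        from sklearn.metrics import adjusted_rand_score, silhouette_score

        rng = np.random.default_rng(4)
        X = rng.normal(size=(200, 12))   # stand-in for catchment attributes

        # Clusterability check: silhouette score of a trial partition
        trial = KMeans(n_clusters=4, n_init=10).fit_predict(X)
        print("silhouette:", silhouette_score(X, trial))

        # Similarity of two clusterings obtained with different algorithms
        labels_a = KMeans(n_clusters=4, n_init=10).fit_predict(X)
        labels_b = AgglomerativeClustering(n_clusters=4).fit_predict(X)
        print("adjusted Rand index:", adjusted_rand_score(labels_a, labels_b))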

  6. Neural network classification of questionable EGRET events

    NASA Astrophysics Data System (ADS)

    Meetre, C. A.; Norris, J. P.

    1992-02-01

    High energy gamma rays (greater than 20 MeV) pair producing in the spark chamber of the Energetic Gamma Ray Telescope Experiment (EGRET) give rise to a characteristic but highly variable 3-D locus of spark sites, which must be processed to decide whether the event is to be included in the database. A significant fraction (about 15 percent, or 10^4 events/day) of the candidate events cannot be categorized (accept/reject) by an automated rule-based procedure; they are therefore tagged, and must be examined and classified manually by a team of expert analysts. We describe a feedforward, back-propagation neural network approach to the classification of the questionable events. The algorithm computes a set of coefficients using representative exemplars drawn from the preclassified set of questionable events. These coefficients map a given input event into a decision vector that, ideally, describes the correct disposition of the event. The net's accuracy is then tested using a different subset of preclassified events. Preliminary results demonstrate the net's ability to correctly classify a large proportion of the events for some categories of questionables. Current work includes the use of much larger training sets to improve the accuracy of the net.

  7. Neural network classification of questionable EGRET events

    NASA Technical Reports Server (NTRS)

    Meetre, C. A.; Norris, J. P.

    1992-01-01

    High energy gamma rays (greater than 20 MeV) pair producing in the spark chamber of the Energetic Gamma Ray Telescope Experiment (EGRET) give rise to a characteristic but highly variable 3-D locus of spark sites, which must be processed to decide whether the event is to be included in the database. A significant fraction (about 15 percent, or 10^4 events/day) of the candidate events cannot be categorized (accept/reject) by an automated rule-based procedure; they are therefore tagged, and must be examined and classified manually by a team of expert analysts. We describe a feedforward, back-propagation neural network approach to the classification of the questionable events. The algorithm computes a set of coefficients using representative exemplars drawn from the preclassified set of questionable events. These coefficients map a given input event into a decision vector that, ideally, describes the correct disposition of the event. The net's accuracy is then tested using a different subset of preclassified events. Preliminary results demonstrate the net's ability to correctly classify a large proportion of the events for some categories of questionables. Current work includes the use of much larger training sets to improve the accuracy of the net.
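
    A modern stand-in for the feedforward back-propagation classifier described in the two records above is a small multilayer perceptron. The sketch below uses scikit-learn on synthetic features; in the real task the inputs would encode the 3-D spark-site loci, so every shape and name here is an assumption.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        X_train, y_train = rng.normal(size=(1000, 20)), rng.integers(0, 2, 1000)
        X_test, y_test = rng.normal(size=(200, 20)), rng.integers(0, 2, 200)

        # Logistic-activation net trained by stochastic gradient descent,
        # mapping each event's feature vector to an accept/reject decision
        net = MLPClassifier(hidden_layer_sizes=(32,), activation="logistic",
                            solver="sgd", learning_rate_init=0.01, max_iter=500)
        net.fit(X_train, y_train)
        print("held-out accuracy:", net.score(X_test, y_test))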

  8. Automatic optical detection and classification of marine animals around MHK converters using machine vision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunton, Steven

    Optical systems provide valuable information for evaluating interactions and associations between organisms and MHK energy converters and for capturing potentially rare encounters between marine organisms and MHK devices. The deluge of optical data from cabled monitoring packages makes expert review time-consuming and expensive. We propose algorithms and a processing framework to automatically extract events of interest from underwater video. The open-source software framework consists of background subtraction, filtering, feature extraction and hierarchical classification algorithms. This classification pipeline was validated on real-world data collected with an experimental underwater monitoring package. An event detection rate of 100% was achieved using robust principal components analysis (RPCA), Fourier feature extraction and a support vector machine (SVM) binary classifier. The detected events were then further classified into more complex classes: algae | invertebrate | vertebrate; one species | multiple species of fish; and interest rank. Greater than 80% accuracy was achieved using a combination of machine learning techniques.
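
    The processing chain above (background subtraction, feature extraction, binary classification) can be caricatured in a few lines. The sketch below substitutes a simple median background model for the robust PCA reported in the record and runs on random stand-in frames and labels; it shows the shape of the pipeline only.

        import numpy as np
        from sklearn.svm import SVC

        def detect_events(frames, k=1.0):
            # Flag frames whose foreground energy exceeds k sigma above the mean
            background = np.median(frames, axis=0)
            energy = np.array([np.abs(f - background).sum() for f in frames])
            return np.where(energy > energy.mean() + k * energy.std())[0]

        def fourier_features(frame, n=64):
            # Low-frequency magnitude spectrum as a compact event descriptor
            return np.abs(np.fft.rfft2(frame)).ravel()[:n]

        rng = np.random.default_rng(1)
        frames = rng.random((100, 48, 64))               # toy video frames
        events = detect_events(frames)
        X = np.array([fourier_features(frames[i]) for i in events])
        y = rng.integers(0, 2, len(events))              # stand-in expert labels
        clf = SVC(kernel="rbf").fit(X, y)                # binary event classifier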

  9. Classification of speech dysfluencies using LPC based parameterization techniques.

    PubMed

    Hariharan, M; Chee, Lim Sin; Ai, Ooi Chia; Yaacob, Sazali

    2012-06-01

    The goal of this paper is to discuss and compare three feature extraction methods: Linear Predictive Coefficients (LPC), Linear Prediction Cepstral Coefficients (LPCC) and Weighted Linear Prediction Cepstral Coefficients (WLPCC) for recognizing stuttered events. Speech samples from the University College London Archive of Stuttered Speech (UCLASS) were used for our analysis. The stuttered events were identified through manual segmentation and were used for feature extraction. Two simple classifiers, namely k-nearest neighbour (kNN) and Linear Discriminant Analysis (LDA), were employed for speech dysfluency classification. A conventional validation method was used for testing the reliability of the classifier results. The effects of different frame lengths, percentages of overlap, the value of the coefficient α in a first-order pre-emphasizer, and different prediction orders p were discussed. The speech dysfluency classification accuracy was found to be improved by applying statistical normalization before feature extraction. The experimental investigation showed that LPC, LPCC and WLPCC features can be used for identifying stuttered events, with WLPCC features slightly outperforming LPCC and LPC features.
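
    For concreteness, a first-order pre-emphasis step followed by autocorrelation-method LPC for a single frame looks roughly like the hedged sketch below; the coefficient value and windowing are assumptions, and the cepstral recursion that turns LPC into LPCC/WLPCC is omitted.

        import numpy as np
        from scipy.linalg import solve_toeplitz

        def lpc_features(frame, order=12, alpha=0.97):
            # First-order pre-emphasis: y[n] = x[n] - alpha * x[n-1]
            y = np.append(frame[0], frame[1:] - alpha * frame[:-1])
            y = y * np.hamming(len(y))
            # Autocorrelation method: solve the Toeplitz normal equations R a = r
            r = np.correlate(y, y, mode="full")[len(y) - 1:len(y) + order]
            return solve_toeplitz(r[:order], r[1:order + 1])

        coeffs = lpc_features(np.random.randn(320))   # one 20 ms frame at 16 kHz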

  10. The impact of OCR accuracy on automated cancer classification of pathology reports.

    PubMed

    Zuccon, Guido; Nguyen, Anthony N; Bergheim, Anton; Wickman, Sandra; Grayson, Narelle

    2012-01-01

    To evaluate the effects of Optical Character Recognition (OCR) on the automatic cancer classification of pathology reports. Scanned images of pathology reports were converted to electronic free text using a commercial OCR system. A state-of-the-art cancer classification system, the Medical Text Extraction (MEDTEX) system, was used to automatically classify the OCR reports. Classifications produced by MEDTEX on the OCR versions of the reports were compared with the classification from a human-amended version of the OCR reports. The employed OCR system was found to recognise scanned pathology reports with up to 99.12% character accuracy and up to 98.95% word accuracy. Errors in the OCR processing were found to have minimal impact on the automatic classification of scanned pathology reports into notifiable groups. However, the impact of OCR errors is not negligible when considering the extraction of cancer notification items, such as primary site, histological type, etc. The automatic cancer classification system used in this work, MEDTEX, has proven to be robust to errors produced by the acquisition of free-text pathology reports from scanned images through OCR software. However, issues emerge when considering the extraction of cancer notification items.

  11. Classification of visible and infrared hyperspectral images based on image segmentation and edge-preserving filtering

    NASA Astrophysics Data System (ADS)

    Cui, Binge; Ma, Xiudan; Xie, Xiaoyun; Ren, Guangbo; Ma, Yi

    2017-03-01

    The classification of hyperspectral images with few labeled samples is a major challenge, which is difficult to meet unless some spatial characteristics can be exploited. In this study, we proposed a novel spectral-spatial hyperspectral image classification method that exploits the spatial autocorrelation of hyperspectral images. First, image segmentation is performed on the hyperspectral image to assign each pixel to a homogeneous region. Second, the visible and infrared bands of the hyperspectral image are partitioned into multiple subsets of adjacent bands, and each subset is merged into one band. Recursive edge-preserving filtering is performed on each merged band, utilizing the spectral information of neighborhood pixels. Third, the resulting spectral and spatial feature band set is classified using an SVM classifier. Finally, bilateral filtering is performed to remove "salt-and-pepper" noise from the classification result. To preserve the spatial structure of the hyperspectral image, edge-preserving filtering is applied independently before and after the classification process. Experimental results on different hyperspectral images show that the proposed spectral-spatial classification approach is robust and offers higher classification accuracy than state-of-the-art methods when the number of labeled samples is small.

  12. Rank preserving sparse learning for Kinect based scene classification.

    PubMed

    Tao, Dapeng; Jin, Lianwen; Yang, Zhao; Li, Xuelong

    2013-10-01

    With the rapid development of RGB-D sensors and the promptly growing population of the low-cost Microsoft Kinect sensor, scene classification, which is a hard yet important problem in computer vision, has gained a resurgence of interest recently. That is because the depth information provided by the Kinect sensor opens an effective and innovative way for scene classification. In this paper, we propose a new scheme for scene classification, which applies locality-constrained linear coding (LLC) to local SIFT features for representing the RGB-D samples and classifies scenes through the cooperation between a new rank preserving sparse learning (RPSL) based dimension reduction and a simple classification method. RPSL considers four aspects: 1) it preserves the rank order information of the within-class samples in a local patch; 2) it maximizes the margin between the between-class samples on the local patch; 3) the L1-norm penalty is introduced to obtain the parsimony property; and 4) it minimizes the classification error by utilizing least-squares error minimization. Experiments are conducted on the NYU Depth V1 dataset and demonstrate the robustness and effectiveness of RPSL for scene classification.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Hong; Zeng, Hong; Lam, Robert

    Mismatch repair prevents the accumulation of erroneous insertions/deletions and non-Watson–Crick base pairs in the genome. Pathogenic mutations in the MLH1 gene are associated with a predisposition to Lynch and Turcot's syndromes. Although genetic testing for these mutations is available, robust classification of variants requires strong clinical and functional support. Here, the first structure of the N-terminus of human MLH1, determined by X-ray crystallography, is described. The structure shares a high degree of similarity with previously determined prokaryotic MLH1 homologs; however, this structure affords a more accurate platform for the classification of MLH1 variants.

  14. Heterogeneous but “Standard” Coding Systems for Adverse Events: Issues in Achieving Interoperability between Apples and Oranges

    PubMed Central

    Richesson, Rachel L.; Fung, Kin Wah; Krischer, Jeffrey P.

    2008-01-01

    Monitoring adverse events (AEs) is an important part of clinical research and a crucial target for data standards. The representation of adverse events themselves requires the use of controlled vocabularies with thousands of needed clinical concepts. Several data standards for adverse events currently exist, each with a strong user base. The structure and features of these current adverse event data standards (including terminologies and classifications) are different, so comparisons and evaluations are not straightforward, nor are strategies for their harmonization. Three different data standards - the Medical Dictionary for Regulatory Activities (MedDRA) and the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) terminologies, and Common Terminology Criteria for Adverse Events (CTCAE) classification - are explored as candidate representations for AEs. This paper describes the structural features of each coding system, their content and relationship to the Unified Medical Language System (UMLS), and unsettled issues for future interoperability of these standards. PMID:18406213

  15. Improving Prediction of Large-scale Regime Transitions

    NASA Astrophysics Data System (ADS)

    Gyakum, J. R.; Roebber, P.; Bosart, L. F.; Honor, A.; Bunker, E.; Low, Y.; Hart, J.; Bliankinshtein, N.; Kolly, A.; Atallah, E.; Huang, Y.

    2017-12-01

    Cool season atmospheric predictability over the CONUS on subseasonal time scales (1-4 weeks) is critically dependent upon the structure, configuration, and evolution of the North Pacific jet stream (NPJ). The NPJ can be perturbed on its tropical side on synoptic time scales by recurving and transitioning tropical cyclones (TCs) and on subseasonal time scales by longitudinally varying convection associated with the Madden-Julian Oscillation (MJO). Likewise, the NPJ can be perturbed on its poleward side on synoptic time scales by midlatitude and polar disturbances that originate over the Asian continent. These midlatitude and polar disturbances can often trigger downstream Rossby wave propagation across the North Pacific, North America, and the North Atlantic. The project team is investigating the following multiscale processes and features: the spatiotemporal distribution of cyclone clustering over the Northern Hemisphere; cyclone clustering as influenced by atmospheric blocking and the phases and amplitudes of the major teleconnection indices, ENSO and the MJO; composite and case study analyses of representative cyclone clustering events to establish the governing dynamics; regime change predictability horizons associated with cyclone clustering events; Arctic air mass generation and modification; life cycles of the MJO; and poleward heat and moisture transports of subtropical air masses. A critical component of the study is weather regime classification. These classifications are defined through the spatiotemporal clustering of surface cyclogenesis; a general circulation metric combining 500-hPa and dynamic tropopause data; and Self-Organizing Maps (SOM) constructed from dynamic tropopause and 850-hPa equivalent potential temperature data. The resultant lattice of nodes is used to categorize synoptic classes and their predictability, as well as to determine the robustness of the CFSv2 model climate relative to observations. Transition pathways between these synoptic classes, both in the observations and in the CFSv2, are investigated. At a future point in the project, the results from these multiscale investigations will be integrated into a prediction tool for important variables (temperatures, precipitation and their extremes) on the 1-4 week timeframe.

  16. Application of Neural Networks for classification of Patau, Edwards, Down, Turner and Klinefelter Syndrome based on first trimester maternal serum screening data, ultrasonographic findings and patient demographics.

    PubMed

    Catic, Aida; Gurbeta, Lejla; Kurtovic-Kozaric, Amina; Mehmedbasic, Senad; Badnjevic, Almir

    2018-02-13

    The usage of Artificial Neural Networks (ANNs) for genome-enabled classification and establishing genome-phenotype correlations has been investigated more extensively over the past few years. The reason for this is that ANNs are good approximators of complex functions, so classification can be performed without the need for an explicitly defined input-output model. This engineering tool can be applied to optimize existing methods for disease/syndrome classification. Cytogenetic and molecular analyses are the most frequent tests used in prenatal diagnostics for the early detection of Turner, Klinefelter, Patau, Edwards and Down syndrome. These procedures can be lengthy and repetitive, and often employ invasive techniques, so a robust automated method for classifying and reporting prenatal diagnostics would greatly help clinicians with their routine work. The database consisted of data collected from 2500 pregnant women who came to the Institute of Gynecology, Infertility and Perinatology "Mehmedbasic" for routine antenatal care between January 2000 and December 2016. During the first trimester, all women were subjected to a screening test in which values of maternal serum pregnancy-associated plasma protein A (PAPP-A) and free beta human chorionic gonadotropin (β-hCG) were measured. Also, fetal nuchal translucency thickness and the presence or absence of the nasal bone were observed using ultrasound. The architectures of linear feedforward and feedback neural networks were investigated for various training data distributions and numbers of neurons in the hidden layer. The feedback neural network architecture outperformed the feedforward architecture in predictive ability for all five aneuploidy prenatal syndrome classes. A feedforward neural network with 15 neurons in the hidden layer achieved a classification sensitivity of 92.00%, while the classification sensitivity of the feedback (Elman's) neural network was 99.00%. Average accuracy was 89.6% for the feedforward network and 98.8% for the feedback network. The results presented in this paper prove that an expert diagnostic system based on neural networks can be efficiently used for classification of the five aneuploidy syndromes covered by this study, based on first trimester maternal serum screening data, ultrasonographic findings and patient demographics. The developed expert system proved to be simple, robust, and powerful in properly classifying prenatal aneuploidy syndromes.

  17. A new moonquake catalog from Apollo 17 geophone data

    NASA Astrophysics Data System (ADS)

    Dimech, Jesse-Lee; Knapmeyer-Endrun, Brigitte; Weber, Renee

    2017-04-01

    New lunar seismic events have been detected in geophone data from the Apollo 17 Lunar Seismic Profile Experiment (LSPE). This dataset is already known to contain an abundance of thermal seismic events, and potentially some meteorite impacts, but prior to this study only 26 days of LSPE "listening mode" data had been analysed. In this new analysis, additional listening mode data collected between August 1976 and April 1977 are incorporated. To the authors' knowledge, these 8 months of data have not previously been used to detect seismic moonquake events. The geophones in question are situated adjacent to the Apollo 17 site in the Taurus-Littrow valley, about 5.5 km east of the Lee-Lincoln scarp, and between the North and South Massifs. Any of these features are potential seismic sources. We have used an event-detection and classification technique based on Hidden Markov Models to automatically detect and categorize seismic signals, in order to objectively generate a seismic event catalog. Currently, 2.5 months of the 8-month listening mode dataset have been processed, totaling 14,338 detections. Of these, 672 detections (classification "n1") have a sharp onset with a steep rise time, suggesting they occur close to the recording geophone. These events almost all occur in association with lunar sunrise over a span of 1-2 days. One possibility is that they originate from the nearby Apollo 17 lunar lander due to rapid heating at sunrise. A further 10,004 detections (classification "d1") show strong diurnal periodicity, with detections increasing during the lunar day and reaching a peak at sunset; these probably represent thermal events from the lunar regolith immediately surrounding the Apollo 17 landing site. The final 3662 detections (classification "d2") have emergent onsets and relatively long durations. These detections have peaks associated with lunar sunrise and sunset, but also sometimes peak at seemingly random times. Their source mechanism has not yet been investigated. It is possible that many of these are misclassified d1/n1 events, and further QC work needs to be undertaken. But it is also possible that many represent more distant thermal moonquakes, e.g. from the North and South Massifs, or even the ridge adjacent to the Lee-Lincoln scarp. The unknown event spikes will be the subject of closer inspection once the HMM technique has been refined.
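
    Hidden-Markov-model event detection of the general kind used above can be prototyped with the hmmlearn package: fit a Gaussian HMM to a windowed feature stream and read candidate event boundaries off state changes. The features, state count, and parameters below are assumptions, and in practice one model per event class would be trained on labelled exemplars.

        import numpy as np
        from hmmlearn import hmm

        rng = np.random.default_rng(2)
        features = rng.normal(size=(5000, 2))   # e.g. energy and dominant frequency

        model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
        model.fit(features)
        states = model.predict(features)                   # Viterbi state sequence
        onsets = np.flatnonzero(np.diff(states) != 0) + 1  # candidate boundaries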

  18. Faster Trees: Strategies for Accelerated Training and Prediction of Random Forests for Classification of Polsar Images

    NASA Astrophysics Data System (ADS)

    Hänsch, Ronny; Hellwich, Olaf

    2018-04-01

    Random Forests have continuously proven to be one of the most accurate, robust, and efficient methods for the supervised classification of images in general and of polarimetric synthetic aperture radar data in particular. While the majority of previous work focuses on improving classification accuracy, we aim to accelerate the training of the classifier as well as its usage during prediction while maintaining its accuracy. Unlike other approaches, we mainly consider algorithmic changes in order to stay as independent as possible of platform and programming language. The final model achieves approximately 60 times faster training and 500 times faster prediction, while the accuracy decreases only marginally, by roughly 1%.

  19. Experimental Investigation for Fault Diagnosis Based on a Hybrid Approach Using Wavelet Packet and Support Vector Classification

    PubMed Central

    Li, Pengfei; Jiang, Yongying; Xiang, Jiawei

    2014-01-01

    To deal with the difficulty of obtaining a large number of fault samples under practical conditions for mechanical fault diagnosis, a hybrid method combining wavelet packet decomposition and support vector classification (SVC) is proposed. The wavelet packet is employed to decompose the vibration signal to obtain the energy ratio in each frequency band. Taking the energy ratios as feature vectors, the pattern recognition results are obtained by the SVC. The rolling bearing and gear fault diagnostic results on a typical experimental platform show that the present approach is robust to noise and has higher classification accuracy, and thus provides a better way to diagnose mechanical faults when only small numbers of fault samples are available. PMID:24688361
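
    The energy-ratio feature construction described above maps naturally onto PyWavelets: decompose each vibration snippet into a wavelet packet, take the energy of every terminal node, and normalize. The wavelet choice, level, and synthetic data below are assumptions.

        import numpy as np
        import pywt
        from sklearn.svm import SVC

        def energy_ratios(signal, wavelet="db4", level=3):
            # Energy ratio of each terminal wavelet-packet node
            wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
            energies = np.array([np.sum(np.square(node.data))
                                 for node in wp.get_level(level, order="freq")])
            return energies / energies.sum()

        rng = np.random.default_rng(3)
        X = np.array([energy_ratios(rng.normal(size=2048)) for _ in range(40)])
        y = rng.integers(0, 2, 40)            # stand-in labels: healthy vs. faulty
        clf = SVC(kernel="rbf").fit(X, y)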

  20. A binary genetic programing model for teleconnection identification between global sea surface temperature and local maximum monthly rainfall events

    NASA Astrophysics Data System (ADS)

    Danandeh Mehr, Ali; Nourani, Vahid; Hrnjica, Bahrudin; Molajou, Amir

    2017-12-01

    The effectiveness of genetic programming (GP) for solving regression problems in hydrology has been recognized in recent studies. However, its capability to solve classification problems has not been sufficiently explored so far. This study develops and applies a novel classification-forecasting model, namely Binary GP (BGP), for teleconnection studies between sea surface temperature (SST) variations and maximum monthly rainfall (MMR) events. The BGP integrates certain types of data pre-processing and post-processing methods with a conventional GP engine to enhance its ability to solve both regression and classification problems simultaneously. The model was trained and tested using SST series of the Black Sea, Mediterranean Sea, and Red Sea as potential predictors, as well as classified MMR events at two locations in Iran as the predictand. Skill of the model was measured with regard to different rainfall thresholds and SST lags and compared to that of the hybrid decision tree-association rule (DTAR) model available in the literature. The results indicated that the proposed model can identify potential teleconnection signals of the surrounding seas beneficial to long-term forecasting of the occurrence of the classified MMR events.

  1. Deep learning based beat event detection in action movie franchises

    NASA Astrophysics Data System (ADS)

    Ejaz, N.; Khan, U. A.; Martínez-del-Amor, M. A.; Sparenberg, H.

    2018-04-01

    Automatic understanding and interpretation of movies can be used in a variety of ways to semantically manage the massive volumes of movie data. The "Action Movie Franchises" dataset is a collection of twenty Hollywood action movies from five famous franchises with ground truth annotations at the shot and beat level of each movie. In this dataset, annotations are provided for eleven semantic beat categories. In this work, we propose a deep learning based method to classify shots and beat events in this dataset. A training dataset for each of the eleven beat categories is developed and a Convolutional Neural Network is trained. After finding the shot boundaries, key frames are extracted for each shot and three classification labels are assigned to each key frame. The classification labels for the key frames in a particular shot are then used to assign a unique label to each shot. A simple sliding window based method is then used to group adjacent shots having the same label in order to find a particular beat event. The results of beat event classification are presented in terms of precision, recall, and F-measure. The results are compared with an existing technique and significant improvements are recorded.
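
    The final grouping step, merging runs of adjacent shots that share a label into one beat event, is simple enough to state exactly; the sketch below is a plain-Python rendering with assumed label strings.

        from itertools import groupby

        def group_shots(shot_labels):
            # Merge runs of equal adjacent shot labels into (label, start, end) spans
            events, start = [], 0
            for label, run in groupby(shot_labels):
                length = len(list(run))
                events.append((label, start, start + length - 1))
                start += length
            return events

        # [('fight', 0, 2), ('chase', 3, 4), ('fight', 5, 5)]
        print(group_shots(["fight", "fight", "fight", "chase", "chase", "fight"]))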

  2. IARC use of oxidative stress as key mode of action characteristic for facilitating cancer classification: Glyphosate case example illustrating a lack of robustness in interpretative implementation.

    PubMed

    Bus, James S

    2017-06-01

    The International Agency for Research on Cancer (IARC) has formulated 10 key characteristics of human carcinogens to incorporate mechanistic data into cancer hazard classifications. The analysis used glyphosate as a case example to examine the robustness of IARC's determination of oxidative stress as "strong" evidence supporting a plausible cancer mechanism in humans. The IARC analysis primarily relied on 14 human/mammalian studies; 19 non-mammalian studies were uninformative of human cancer given the broad spectrum of test species and extensive use of formulations and aquatic testing. The mammalian studies had substantial experimental limitations for informing cancer mechanism including use of: single doses and time points; cytotoxic/toxic test doses; tissues not identified as potential cancer targets; glyphosate formulations or mixtures; technically limited oxidative stress biomarkers. The doses were many orders of magnitude higher than human exposures determined in human biomonitoring studies. The glyphosate case example reveals that the IARC evaluation fell substantially short of "strong" supporting evidence of oxidative stress as a plausible human cancer mechanism, and suggests that other IARC monographs relying on the 10 key characteristics approach should be similarly examined for a lack of robust data integration fundamental to reasonable mode of action evaluations. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Robust linear discriminant models to solve financial crisis in banking sectors

    NASA Astrophysics Data System (ADS)

    Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Idris, Faoziah; Ali, Hazlina; Omar, Zurni

    2014-12-01

    Linear discriminant analysis (LDA) is a widely used technique in pattern classification via an equation which minimizes the probability of misclassifying cases into their respective categories. However, the performance of the classical estimators in LDA depends strongly on the assumptions of normality and homoscedasticity. Several robust estimators for LDA, such as the Minimum Covariance Determinant (MCD), S-estimators and the Minimum Volume Ellipsoid (MVE), have been proposed by many authors to alleviate the non-robustness of the classical estimates. In this paper, we investigate the financial crisis of Malaysian banking institutions using robust LDA and classical LDA methods. Our objective is to distinguish the "distress" and "non-distress" banks in Malaysia by using the LDA models. The hit ratio is used to validate the predictive accuracy of the LDA models. The performance of LDA is evaluated by estimating the misclassification rate via the apparent error rate. The results and comparisons show that the robust estimators provide better performance than the classical estimators for LDA.
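
    One of the robust estimators named above, the MCD, is available in scikit-learn and can replace the classical mean and covariance inside a two-class linear discriminant. The sketch below shows that substitution plus a hit-ratio check; it is an illustrative reconstruction under assumed 0/1 labels, not the authors' procedure.

        import numpy as np
        from sklearn.covariance import MinCovDet

        def fit_robust_lda(X, y):
            # MCD-estimated class means and a pooled robust covariance
            stats = {c: MinCovDet().fit(X[y == c]) for c in (0, 1)}
            cov = (stats[0].covariance_ + stats[1].covariance_) / 2.0
            w = np.linalg.solve(cov, stats[1].location_ - stats[0].location_)
            b = -0.5 * w @ (stats[1].location_ + stats[0].location_)
            return w, b

        def hit_ratio(X, y, w, b):
            # Fraction of cases classified into the correct group
            return ((X @ w + b > 0).astype(int) == y).mean()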

  4. Robust kernel representation with statistical local features for face recognition.

    PubMed

    Yang, Meng; Zhang, Lei; Shiu, Simon Chi-Keung; Zhang, David

    2013-06-01

    Factors such as misalignment, pose variation, and occlusion make robust face recognition a difficult problem. It is known that statistical features such as local binary pattern are effective for local feature extraction, whereas the recently proposed sparse or collaborative representation-based classification has shown interesting results in robust face recognition. In this paper, we propose a novel robust kernel representation model with statistical local features (SLF) for robust face recognition. Initially, multipartition max pooling is used to enhance the invariance of SLF to image registration error. Then, a kernel-based representation model is proposed to fully exploit the discrimination information embedded in the SLF, and robust regression is adopted to effectively handle the occlusion in face images. Extensive experiments are conducted on benchmark face databases, including extended Yale B, AR (A. Martinez and R. Benavente), multiple pose, illumination, and expression (multi-PIE), facial recognition technology (FERET), face recognition grand challenge (FRGC), and labeled faces in the wild (LFW), which have different variations of lighting, expression, pose, and occlusions, demonstrating the promising performance of the proposed method.

  5. Influence of variable selection on partial least squares discriminant analysis models for explosive residue classification

    NASA Astrophysics Data System (ADS)

    De Lucia, Frank C., Jr.; Gottfried, Jennifer L.

    2011-02-01

    Using a series of thirteen organic materials that includes novel high-nitrogen energetic materials, conventional organic military explosives, and benign organic materials, we have demonstrated the importance of variable selection for maximizing residue discrimination with partial least squares discriminant analysis (PLS-DA). We built several PLS-DA models using different variable sets based on laser induced breakdown spectroscopy (LIBS) spectra of the organic residues on an aluminum substrate under an argon atmosphere. The model classification results for each sample are presented and the influence of the variables on these results is discussed. We found that using the whole spectra as the data input for the PLS-DA model gave the best results. However, variables due to the surrounding atmosphere and the substrate contribute to discrimination when the whole spectra are used, indicating this may not be the most robust model. Further iterative testing with additional validation data sets is necessary to determine the most robust model.
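
    PLS-DA itself is commonly implemented as PLS regression on one-hot class indicators followed by an argmax, and variable selection then amounts to restricting which spectral columns enter the model. The sketch below shows this baseline with scikit-learn; names and the component count are assumptions.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        def fit_pls_da(spectra, labels, n_components=10):
            # Regress one-hot class indicators on spectra (rows = samples)
            classes = np.unique(labels)
            Y = (labels[:, None] == classes[None, :]).astype(float)
            pls = PLSRegression(n_components=n_components).fit(spectra, Y)
            return pls, classes

        def predict_pls_da(pls, classes, spectra):
            # Class with the largest predicted indicator wins
            return classes[np.argmax(pls.predict(spectra), axis=1)]

    Dropping columns (e.g. atmospheric or substrate emission lines) before calling fit_pls_da is exactly the kind of variable selection whose influence the record examines.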

  6. Robust representation and recognition of facial emotions using extreme sparse learning.

    PubMed

    Shojaeilangari, Seyedehsamaneh; Yau, Wei-Yun; Nandakumar, Karthik; Li, Jun; Teoh, Eam Khwang

    2015-07-01

    Recognition of natural emotions from human faces is an interesting topic with a wide range of potential applications, such as human-computer interaction, automated tutoring systems, image and video retrieval, smart environments, and driver warning systems. Traditionally, facial emotion recognition systems have been evaluated on laboratory controlled data, which is not representative of the environment faced in real-world applications. To robustly recognize the facial emotions in real-world natural situations, this paper proposes an approach called extreme sparse learning, which has the ability to jointly learn a dictionary (set of basis) and a nonlinear classification model. The proposed approach combines the discriminative power of extreme learning machine with the reconstruction property of sparse representation to enable accurate classification when presented with noisy signals and imperfect data recorded in natural settings. In addition, this paper presents a new local spatio-temporal descriptor that is distinctive and pose-invariant. The proposed framework is able to achieve the state-of-the-art recognition accuracy on both acted and spontaneous facial emotion databases.

  7. ELUCIDATING BRAIN CONNECTIVITY NETWORKS IN MAJOR DEPRESSIVE DISORDER USING CLASSIFICATION-BASED SCORING.

    PubMed

    Sacchet, Matthew D; Prasad, Gautam; Foland-Ross, Lara C; Thompson, Paul M; Gotlib, Ian H

    2014-04-01

    Graph theory is increasingly used in the field of neuroscience to understand the large-scale network structure of the human brain. There is also considerable interest in applying machine learning techniques in clinical settings, for example, to make diagnoses or predict treatment outcomes. Here we used support-vector machines (SVMs), in conjunction with whole-brain tractography, to identify graph metrics that best differentiate individuals with Major Depressive Disorder (MDD) from nondepressed controls. To do this, we applied a novel feature-scoring procedure that incorporates iterative classifier performance to assess feature robustness. We found that small-worldness, a measure of the balance between global integration and local specialization, most reliably differentiated MDD from nondepressed individuals. Post-hoc regional analyses suggested that heightened connectivity of the subcallosal cingulate gyrus (SCG) in MDD contributes to these differences. The current study provides a novel way to assess the robustness of classification features and reveals anomalies in large-scale neural networks in MDD.

  8. The synchronous neural interactions test as a functional neuromarker for post-traumatic stress disorder (PTSD): a robust classification method based on the bootstrap

    NASA Astrophysics Data System (ADS)

    Georgopoulos, A. P.; Tan, H.-R. M.; Lewis, S. M.; Leuthold, A. C.; Winskowski, A. M.; Lynch, J. K.; Engdahl, B.

    2010-02-01

    Traumatic experiences can produce post-traumatic stress disorder (PTSD) which is a debilitating condition and for which no biomarker currently exists (Institute of Medicine (US) 2006 Posttraumatic Stress Disorder: Diagnosis and Assessment (Washington, DC: National Academies)). Here we show that the synchronous neural interactions (SNI) test which assesses the functional interactions among neural populations derived from magnetoencephalographic (MEG) recordings (Georgopoulos A P et al 2007 J. Neural Eng. 4 349-55) can successfully differentiate PTSD patients from healthy control subjects. Externally cross-validated, bootstrap-based analyses yielded >90% overall accuracy of classification. In addition, all but one of 18 patients who were not receiving medications for their disease were correctly classified. Altogether, these findings document robust differences in brain function between the PTSD and control groups that can be used for differential diagnosis and which possess the potential for assessing and monitoring disease progression and effects of therapy.
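
    A generic externally validated, bootstrap-based accuracy estimate in the spirit of (though not identical to) the analyses described above can be sketched as follows; the classifier choice is a placeholder.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def bootstrap_accuracy(X, y, n_boot=1000, seed=0):
            # Out-of-bag accuracy distribution for a simple linear classifier
            rng = np.random.default_rng(seed)
            n, scores = len(y), []
            for _ in range(n_boot):
                idx = rng.integers(0, n, n)              # resample with replacement
                oob = np.setdiff1d(np.arange(n), idx)    # held-out cases
                if oob.size == 0 or len(np.unique(y[idx])) < 2:
                    continue
                clf = LinearDiscriminantAnalysis().fit(X[idx], y[idx])
                scores.append(clf.score(X[oob], y[oob]))
            return np.mean(scores), np.percentile(scores, [2.5, 97.5])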

  9. Multi-voxel patterns of visual category representation during episodic encoding are predictive of subsequent memory

    PubMed Central

    Kuhl, Brice A.; Rissman, Jesse; Wagner, Anthony D.

    2012-01-01

    Successful encoding of episodic memories is thought to depend on contributions from prefrontal and temporal lobe structures. Neural processes that contribute to successful encoding have been extensively explored through univariate analyses of neuroimaging data that compare mean activity levels elicited during the encoding of events that are subsequently remembered vs. those subsequently forgotten. Here, we applied pattern classification to fMRI data to assess the degree to which distributed patterns of activity within prefrontal and temporal lobe structures elicited during the encoding of word-image pairs were diagnostic of the visual category (Face or Scene) of the encoded image. We then assessed whether representation of category information was predictive of subsequent memory. Classification analyses indicated that temporal lobe structures contained information robustly diagnostic of visual category. Information in prefrontal cortex was less diagnostic of visual category, but was nonetheless associated with highly reliable classifier-based evidence for category representation. Critically, trials associated with greater classifier-based estimates of category representation in temporal and prefrontal regions were associated with a higher probability of subsequent remembering. Finally, consideration of trial-by-trial variance in classifier-based measures of category representation revealed positive correlations between prefrontal and temporal lobe representations, with the strength of these correlations varying as a function of the category of image being encoded. Together, these results indicate that multi-voxel representations of encoded information can provide unique insights into how visual experiences are transformed into episodic memories. PMID:21925190

  10. An automated and fast approach to detect single-trial visual evoked potentials with application to brain-computer interface.

    PubMed

    Tu, Yiheng; Hung, Yeung Sam; Hu, Li; Huang, Gan; Hu, Yong; Zhang, Zhiguo

    2014-12-01

    This study aims (1) to develop an automated and fast approach for detecting visual evoked potentials (VEPs) in single trials and (2) to apply the single-trial VEP detection approach in designing a real-time, high-performance brain-computer interface (BCI) system. The single-trial VEP detection approach uses the common spatial pattern (CSP) as a spatial filter and wavelet filtering (WF) as a temporal-spectral filter to jointly enhance the signal-to-noise ratio (SNR) of single-trial VEPs. The performance of the joint spatial-temporal-spectral filtering approach was assessed in a four-command VEP-based BCI system. The offline classification accuracy of the BCI system was significantly improved from 67.6±12.5% (raw data) to 97.3±2.1% (data filtered by CSP and WF). The proposed approach was successfully implemented in an online BCI system, in which subjects could make 20 decisions in one minute with a classification accuracy of 90%. The proposed single-trial detection approach is able to obtain robust and reliable VEP waveforms in an automatic and fast way and is applicable in VEP-based online BCI systems. This approach provides a real-time and automated solution for single-trial detection of evoked potentials or event-related potentials (EPs/ERPs) in various paradigms, which could benefit many applications such as BCI and intraoperative monitoring. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
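
    The CSP spatial filter named above is classically obtained from a generalized eigendecomposition of the two classes' covariance matrices; the sketch below is that textbook construction (not the authors' exact code), with trial layout and filter count assumed.

        import numpy as np
        from scipy.linalg import eigh

        def csp_filters(trials_a, trials_b, n_filters=4):
            # trials_*: lists of (channels x samples) arrays for the two conditions
            def mean_cov(trials):
                return sum(t @ t.T / np.trace(t @ t.T) for t in trials) / len(trials)
            Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
            # Generalized eigenproblem: Ca w = lambda (Ca + Cb) w
            vals, vecs = eigh(Ca, Ca + Cb)
            order = np.argsort(vals)
            picks = np.r_[order[:n_filters // 2], order[-(n_filters // 2):]]
            return vecs[:, picks].T      # apply as W @ trial to get CSP components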

  11. Pattern recognition applied to seismic signals of Llaima volcano (Chile): An evaluation of station-dependent classifiers

    NASA Astrophysics Data System (ADS)

    Curilem, Millaray; Huenupan, Fernando; Beltrán, Daniel; San Martin, Cesar; Fuentealba, Gustavo; Franco, Luis; Cardona, Carlos; Acuña, Gonzalo; Chacón, Max; Khan, M. Salman; Becerra Yoma, Nestor

    2016-04-01

    Automatic pattern recognition applied to seismic signals from volcanoes may assist seismic monitoring by reducing the workload of analysts, allowing them to focus on more challenging activities, such as producing reports, implementing models, and understanding volcanic behaviour. In a previous work, we proposed a structure for automatic classification of seismic events at Llaima volcano, one of the most active volcanoes in the Southern Andes, located in the Araucanía Region of Chile. A database of events taken from three monitoring stations on the volcano was used to create a classification structure independent of which station provided the signal. The database included three types of volcanic events: tremor, long period, and volcano-tectonic, plus a contrast group containing other types of seismic signals. In the present work, we maintain the same classification scheme, but we consider the stations' information separately in order to assess whether the complementary information provided by different stations improves the performance of the classifier in recognising seismic patterns. This paper proposes two strategies for combining the information from the stations: i) combining the features extracted from the signals of each station and ii) combining the classifiers of each station. In the first case, the features extracted from the signals of each station are combined to form the input for a single classification structure. In the second, a decision stage combines the results of the per-station classifiers to give a unique output. The results confirm that the station-dependent strategies that combine the features and the classifiers from several stations improve the classification performance, and that the combination of the features provides the best performance. The results show an average improvement of 9% in classification accuracy when compared with the station-independent method.

  12. Robust optical sensors for safety critical automotive applications

    NASA Astrophysics Data System (ADS)

    De Locht, Cliff; De Knibber, Sven; Maddalena, Sam

    2008-02-01

    Optical sensors for the automotive industry need to be robust, high performing and low cost. This paper focuses on the impact of automotive requirements on optical sensor design and packaging. The main strategies for lowering optical sensor entry barriers in the automotive market are: having sensor calibration and tuning performed by the sensor manufacturer, providing on-chip sensor test modes to guarantee functional integrity during operation, and recognising that package technology is key. In conclusion, optical sensor applications in automotive are growing. Optical sensor robustness has matured to the level of safety-critical applications: Electrical Power Assisted Steering (EPAS) and Drive-by-Wire with systems based on optical linear arrays, and Automated Cruise Control (ACC), Lane Change Assist and Driver Classification/Smart Airbag Deployment with systems based on camera imagers.

  13. Fast traffic sign recognition with a rotation invariant binary pattern based feature.

    PubMed

    Yin, Shouyi; Ouyang, Peng; Liu, Leibo; Guo, Yike; Wei, Shaojun

    2015-01-19

    Robust and fast traffic sign recognition is very important but difficult for safe driving assistance systems. This study addresses fast and robust traffic sign recognition to enhance driving safety. The proposed method includes three stages. First, a typical Hough transformation is adopted to implement coarse-grained location of the candidate regions of traffic signs. Second, a RIBP (Rotation Invariant Binary Pattern) based feature in the affine and Gaussian space is proposed to reduce the time of traffic sign detection and achieve robust traffic sign detection in terms of scale, rotation, and illumination. Third, the techniques of ANN (Artificial Neural Network) based feature dimension reduction and classification are designed to reduce the traffic sign recognition time. Compared with the current work, the experimental results on public datasets show that this work achieves robustness in traffic sign recognition with comparable recognition accuracy and faster processing speed, including training speed and recognition speed.

  14. Robust lane detection and tracking using multiple visual cues under stochastic lane shape conditions

    NASA Astrophysics Data System (ADS)

    Huang, Zhi; Fan, Baozheng; Song, Xiaolin

    2018-03-01

    As one of the essential components of environment perception techniques for an intelligent vehicle, lane detection is confronted with challenges including robustness against complicated disturbances and illumination, as well as adaptability to stochastic lane shapes. To overcome these issues, we propose a robust lane detection method that applies a classification-generation-growth-based (CGG) operator to the detected lines, whereby the linear lane markings are identified by synergizing multiple visual cues with a priori knowledge and spatial-temporal information. According to the quality of the linear lane fitting, the linear and linear-parabolic models are dynamically switched to describe the actual lane. A Kalman filter with adaptive noise covariance and region-of-interest (ROI) tracking are applied to improve robustness and efficiency. Experiments were conducted with images covering various challenging scenarios. The experimental results demonstrate the effectiveness of the presented method under complicated disturbances, illumination, and stochastic lane shapes.
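
    A minimal sketch of Kalman tracking of lane-model parameters with an adaptive measurement-noise covariance, in the spirit described above; the state vector, adaptation rule, and constants are illustrative:

        # Illustrative adaptive Kalman sketch (assumed state and adaptation rule).
        import numpy as np

        class AdaptiveKalman:
            def __init__(self, n):
                self.x = np.zeros(n)          # lane-model parameters (e.g., offset, slope)
                self.P = np.eye(n)            # state covariance
                self.Q = 1e-3 * np.eye(n)     # process noise
                self.R = 1e-1 * np.eye(n)     # baseline measurement noise (adapted online)

            def step(self, z, fit_quality):
                # Inflate R when the line fit is poor so bad detections are trusted less.
                R = self.R / max(fit_quality, 1e-6)
                P_pred = self.P + self.Q                     # identity state transition
                K = P_pred @ np.linalg.inv(P_pred + R)       # Kalman gain
                self.x = self.x + K @ (z - self.x)
                self.P = (np.eye(len(self.x)) - K) @ P_pred
                return self.x

        kf = AdaptiveKalman(2)
        estimate = kf.step(z=np.array([0.4, 0.01]), fit_quality=0.8)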

  15. Fast Traffic Sign Recognition with a Rotation Invariant Binary Pattern Based Feature

    PubMed Central

    Yin, Shouyi; Ouyang, Peng; Liu, Leibo; Guo, Yike; Wei, Shaojun

    2015-01-01

    Robust and fast traffic sign recognition is very important but difficult for safe driving assistance systems. This study addresses fast and robust traffic sign recognition to enhance driving safety. The proposed method includes three stages. First, a typical Hough transformation is adopted to implement coarse-grained location of the candidate regions of traffic signs. Second, a RIBP (Rotation Invariant Binary Pattern) based feature in the affine and Gaussian space is proposed to reduce the time of traffic sign detection and achieve robust traffic sign detection in terms of scale, rotation, and illumination. Third, the techniques of ANN (Artificial Neutral Network) based feature dimension reduction and classification are designed to reduce the traffic sign recognition time. Compared with the current work, the experimental results in the public datasets show that this work achieves robustness in traffic sign recognition with comparable recognition accuracy and faster processing speed, including training speed and recognition speed. PMID:25608217

  16. A reductionist approach to extract robust molecular markers from microarray data series - Isolating markers to track osseointegration.

    PubMed

    Barik, Anwesha; Banerjee, Satarupa; Dhara, Santanu; Chakravorty, Nishant

    2017-04-01

    Complexities in full genome expression studies hinder the extraction of tracker genes to analyze the course of biological events. In this study, we demonstrate the application of supervised machine learning methods to reduce the irrelevance in microarray data series and thereby extract robust molecular markers to track biological processes. The methodology has been illustrated by analyzing whole genome expression studies on bone-implant integration (osseointegration). Being a biological process, osseointegration is known to leave a trail of genetic footprints during its course. In spite of the existence of an enormous amount of raw data in public repositories, researchers still do not have access to a panel of genes that can definitively track osseointegration. The results from our study revealed that panels comprising matrix metalloproteinase and collagen genes were able to track osseointegration on implant surfaces (MMP9 and COL1A2 on micro-textured; MMP12 and COL6A3 on superimposed nano-textured surfaces) with 100% classification accuracy, specificity and sensitivity. Further, our analysis showed the importance of the progression of time in the establishment of the mechanical connection at the bone-implant interface. The findings from this study are expected to be useful to researchers investigating osseointegration of novel implant materials, especially at the early stage. The methodology demonstrated can be easily adapted by scientists in different fields to analyze large databases for other biological processes. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Influence of Climate Oscillations on Extreme Precipitation in Texas

    NASA Astrophysics Data System (ADS)

    Bhatia, N.; Singh, V. P.; Srivastav, R. K.

    2016-12-01

    Much research in the field of hydroclimatology is focusing on the impact of climate variability on hydrologic extremes. Recent studies show that the unique geographical location and the enormous areal extent, coupled with extensive variations in climate oscillations, have intensified the regional hydrologic cycle of Texas. The state-wide extreme precipitation events can actually be attributed to sea-surface pressure and temperature anomalies, such as the Bermuda High and jet streams, which are further triggered by such climate oscillations. This study aims to quantify the impact of five major Atlantic- and Pacific-Ocean-related climate oscillations: (i) Atlantic Multidecadal Oscillation (AMO), (ii) North Atlantic Oscillation (NAO), (iii) Pacific Decadal Oscillation (PDO), (iv) Pacific North American Pattern (PNA), and (v) Southern Oscillation Index (SOI), on extreme precipitation in Texas. Their respective effects will be determined both for the climate divisions delineated by the National Climatic Data Center (NCDC) and for the climate regions defined by the Köppen Climate Classification System. This study will adopt a weighted correlation approach to obtain robust correlation coefficients while addressing the regionally variable data outliers for extreme precipitation. Further, the variation of the robust correlation coefficients across Texas is found to be related to station elevation, historical average temperature, and total precipitation in the months of extremes. The research will shed light on the relationship between precipitation extremes and climate variability, thus aiding regional water boards in planning, designing, and managing the respective systems as per future climate change.
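
    A minimal sketch of a weighted correlation that down-weights outlying observations so the coefficient stays robust; the weighting scheme and synthetic series are illustrative:

        # Illustrative weighted-correlation sketch (assumed weighting scheme).
        import numpy as np

        def weighted_corr(x, y, w):
            """Weighted Pearson correlation of x and y with non-negative weights w."""
            w = w / w.sum()
            mx, my = np.sum(w * x), np.sum(w * y)
            cov = np.sum(w * (x - mx) * (y - my))
            sx = np.sqrt(np.sum(w * (x - mx) ** 2))
            sy = np.sqrt(np.sum(w * (y - my) ** 2))
            return cov / (sx * sy)

        rng = np.random.default_rng(2)
        oscillation_index = rng.standard_normal(100)     # e.g., a monthly AMO index
        extreme_precip = 0.5 * oscillation_index + rng.standard_normal(100)
        # Example: down-weight points far from the median of the precipitation series
        dev = np.abs(extreme_precip - np.median(extreme_precip))
        weights = 1.0 / (1.0 + dev)
        print(weighted_corr(oscillation_index, extreme_precip, weights))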

  18. Real-Time Classification of Hand Motions Using Ultrasound Imaging of Forearm Muscles.

    PubMed

    Akhlaghi, Nima; Baker, Clayton A; Lahlou, Mohamed; Zafar, Hozaifah; Murthy, Karthik G; Rangwala, Huzefa S; Kosecka, Jana; Joiner, Wilsaan M; Pancrazio, Joseph J; Sikdar, Siddhartha

    2016-08-01

    Surface electromyography (sEMG) has been the predominant method for sensing electrical activity for a number of applications involving muscle-computer interfaces, including myoelectric control of prostheses and rehabilitation robots. Ultrasound imaging for sensing mechanical deformation of functional muscle compartments can overcome several limitations of sEMG, including the inability to differentiate between deep contiguous muscle compartments, low signal-to-noise ratio, and lack of a robust graded signal. The objective of this study was to evaluate the feasibility of real-time graded control using a computationally efficient method to differentiate between complex hand motions based on ultrasound imaging of forearm muscles. Dynamic ultrasound images of the forearm muscles were obtained from six able-bodied volunteers and analyzed to map muscle activity based on the deformation of the contracting muscles during different hand motions. Each participant performed 15 different hand motions, including digit flexion, different grips (i.e., power grasp and pinch grip), and grips in combination with wrist pronation. During the training phase, we generated a database of activity patterns corresponding to different hand motions for each participant. During the testing phase, novel activity patterns were classified using a nearest neighbor classification algorithm based on that database. The average classification accuracy was 91%. Real-time image-based control of a virtual hand showed an average classification accuracy of 92%. Our results demonstrate the feasibility of using ultrasound imaging as a robust muscle-computer interface. Potential clinical applications include control of multiarticulated prosthetic hands, stroke rehabilitation, and fundamental investigations of motor control and biomechanics.
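
    A minimal sketch of the nearest-neighbour classification stage, assuming each ultrasound frame has already been reduced to a flattened activity-pattern vector; the motion count, repetitions, and dimensionality are illustrative:

        # Illustrative nearest-neighbour sketch (synthetic activity patterns).
        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(3)
        n_motions, reps, dim = 15, 10, 64          # 15 hand motions, flattened patterns
        X_train = rng.standard_normal((n_motions * reps, dim))
        y_train = np.repeat(np.arange(n_motions), reps)

        clf = KNeighborsClassifier(n_neighbors=1)  # nearest-neighbour rule, as described
        clf.fit(X_train, y_train)
        novel_pattern = rng.standard_normal((1, dim))
        predicted_motion = clf.predict(novel_pattern)[0]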

  19. A space-based classification system for RF transients

    NASA Astrophysics Data System (ADS)

    Moore, K. R.; Call, D.; Johnson, S.; Payne, T.; Ford, W.; Spencer, K.; Wilkerson, J. F.; Baumgart, C.

    The FORTE (Fast On-Orbit Recording of Transient Events) small satellite is scheduled for launch in mid 1995. The mission is to measure and classify VHF (30-300 MHz) electromagnetic pulses, primarily due to lightning, within a high noise environment dominated by continuous wave carriers such as TV and FM stations. The FORTE Event Classifier will use specialized hardware to implement signal processing and neural network algorithms that perform onboard classification of RF transients and carriers. Lightning events will also be characterized with optical data telemetered to the ground. A primary mission science goal is to develop a comprehensive understanding of the correlation between the optical flash and the VHF emissions from lightning. By combining FORTE measurements with ground measurements and/or active transmitters, other science issues can be addressed. Examples include the correlation of global precipitation rates with lightning flash rates and location, the effects of large scale structures within the ionosphere (such as traveling ionospheric disturbances and horizontal gradients in the total electron content) on the propagation of broad bandwidth RF signals, and various areas of lightning physics. Event classification is a key feature of the FORTE mission. Neural networks are promising candidates for this application. The authors describe the proposed FORTE Event Classifier flight system, which consists of a commercially available digital signal processing board and a custom board, and discuss work on signal processing and neural network algorithms.

  20. Pavement Distress Evaluation Using 3D Depth Information from Stereo Vision

    DOT National Transportation Integrated Search

    2012-07-01

    The focus of the current project funded by MIOH-UTC for the period 9/1/2010-8/31/2011 is to : enhance our earlier effort in providing a more robust image processing based pavement distress : detection and classification system. During the last few de...

  1. Object-Based Land Use Classification of Agricultural Land by Coupling Multi-Temporal Spectral Characteristics and Phenological Events in Germany

    NASA Astrophysics Data System (ADS)

    Knoefel, Patrick; Loew, Fabian; Conrad, Christopher

    2015-04-01

    Crop maps based on classification of remotely sensed data are receiving increased attention in agricultural management. This calls for more detailed knowledge about the reliability of such spatial information. However, classification of agricultural land use is often limited by the high spectral similarity of the studied crop types. Moreover, spatially and temporally varying agro-ecological conditions can introduce confusion in crop mapping. Classification errors in crop maps in turn may influence model outputs, such as agricultural production monitoring. One major goal of the PhenoS project ("Phenological structuring to determine optimal acquisition dates for Sentinel-2 data for field crop classification") is the detection of optimal phenological time windows for land cover classification purposes. Since many crop species are spectrally highly similar, accurate classification requires the right selection of satellite images for a certain classification task. In the course of one growing season, there are phenological phases in which crops are separable with higher accuracy. For this purpose, coupling multi-temporal spectral characteristics with phenological events is promising. The focus of this study is the separation of spectrally similar cereal crops such as winter wheat, barley, and rye at two test sites in Germany, "Harz/Central German Lowland" and "Demmin". This study uses object-based random forest (RF) classification to investigate the impact of image acquisition frequency and timing on crop classification uncertainty by permuting all possible combinations of the available RapidEye time series recorded at the test sites between 2010 and 2014. The permutations were applied for different segmentation parameters. Classification uncertainty was then assessed and analysed based on the probabilistic soft output of the RF algorithm on a per-field basis. From this soft output, entropy was calculated as a spatial measure of classification uncertainty. The results indicate that uncertainty estimates provide a valuable addition to traditional accuracy assessments and help the user to locate errors in crop maps.
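
    A minimal sketch of the uncertainty measure described above: entropy computed from the random forest's probabilistic soft output, here with synthetic per-field features standing in for the RapidEye data:

        # Illustrative entropy-from-soft-output sketch (synthetic data).
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(4)
        X = rng.standard_normal((500, 12))       # per-field multi-temporal features
        y = rng.integers(0, 3, 500)              # wheat / barley / rye (illustrative)

        rf = RandomForestClassifier(n_estimators=200).fit(X, y)
        proba = rf.predict_proba(X)              # probabilistic soft output per field
        entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)   # 0 = fully certain
        uncertain_fields = np.argsort(entropy)[-10:]               # least reliable fields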

  2. Helmet-mounted acoustic array for hostile fire detection and localization in an urban environment

    NASA Astrophysics Data System (ADS)

    Scanlon, Michael V.

    2008-04-01

    The detection and localization of hostile weapons firing has been demonstrated successfully with acoustic sensor arrays on unattended ground sensors (UGS), ground vehicles, and unmanned aerial vehicles (UAVs). Some of the more mature systems have demonstrated significant capabilities and provide direct support to ongoing counter-sniper operations. The Army Research Laboratory (ARL) is conducting research and development for a helmet-mounted system to acoustically detect and localize small arms firing, or other events such as RPGs, mortars, and explosions, as well as other non-transient signatures. Since today's soldier is quickly being asked to take on more and more reconnaissance, surveillance, & target acquisition (RSTA) functions, sensor augmentation enables him to become a mobile and networked sensor node on the complex and dynamic battlefield. Having a body-worn threat detection and localization capability for events that pose an immediate danger to the soldiers around him can significantly enhance their survivability and lethality, as well as enable him to provide and use situational awareness clues on the networked battlefield. This paper addresses some of the difficulties encountered by an acoustic system in an urban environment. Complex reverberation, multipath, diffraction, and signature masking by building structures make this a very harsh environment for robust detection and classification of shockwaves and muzzle blasts. Multifunctional acoustic detection arrays can provide persistent surveillance and enhanced situational awareness for every soldier.

  3. Impact of Compounding Error on Strategies for Subtyping Pathogenic Bacteria

    PubMed Central

    Orfe, Lisa; Davis, Margaret A.; Lafrentz, Stacey; Kang, Min-Su

    2008-01-01

    Comparative-omics will identify a multitude of markers that can be used for intraspecific discrimination between strains of bacteria. It seems intuitive that with this plethora of markers we can construct higher resolution subtyping assays using discrete markers to define strain “barcodes.” Unfortunately, with each new marker added to an assay, overall assay robustness declines because errors are compounded exponentially. For example, the difference in accuracy of strain classification for an assay with 60 markers will change from 99.9% to 54.7% when average probe accuracy declines from 99.999% to 99.0%. To illustrate this effect empirically, we constructed a 19 probe bead-array for subtyping Listeria monocytogenes and showed that despite seemingly reliable individual probe accuracy (>97%), our best classification results at the strain level were <75%. A more robust strategy would use as few markers as possible to achieve strain discrimination. Consequently, we developed two variable number of tandem repeat (VNTR) assays (Vibrio parahaemolyticus and L. monocytogenes) and demonstrate that these assays along with a published assay (Salmonella enterica) produce robust results when products were machine scored. The discriminatory ability with four to seven VNTR loci was comparable to pulsed-field gel electrophoresis. Passage experiments showed some instability with ca. 5% of passaged lines showing evidence for new alleles within 30 days (V. parahaemolyticus and S. enterica). Changes were limited to a single locus and allele so conservative rules can be used to determine strain matching. Most importantly, VNTRs appear robust and portable and can clearly discriminate between strains with relatively few loci thereby limiting effects of compounding error. PMID:18713065
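
    A quick check of the compounding arithmetic quoted above, under the stated independence assumption that overall assay accuracy is the per-probe accuracy raised to the number of markers:

        # Compound accuracy of a 60-marker assay for two per-probe accuracies.
        probes = 60
        for per_probe in (0.99999, 0.99):
            overall = per_probe ** probes
            print(f"per-probe {per_probe:.5f} -> assay {overall:.1%}")
        # 0.99999**60 ~ 99.9% and 0.99**60 ~ 54.7%, matching the figures in the abstract.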

  4. Automatic classification of seismic events within a regional seismograph network

    NASA Astrophysics Data System (ADS)

    Tiira, Timo; Kortström, Jari; Uski, Marja

    2015-04-01

    A fully automatic method for seismic event classification within a sparse regional seismograph network is presented. The tool is based on a supervised pattern recognition technique, the Support Vector Machine (SVM), trained here to distinguish weak local earthquakes from a bulk of human-made or spurious seismic events. The classification rules rely on differences in signal energy distribution between natural and artificial seismic sources. Seismic records are divided into four windows: P, P coda, S, and S coda. For each signal window, the short-term average (STA) is computed in 20 narrow frequency bands between 1 and 41 Hz. The resulting 80 discrimination parameters are used as training data for the SVM. The SVM models are calculated for 19 on-line seismic stations in Finland. The event data are compiled mainly from fully automatic event solutions that are manually classified after the automatic location process. The station-specific SVM training events include 11-302 positive (earthquake) and 227-1048 negative (non-earthquake) examples. The best voting rules for combining results from different stations are determined during an independent testing period. Finally, the network processing rules are applied to an independent evaluation period comprising 4681 fully automatic event determinations, of which 98% have been manually identified as explosions or noise and 2% as earthquakes. The SVM method correctly identifies 94% of the non-earthquakes and all of the earthquakes. The results imply that the SVM tool can identify and filter out blasts and spurious events from fully automatic event solutions with a high level of confidence. The tool helps to reduce the workload in manual seismic analysis by leaving only ~5% of the automatic event determinations, i.e. the probable earthquakes, for more detailed seismological analysis. The approach presented is easy to adjust to the requirements of a denser or wider high-frequency network, once enough training examples for building a station-specific data set are available.
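
    A minimal sketch of the 4-window by 20-band feature extraction described above, where short-term averages of band-passed signal energy form the 80-dimensional SVM input; the window boundaries and filter design are illustrative:

        # Illustrative window/band feature sketch (assumed windows and filters).
        import numpy as np
        from scipy.signal import butter, sosfilt

        def band_sta(signal, fs, lo, hi):
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            return np.mean(np.abs(sosfilt(sos, signal)))

        def event_features(record, fs, windows):
            """windows: four (start, end) sample pairs for P, P coda, S, S coda."""
            edges = np.linspace(1, 41, 21)                 # 20 narrow bands in 1-41 Hz
            feats = [band_sta(record[a:b], fs, edges[i], edges[i + 1])
                     for a, b in windows for i in range(20)]
            return np.array(feats)                         # 80 discrimination parameters

        fs = 100.0
        record = np.random.default_rng(5).standard_normal(6000)
        windows = [(0, 1000), (1000, 2500), (2500, 4000), (4000, 6000)]
        x = event_features(record, fs, windows)            # input vector for the SVM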

  5. Identification of Putative Cardiovascular System Developmental Toxicants using a Classification Model based on Signaling Pathway-Adverse Outcome Pathways

    EPA Science Inventory

    An important challenge for an integrative approach to developmental systems toxicology is associating putative molecular initiating events (MIEs), cell signaling pathways, cell function and modeled fetal exposure kinetics. We have developed a chemical classification model based o...

  6. Early warning, warning or alarm systems for natural hazards? A generic classification.

    NASA Astrophysics Data System (ADS)

    Sättele, Martina; Bründl, Michael; Straub, Daniel

    2013-04-01

    Early warning, warning and alarm systems have gained popularity in recent years as cost-efficient measures for dangerous natural hazard processes such as floods, storms, rock and snow avalanches, debris flows, rock and ice falls, landslides, flash floods, glacier lake outburst floods, forest fires and even earthquakes. These systems can generate information before an event causes loss of property and life. In this way, they mainly mitigate the overall risk by reducing the presence probability of endangered objects. These systems are typically prototypes tailored to specific project needs. Despite their importance, there is no recognised system classification. This contribution classifies warning and alarm systems into three classes: i) threshold systems, ii) expert systems and iii) model-based expert systems. The result is a generic classification that takes into account the characteristics of the natural hazard process itself and the related monitoring possibilities. The choice of the monitoring parameters directly determines the system's lead time. The classification of 52 active systems moreover revealed typical system characteristics for each system class. i) Threshold systems monitor dynamic process parameters of ongoing events (e.g. the water level of a debris flow) and provide only short lead times. They have a local geographical coverage, and a predefined threshold determines whether an alarm is automatically activated to warn endangered objects, authorities and system operators. ii) Expert systems monitor direct changes in the variable disposition (e.g. crack opening before a rock avalanche) or trigger events (e.g. heavy rain) at a local scale before the main event starts and thus offer extended lead times. The final alarm decision incorporates human-, model- and organisation-related factors. iii) Model-based expert systems monitor indirect changes in the variable disposition (e.g. snow temperature, height or solar radiation, which influence the occurrence probability of snow avalanches) or trigger events (e.g. heavy snowfall) to predict spontaneous hazard events in advance. They encompass regional or national measuring networks and satisfy additional demands such as the standardisation of the measuring stations. The developed classification and the characteristics revealed for each class provide valuable input for quantifying the reliability of warning and alarm systems. Importantly, this will facilitate comparing them with well-established standard mitigation measures such as dams, nets and galleries within an integrated risk management approach.

  7. Support Vector Machine Model for Automatic Detection and Classification of Seismic Events

    NASA Astrophysics Data System (ADS)

    Barros, Vesna; Barros, Lucas

    2016-04-01

    The automated processing of multiple seismic signals to detect, localize and classify seismic events is a central tool in both natural hazards monitoring and nuclear treaty verification. However, false detections and missed detections caused by station noise and incorrect classification of arrivals are still an issue, and events are often unclassified or poorly classified. Machine learning techniques can thus be used in automatic processing to classify the huge database of seismic recordings and provide more confidence in the final output. Applied in the context of the International Monitoring System (IMS), a global sensor network developed for the Comprehensive Nuclear-Test-Ban Treaty (CTBT), we propose a fully automatic method for seismic event detection and classification based on a supervised pattern recognition technique called the Support Vector Machine (SVM). According to Kortström et al. (2015), the advantages of using SVM are its ability to handle a large number of features and its effectiveness in high-dimensional spaces. Our objective is to detect seismic events from one IMS seismic station located in an area of high seismicity and mining activity and classify them as earthquakes or quarry blasts. The aim is to create a flexible and easily adjustable SVM method that can be applied to different regions and datasets. Taking this a step further, accurate results for seismic stations could lead to a modification of the model and its parameters to make it applicable to other waveform technologies used to monitor nuclear explosions, such as infrasound and hydroacoustic waveforms. As an authorized user, we have direct access to all IMS data and bulletins through a secure signatory account. A set of significant seismic waveforms containing different types of events (e.g. earthquakes, quarry blasts) and noise is being analysed to train the model and learn the typical pattern of the signal from these events. Moreover, comparing the performance of the support-vector network to various classical learning algorithms previously used in seismic detection and classification is an essential final step to analyze the advantages and disadvantages of the model.
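
    A minimal sketch of SVM training for earthquake versus quarry-blast discrimination, reusing the idea of an 80-dimensional feature vector per event; the data here are synthetic placeholders:

        # Illustrative SVM training sketch (synthetic features; not the authors' code).
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(6)
        X = rng.standard_normal((400, 80))     # 80 discrimination parameters per event
        y = rng.integers(0, 2, 400)            # 0 = quarry blast, 1 = earthquake

        model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
        print(cross_val_score(model, X, y, cv=5).mean())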

  8. Object-based classification of earthquake damage from high-resolution optical imagery using machine learning

    NASA Astrophysics Data System (ADS)

    Bialas, James; Oommen, Thomas; Rebbapragada, Umaa; Levin, Eugene

    2016-07-01

    Object-based approaches in the segmentation and classification of remotely sensed images yield more promising results compared to pixel-based approaches. However, the development of an object-based approach presents challenges in terms of algorithm selection and parameter tuning. Subjective methods are often used, but yield less than optimal results. Objective methods are warranted, especially for rapid deployment in time-sensitive applications, such as earthquake damage assessment. Herein, we used a systematic approach in evaluating object-based image segmentation and machine learning algorithms for the classification of earthquake damage in remotely sensed imagery. We tested a variety of algorithms and parameters on post-event aerial imagery for the 2011 earthquake in Christchurch, New Zealand. Results were compared against manually selected test cases representing different classes. In doing so, we can evaluate the effectiveness of the segmentation and classification of different classes and compare different levels of multistep image segmentations. Our classifier is compared against recent pixel-based and object-based classification studies for post-event imagery of earthquake damage. Our results show an improvement against both pixel-based and object-based methods for classifying earthquake damage in high-resolution, post-event imagery.

  9. An Anomalous Noise Events Detector for Dynamic Road Traffic Noise Mapping in Real-Life Urban and Suburban Environments.

    PubMed

    Socoró, Joan Claudi; Alías, Francesc; Alsina-Pagès, Rosa Ma

    2017-10-12

    One of the main aspects affecting the quality of life of people living in urban and suburban areas is their continued exposure to high Road Traffic Noise (RTN) levels. Until now, noise measurements in cities have been performed by professionals, recording data in certain locations to build a noise map afterwards. However, the deployment of Wireless Acoustic Sensor Networks (WASN) has enabled automatic noise mapping in smart cities. In order to obtain a reliable picture of the RTN levels affecting citizens, Anomalous Noise Events (ANE) unrelated to road traffic should be removed from the noise map computation. To this aim, this paper introduces an Anomalous Noise Event Detector (ANED) designed to differentiate between RTN and ANE in real time within a predefined interval running on the distributed low-cost acoustic sensors of a WASN. The proposed ANED follows a two-class audio event detection and classification approach, instead of multi-class or one-class classification schemes, taking advantage of the collection of representative acoustic data in real-life environments. The experiments conducted within the DYNAMAP project, implemented on ARM-based acoustic sensors, show the feasibility of the proposal both in terms of computational cost and classification performance using standard Mel cepstral coefficients and Gaussian Mixture Models (GMM). The two-class GMM core classifier relatively improves the baseline universal GMM one-class classifier F1 measure by 18.7% and 31.8% for suburban and urban environments, respectively, within the 1-s integration interval. Nevertheless, according to the results, the classification performance of the current ANED implementation still has room for improvement.
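
    A minimal sketch of the two-class GMM core described above: one model for road traffic noise (RTN) and one for anomalous events (ANE), with frames classified by comparing log-likelihoods; MFCC extraction is assumed to happen upstream, and the features here are synthetic stand-ins:

        # Illustrative two-class GMM sketch (synthetic MFCC-like features).
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(7)
        mfcc_rtn = rng.standard_normal((2000, 13))        # training frames, class RTN
        mfcc_ane = rng.standard_normal((500, 13)) + 1.5   # training frames, class ANE

        gmm_rtn = GaussianMixture(n_components=8).fit(mfcc_rtn)
        gmm_ane = GaussianMixture(n_components=8).fit(mfcc_ane)

        frames = rng.standard_normal((100, 13))           # incoming frames in a window
        is_anomalous = gmm_ane.score_samples(frames) > gmm_rtn.score_samples(frames)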

  10. Classifying seismic waveforms from scratch: a case study in the alpine environment

    NASA Astrophysics Data System (ADS)

    Hammer, C.; Ohrnberger, M.; Fäh, D.

    2013-01-01

    Nowadays, an increasing amount of seismic data is collected by daily observatory routines. The basic step for successfully analyzing those data is the correct detection of various event types. However, the visual scanning process is a time-consuming task. Applying standard techniques for detection, like the STA/LTA trigger, still requires manual control for classification. Here, we present a useful alternative. The incoming data stream is scanned automatically for events of interest. A stochastic classifier, called a hidden Markov model, is learned for each class of interest, enabling the recognition of highly variable waveforms. In contrast to other automatic techniques, such as neural networks or support vector machines, the algorithm allows classification to start from scratch as soon as interesting events are identified. Neither the tedious process of collecting training samples nor a time-consuming configuration of the classifier is required. An approach originally introduced for the volcanic task force action makes it possible to learn classifier properties from a single waveform example and some hours of background recording. Besides a reduction of the required workload, this also makes it possible to detect very rare events. The latter feature in particular provides a milestone for the use of seismic devices in alpine warning systems. Furthermore, the system offers the opportunity to flag new signal classes that have not been defined before. We demonstrate the application of the classification system using a data set from the Swiss Seismological Survey, achieving very high recognition rates. In detail, we document all refinements of the classifier, providing a step-by-step guide for the fast set-up of a well-working classification system.
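
    A minimal sketch of HMM-based waveform classification in this spirit, assuming the hmmlearn package and precomputed feature frames; one model per class, scored by log-likelihood (the paper's single-example learning with background statistics is not reproduced here):

        # Illustrative HMM classification sketch (assumed hmmlearn, synthetic frames).
        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        rng = np.random.default_rng(8)
        # One training sequence of feature frames per class (illustrative classes).
        train = {"rockfall": rng.standard_normal((300, 6)),
                 "earthquake": rng.standard_normal((300, 6)) + 1.0}

        models = {name: GaussianHMM(n_components=4).fit(seq)
                  for name, seq in train.items()}

        incoming = rng.standard_normal((120, 6))          # new event, feature frames
        best = max(models, key=lambda name: models[name].score(incoming))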

  11. A tool for determining duration of mortality events in archaeological assemblages using extant ungulate microwear

    PubMed Central

    Rivals, Florent; Prignano, Luce; Semprebon, Gina M.; Lozano, Sergi

    2015-01-01

    The seasonality of human occupations in archaeological sites is highly significant for the study of hominin behavioural ecology, in particular the hunting strategies for their main prey, ungulates. We propose a new tool to quantify such seasonality from tooth microwear patterns in a dataset of ten large samples of extant ungulates resulting from well-known mass mortality events. The tool is based on the combination of two measures of variability of scratch density, namely the standard deviation and the coefficient of variation. The integration of these two measurements of variability permits the classification of each case into one of the following three categories: (1) a short event, (2) a long-continued event and (3) two separate short events. The tool is tested on a selection of eleven fossil samples from five Palaeolithic localities in Western Europe, which show a consistent classification into the three categories. The tool proposed here opens new doors to investigate seasonal patterns of ungulate accumulations in archaeological sites using non-destructive sampling. PMID:26616864

  12. New decision support tool for acute lymphoblastic leukemia classification

    NASA Astrophysics Data System (ADS)

    Madhukar, Monica; Agaian, Sos; Chronopoulos, Anthony T.

    2012-03-01

    In this paper, we build a new decision support tool to improve treatment intensity choice in childhood ALL. The developed system includes different methods to accurately measure cell properties in microscope blood film images. The blood images are subjected to a series of pre-processing steps, which include color correlation and contrast enhancement. By performing K-means clustering on the resultant images, the nuclei of the cells under consideration are obtained. Shape features and texture features are then extracted for classification. The system is further tested on the classification of spectra measured from the cell nuclei in blood samples in order to distinguish normal cells from those affected by Acute Lymphoblastic Leukemia. The results show that the proposed system robustly segments and classifies acute lymphoblastic leukemia based on complete microscopic blood images.
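
    A minimal sketch of the K-means clustering step used to isolate nuclei, where the cluster count and the darkest-cluster heuristic are illustrative assumptions rather than the paper's exact procedure:

        # Illustrative K-means colour clustering sketch (synthetic image).
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(9)
        image = rng.random((64, 64, 3))                   # stand-in for a blood smear
        pixels = image.reshape(-1, 3)

        km = KMeans(n_clusters=3, n_init=10).fit(pixels)
        nuclei_cluster = np.argmin(km.cluster_centers_.sum(axis=1))   # darkest centre
        mask = (km.labels_ == nuclei_cluster).reshape(image.shape[:2])
        # `mask` marks candidate nuclei pixels for shape/texture feature extraction.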

  13. Nonlinear features for product inspection

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1999-03-01

    Classification of real-time X-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items (pistachio nuts). This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k nearest neighbor classifier. In this work, the MRDF is applied to standard features (rather than iconic data). The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC (receiver operating characteristic) data.

  14. Maximum-likelihood techniques for joint segmentation-classification of multispectral chromosome images.

    PubMed

    Schwartzkopf, Wade C; Bovik, Alan C; Evans, Brian L

    2005-12-01

    Traditional chromosome imaging has been limited to grayscale images, but recently a 5-fluorophore combinatorial labeling technique (M-FISH) was developed wherein each class of chromosomes binds with a different combination of fluorophores. This results in a multispectral image, where each class of chromosomes has distinct spectral components. In this paper, we develop new methods for automatic chromosome identification by exploiting the multispectral information in M-FISH chromosome images and by jointly performing chromosome segmentation and classification. We (1) develop a maximum-likelihood hypothesis test that uses multispectral information, together with conventional criteria, to select the best segmentation possibility; (2) use this likelihood function to combine chromosome segmentation and classification into a robust chromosome identification system; and (3) show that the proposed likelihood function can also be used as a reliable indicator of errors in segmentation, errors in classification, and chromosome anomalies, which can be indicators of radiation damage, cancer, and a wide variety of inherited diseases. We show that the proposed multispectral joint segmentation-classification method outperforms past grayscale segmentation methods when decomposing touching chromosomes. We also show that it outperforms past M-FISH classification techniques that do not use segmentation information.

  15. Beluga whale (Delphinapterus leucas) vocalizations and call classification from the eastern Beaufort Sea population.

    PubMed

    Garland, Ellen C; Castellote, Manuel; Berchok, Catherine L

    2015-06-01

    Beluga whales, Delphinapterus leucas, have a graded call system; call types exist on a continuum, making classification challenging. A description of vocalizations from the eastern Beaufort Sea beluga population during its spring migration is presented here, using both a non-parametric classification tree analysis (CART) and a Random Forest analysis. Twelve frequency and duration measurements were made on 1019 calls recorded over 14 days off Icy Cape, Alaska, resulting in 34 identifiable call types with 83% agreement in classification for both CART and Random Forest analyses. This high level of agreement in classification, with an initial subjective classification of calls into 36 categories, demonstrates that the methods applied here provide a quantitative analysis of a graded call dataset. Further, as calls cannot be attributed to individuals using single-sensor passive acoustic monitoring efforts, these methods provide a comprehensive analysis of data where the influence of pseudo-replication of calls from individuals is unknown. This study is the first to describe the vocal repertoire of a beluga population using a robust and repeatable methodology. A baseline eastern Beaufort Sea beluga population repertoire is presented here, against which the call repertoires of other seasonally sympatric Alaskan beluga populations can be compared.

  16. Ground-based cloud classification by learning stable local binary patterns

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Shi, Cunzhao; Wang, Chunheng; Xiao, Baihua

    2018-07-01

    Feature selection and extraction is the first step in implementing pattern classification; the same is true for ground-based cloud classification. Histogram features based on local binary patterns (LBPs) are widely used to classify texture images. However, the conventional uniform LBP approach cannot capture all the dominant patterns in cloud texture images, resulting in low classification performance. In this study, a robust feature extraction method based on learning stable LBPs is proposed, using the averaged ranks of the occurrence frequencies of all rotation-invariant patterns defined in the LBPs of cloud images. The proposed method is validated on a ground-based cloud classification database comprising five cloud types. Experimental results demonstrate that the proposed method achieves significantly higher classification accuracy than the uniform LBP, local texture patterns (LTP), dominant LBP (DLBP), completed LBP (CLBP) and salient LBP (SaLBP) methods on this cloud image database and under different noise conditions. The performance of the proposed method is comparable with that of the popular deep convolutional neural network (DCNN) method, but with lower computational complexity. Furthermore, the proposed method also achieves superior performance on an independent test data set.
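
    A minimal sketch of selecting stable patterns by the averaged rank of occurrence frequencies across training images, assuming scikit-image's LBP implementation; the rank cut-off and data are illustrative:

        # Illustrative stable-LBP selection sketch (synthetic images).
        import numpy as np
        from skimage.feature import local_binary_pattern

        rng = np.random.default_rng(10)
        images = (rng.random((20, 64, 64)) * 255).astype(np.uint8)   # stand-in clouds
        P, R = 8, 1

        # Histogram of all rotation-invariant patterns ('ror') per image
        hists = []
        for img in images:
            lbp = local_binary_pattern(img, P, R, method="ror")
            h, _ = np.histogram(lbp, bins=np.arange(2 ** P + 1), density=True)
            hists.append(h)
        hists = np.array(hists)                           # (n_images, 256)

        # Rank patterns by frequency in each image, average ranks across images,
        # and keep the most consistently frequent ("stable") patterns as feature bins.
        ranks = np.argsort(np.argsort(-hists, axis=1), axis=1)   # 0 = most frequent
        avg_rank = ranks.mean(axis=0)
        stable_bins = np.argsort(avg_rank)[:16]                   # keep top 16 patterns
        features = hists[:, stable_bins]                          # per-image features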

  17. The Cluster Sensitivity Index: A Basic Measure of Classification Robustness

    ERIC Educational Resources Information Center

    Hom, Willard C.

    2010-01-01

    Analysts of institutional performance have occasionally used a peer grouping approach in which they compared institutions only to other institutions with similar characteristics. Because analysts historically have used cluster analysis to define peer groups (i.e., the group of comparable institutions), the author proposes and demonstrates with…

  18. Feature selection methods for object-based classification of sub-decimeter resolution digital aerial imagery

    USDA-ARS?s Scientific Manuscript database

    Due to the availability of numerous spectral, spatial, and contextual features, the determination of optimal features and class separabilities can be a time consuming process in object-based image analysis (OBIA). While several feature selection methods have been developed to assist OBIA, a robust c...

  19. A complete solution classification and unified algorithmic treatment for the one- and two-step asymmetric S-transverse mass event scale statistic

    NASA Astrophysics Data System (ADS)

    Walker, Joel W.

    2014-08-01

    The MT2, or "s-transverse mass", statistic was developed to associate a parent mass scale to a missing transverse energy signature, given that escaping particles are generally expected in pairs, while collider experiments are sensitive to just a single transverse momentum vector sum. This document focuses on the generalized extension of that statistic to asymmetric one- and two-step decay chains, with arbitrary child particle masses and upstream missing transverse momentum. It provides a unified theoretical formulation, complete solution classification, taxonomy of critical points, and technical algorithmic prescription for treatment of the event scale. An implementation of the described algorithm is available for download, and is also a deployable component of the author's selection cut software package AEACuS (Algorithmic Event Arbiter and Cut Selector). Appendices address combinatoric event assembly, algorithm validation, and a complete pseudocode.
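
    For orientation, a standard symmetric form of the statistic from the literature (assumed notation; the paper's generalized asymmetric formulation extends this with distinct child masses and upstream momentum):

        % Standard MT2 definition (literature convention, not quoted from the paper)
        M_{T2}^2 = \min_{\vec{q}_{T}^{\,(1)} + \vec{q}_{T}^{\,(2)} = \vec{p}_{T}^{\,\mathrm{miss}}}
                   \max\left\{ M_T^2\!\left(p^{(1)}, q^{(1)}\right),\; M_T^2\!\left(p^{(2)}, q^{(2)}\right) \right\},
        \qquad
        M_T^2(p, q) = m_p^2 + m_q^2 + 2\left( E_T^{p} E_T^{q} - \vec{p}_T \cdot \vec{q}_T \right).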

  20. PCA based feature reduction to improve the accuracy of decision tree c4.5 classification

    NASA Astrophysics Data System (ADS)

    Nasution, M. Z. F.; Sitompul, O. S.; Ramli, M.

    2018-03-01

    Attribute splitting is a major process in Decision Tree C4.5 classification. However, this process does not remove irrelevant features when establishing the decision tree. This leads to a major problem in the decision tree classification process, called over-fitting, resulting from noisy data and irrelevant features. In turn, over-fitting creates misclassification and data imbalance. Many algorithms have been proposed to overcome misclassification and over-fitting in Decision Tree C4.5 classification. Feature reduction is one of the important issues in classification modelling; it is intended to remove irrelevant data in order to improve accuracy. The feature reduction framework is used to simplify high-dimensional data to low-dimensional data with non-correlated attributes. In this research, we propose a framework for selecting relevant and non-correlated feature subsets. We consider principal component analysis (PCA) for feature reduction, performing non-correlated feature selection, and the Decision Tree C4.5 algorithm for classification. From experiments conducted using the UCI cervical cancer data set, with 858 instances and 36 attributes, we evaluated the performance of our framework based on accuracy, specificity and precision. Experimental results show that our proposed framework enhances classification accuracy, reaching an accuracy rate of 90.70%.
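
    A minimal sketch of the PCA-then-decision-tree pipeline; note that scikit-learn's tree is CART rather than C4.5, so it stands in for the classifier here, and the data are synthetic placeholders matching the quoted 858x36 shape:

        # Illustrative PCA + decision-tree sketch (CART as a stand-in for C4.5).
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(11)
        X = rng.standard_normal((858, 36))     # placeholder for the cervical cancer set
        y = rng.integers(0, 2, 858)

        pipe = make_pipeline(PCA(n_components=10),
                             DecisionTreeClassifier(criterion="entropy"))
        print(cross_val_score(pipe, X, y, cv=5).mean())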

  1. Rough set soft computing cancer classification and network: one stone, two birds.

    PubMed

    Zhang, Yue

    2010-07-15

    Gene expression profiling provides tremendous information to help unravel the complexity of cancer. The selection of the most informative genes from huge noise for cancer classification has taken centre stage, along with predicting the function of such identified genes and the construction of direct gene regulatory networks at different system levels with a tuneable parameter. A new study by Wang and Gotoh described a novel Variable Precision Rough Sets-rooted robust soft computing method to successfully address these problems and has yielded some new insights. The significance of this progress and its perspectives will be discussed in this article.

  2. Comparison of remote sensing image processing techniques to identify tornado damage areas from Landsat TM data

    USGS Publications Warehouse

    Myint, S.W.; Yuan, M.; Cerveny, R.S.; Giri, C.P.

    2008-01-01

    Remote sensing techniques have been shown effective for large-scale damage surveys after a hazardous event, in both near real-time and post-event analyses. This paper aims to compare the accuracy of common image processing techniques for detecting tornado damage tracks from Landsat TM data. We employed the direct change detection approach, using two sets of images acquired before and after the tornado event to produce principal component composite images and a set of image difference bands. Techniques in the comparison include supervised classification, unsupervised classification, and an object-oriented classification approach with a nearest neighbor classifier. Accuracy assessment is based on the Kappa coefficient calculated from error matrices, which cross-tabulate correctly identified cells on the TM image and commission and omission errors in the result. Overall, the object-oriented approach exhibits the highest degree of accuracy in tornado damage detection. PCA and image differencing methods show comparable outcomes. While selected PCs can improve detection accuracy by 5 to 10%, the object-oriented approach performs significantly better, with 15-20% higher accuracy than the other two techniques. © 2008 by MDPI.
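
    A minimal sketch of the Kappa coefficient computed from an error (confusion) matrix, as used in the accuracy assessment above; the matrix values are illustrative:

        # Illustrative Kappa computation (synthetic error matrix).
        import numpy as np

        def kappa(confusion):
            n = confusion.sum()
            po = np.trace(confusion) / n                               # observed agreement
            pe = (confusion.sum(0) * confusion.sum(1)).sum() / n ** 2  # chance agreement
            return (po - pe) / (1 - pe)

        cm = np.array([[120, 10],     # rows: reference damage / no-damage
                       [15, 155]])    # cols: classified damage / no-damage
        print(f"kappa = {kappa(cm):.2f}")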

  3. Comparison of Remote Sensing Image Processing Techniques to Identify Tornado Damage Areas from Landsat TM Data

    PubMed Central

    Myint, Soe W.; Yuan, May; Cerveny, Randall S.; Giri, Chandra P.

    2008-01-01

    Remote sensing techniques have been shown effective for large-scale damage surveys after a hazardous event, in both near real-time and post-event analyses. This paper aims to compare the accuracy of common image processing techniques for detecting tornado damage tracks from Landsat TM data. We employed the direct change detection approach, using two sets of images acquired before and after the tornado event to produce principal component composite images and a set of image difference bands. Techniques in the comparison include supervised classification, unsupervised classification, and an object-oriented classification approach with a nearest neighbor classifier. Accuracy assessment is based on the Kappa coefficient calculated from error matrices, which cross-tabulate correctly identified cells on the TM image and commission and omission errors in the result. Overall, the object-oriented approach exhibits the highest degree of accuracy in tornado damage detection. PCA and image differencing methods show comparable outcomes. While selected PCs can improve detection accuracy by 5 to 10%, the object-oriented approach performs significantly better, with 15-20% higher accuracy than the other two techniques. PMID:27879757

  4. Expressions of Different-Trajectory Caused Motion Events in Chinese

    ERIC Educational Resources Information Center

    Paul, Jing Z.

    2013-01-01

    We perform motion events in all aspects of our daily life, from walking home to jumping into a pool, from throwing a frisbee to pushing a shopping cart. The fact that languages may encode such motion events in different fashions has raised intriguing questions regarding the typological classifications of natural languages in relation to…

  5. Yarn-dyed fabric defect classification based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Jing, Junfeng; Dong, Amei; Li, Pengfei

    2017-07-01

    Considering that manual inspection of yarn-dyed fabric can be time-consuming and inefficient, a convolutional neural network (CNN) solution based on a modified AlexNet structure is proposed for the classification of yarn-dyed fabric defects. CNNs have a powerful ability for feature extraction and feature fusion, simulating the learning mechanism of the human brain. In order to enhance computational efficiency and detection accuracy, the local response normalization (LRN) layers in AlexNet are replaced by batch normalization (BN) layers. In the process of network training, the characteristics of the image are extracted step by step through several convolution operations, and the essential features of the image can be obtained from the edge features. Max pooling layers, dropout layers and fully connected layers are also employed in the classification model to reduce the computation cost and acquire more precise features of fabric defects. Finally, the results of the defect classification are predicted by the softmax function. The experimental results demonstrate the capability of the modified AlexNet model for defect classification and indicate its robustness.
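
    A minimal sketch of the stated modification, AlexNet-style convolution blocks with batch normalization in place of LRN, written in PyTorch; the channel sizes and the four-class head are illustrative, not the paper's exact configuration:

        # Illustrative Conv-BN-ReLU sketch (assumed channel sizes and class count).
        import torch
        import torch.nn as nn

        class ConvBN(nn.Sequential):
            def __init__(self, c_in, c_out, **kw):
                super().__init__(
                    nn.Conv2d(c_in, c_out, **kw),
                    nn.BatchNorm2d(c_out),          # replaces AlexNet's LRN layer
                    nn.ReLU(inplace=True),
                )

        model = nn.Sequential(
            ConvBN(3, 64, kernel_size=11, stride=4, padding=2),
            nn.MaxPool2d(3, stride=2),
            ConvBN(64, 192, kernel_size=5, padding=2),
            nn.MaxPool2d(3, stride=2),
            ConvBN(192, 256, kernel_size=3, padding=1),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Dropout(0.5),
            nn.Linear(256, 4),                      # 4 defect classes (illustrative)
        )
        logits = model(torch.randn(1, 3, 224, 224))
        probs = logits.softmax(dim=1)               # softmax prediction, as described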

  6. High resolution mapping and classification of oyster habitats in nearshore Louisiana using sidescan sonar

    USGS Publications Warehouse

    Allen, Y.C.; Wilson, C.A.; Roberts, H.H.; Supan, J.

    2005-01-01

    Sidescan sonar holds great promise as a tool to quantitatively depict the distribution and extent of benthic habitats in Louisiana's turbid estuaries. In this study, we describe an effective protocol for acoustic sampling in this environment. We also compared three methods of classification in detail: mean-based thresholding, supervised, and unsupervised techniques to classify sidescan imagery into categories of mud and shell. Classification results were compared to ground truth results using quadrat and dredge sampling. Supervised classification gave the best overall result (kappa = 75%) when compared to quadrat results. Classification accuracy was less robust when compared to all dredge samples (kappa = 21-56%), but increased greatly (90-100%) when only dredge samples taken from acoustically homogeneous areas were considered. Sidescan sonar when combined with ground truth sampling at an appropriate scale can be effectively used to establish an accurate substrate base map for both research applications and shellfish management. The sidescan imagery presented here also provides, for the first time, a detailed presentation of oyster habitat patchiness and scale in a productive oyster growing area.

  7. Classification of complex networks based on similarity of topological network features

    NASA Astrophysics Data System (ADS)

    Attar, Niousha; Aliakbary, Sadegh

    2017-09-01

    Over the past few decades, networks have been widely used to model real-world phenomena. Real-world networks exhibit nontrivial topological characteristics and therefore, many network models are proposed in the literature for generating graphs that are similar to real networks. Network models reproduce nontrivial properties such as long-tail degree distributions or high clustering coefficients. In this context, we encounter the problem of selecting the network model that best fits a given real-world network. The need for a model selection method reveals the network classification problem, in which a target-network is classified into one of the candidate network models. In this paper, we propose a novel network classification method which is independent of the network size and employs an alignment-free metric of network comparison. The proposed method is based on supervised machine learning algorithms and utilizes the topological similarities of networks for the classification task. The experiments show that the proposed method outperforms state-of-the-art methods with respect to classification accuracy, time efficiency, and robustness to noise.
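
    A minimal sketch of size-independent classification from topological features, assuming networkx; the feature set, generative models, and classifier are illustrative choices:

        # Illustrative topological-feature classification sketch (synthetic graphs).
        import numpy as np
        import networkx as nx
        from sklearn.ensemble import RandomForestClassifier

        def topo_features(g):
            degs = np.array([d for _, d in g.degree()])
            return [nx.average_clustering(g), nx.density(g),
                    degs.mean(), degs.std() / (degs.mean() + 1e-12)]

        # Training graphs drawn from two candidate generative models
        graphs, labels = [], []
        for _ in range(30):
            graphs.append(nx.erdos_renyi_graph(200, 0.05)); labels.append(0)
            graphs.append(nx.barabasi_albert_graph(200, 5)); labels.append(1)

        X = np.array([topo_features(g) for g in graphs])
        clf = RandomForestClassifier().fit(X, labels)
        target = nx.barabasi_albert_graph(400, 5)   # a different size on purpose
        print(clf.predict(np.array([topo_features(target)])))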

  8. [New molecular classification of colorectal cancer, pancreatic cancer and stomach cancer: Towards "à la carte" treatment?].

    PubMed

    Dreyer, Chantal; Afchain, Pauline; Trouilloud, Isabelle; André, Thierry

    2016-01-01

    This review reports 3 of recently published molecular classifications of the 3 main gastro-intestinal cancers: gastric, pancreatic and colorectal adenocarcinoma. In colorectal adenocarcinoma, 6 independent classifications were combined to finally hold 4 molecular sub-groups, Consensus Molecular Subtypes (CMS 1-4), linked to various clinical, molecular and survival data. CMS1 (14% MSI with immune activation); CMS2 (37%: canonical with epithelial differentiation and activation of the WNT/MYC pathway); CMS3 (13% metabolic with epithelial differentiation and RAS mutation); CMS4 (23%: mesenchymal with activation of TGFβ pathway and angiogenesis with stromal invasion). In gastric adenocarcinoma, 4 groups were established: subtype "EBV" (9%, high frequency of PIK3CA mutations, hypermetylation and amplification of JAK2, PD-L1 and PD-L2), subtype "MSI" (22%, high rate of mutation), subtype "genomically stable tumor" (20%, diffuse histology type and mutations of RAS and genes encoding integrins and adhesion proteins including CDH1) and subtype "tumors with chromosomal instability" (50%, intestinal type, aneuploidy and receptor tyrosine kinase amplification). In pancreatic adenocarcinomas, a classification in four sub-groups has been proposed, stable subtype (20%, aneuploidy), locally rearranged subtype (30%, focal event on one or two chromosoms), scattered subtype (36%,<200 structural variation events), and unstable subtype (14%,>200 structural variation events, defects in DNA maintenance). Although currently away from the care of patients, these classifications open the way to "à la carte" treatment depending on molecular biology. Copyright © 2016 Société Française du Cancer. Published by Elsevier Masson SAS. All rights reserved.

  9. Automatic Detection and Classification of Unsafe Events During Power Wheelchair Use.

    PubMed

    Pineau, Joelle; Moghaddam, Athena K; Yuen, Hiu Kim; Archambault, Philippe S; Routhier, François; Michaud, François; Boissy, Patrick

    2014-01-01

    Using a powered wheelchair (PW) is a complex task requiring advanced perceptual and motor control skills. Unfortunately, PW incidents and accidents are not uncommon and their consequences can be serious. The objective of this paper is to develop technological tools that can be used to characterize a wheelchair user's driving behavior under various settings. In the experiments conducted, PWs were outfitted with a datalogging platform that records, in real time, the 3-D acceleration of the PW. Data collection was conducted over 35 different activities, designed to capture a spectrum of PW driving events performed at different speeds (collisions with fixed or moving objects, rolling on an inclined plane, and rolling across multiple types of obstacles). The data were processed using time-series analysis and data mining techniques to automatically detect and identify the different events. We compared the classification accuracy using four different types of time-series features: 1) time-delay embeddings; 2) time-domain characterization; 3) frequency-domain features; and 4) wavelet transforms. In the analysis, we compared the classification accuracy obtained when distinguishing between safe and unsafe events during each of the 35 different activities. For the purposes of this study, unsafe events were defined as activities containing collisions against objects at different speeds, and the remainder were defined as safe events. We were able to accurately detect 98% of unsafe events, with a low (12%) false positive rate, using only five examples of each activity. This proof-of-concept study shows that the proposed approach has the potential of capturing, based on limited input from embedded sensors, contextual information on PW use, and of automatically characterizing a user's PW driving behavior.
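
    A minimal sketch of one of the four feature types, frequency-domain band energies, extracted from 3-axis acceleration windows and fed to a safe/unsafe classifier; the window length, band count, and classifier are illustrative:

        # Illustrative frequency-domain feature sketch (synthetic acceleration data).
        import numpy as np
        from sklearn.svm import SVC

        def accel_features(window, n_bands=8):
            """window: (n_samples, 3) acceleration; returns per-axis band energies."""
            spec = np.abs(np.fft.rfft(window, axis=0)) ** 2
            bands = np.array_split(spec, n_bands, axis=0)
            return np.concatenate([b.sum(axis=0) for b in bands])   # 3 * n_bands values

        rng = np.random.default_rng(12)
        windows = rng.standard_normal((300, 200, 3))   # 300 two-second windows @ 100 Hz
        y = rng.integers(0, 2, 300)                    # 0 = safe, 1 = unsafe (collision)

        X = np.array([accel_features(w) for w in windows])
        clf = SVC().fit(X, y)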

  10. Coefficient of variation for use in crop area classification across multiple climates

    NASA Astrophysics Data System (ADS)

    Whelen, Tracy; Siqueira, Paul

    2018-05-01

    In this study, the coefficient of variation (CV) is introduced as a unitless statistical measurement for the classification of croplands using synthetic aperture radar (SAR) data. As a measurement of change, the CV is able to capture the changing backscatter responses caused by cycles of planting, growing, and harvesting, and thus is able to differentiate these areas from more static forest or urban areas. Pixels with CV values above a given threshold are classified as crops, and those below the threshold as non-crops. This paper uses cross-polarized L-band SAR data from the ALOS PALSAR satellite to classify eleven regions across the United States, covering a wide range of major crops and climates. Two separate sets of classifications were done: the first targeting the optimum classification threshold for each dataset, and the second using a generalized threshold for all datasets to simulate a large-scale operationalized situation. Overall accuracies ranged from 66% to 81% for the first phase of classification and from 62% to 84% for the second. Visual inspection of the results shows numerous possibilities for improving the classifications while still using the same classification method, including increasing the number and temporal frequency of input images in order to better capture phenological events and mitigate the effects of major precipitation events, as well as more accurate ground truth data. These improvements would make the CV method a viable tool for monitoring agriculture throughout the year on a global scale.
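
    The CV rule itself is compact: a per-pixel temporal statistic followed by a threshold. Below is a minimal NumPy sketch under assumed inputs (a stack of co-registered backscatter images; the 0.5 threshold is arbitrary, whereas the study tunes thresholds per dataset):

      import numpy as np

      def classify_crops_cv(backscatter_stack, threshold=0.5):
          """Label pixels as crop (True) where the temporal coefficient of
          variation (std/mean) of backscatter exceeds a threshold."""
          mean = backscatter_stack.mean(axis=0)
          std = backscatter_stack.std(axis=0)
          # Guard against division by zero in empty pixels.
          cv = np.divide(std, mean, out=np.zeros_like(std), where=mean > 0)
          return cv > threshold

      # Synthetic example: 12 acquisitions of a 100 x 100 scene.
      stack = np.random.gamma(shape=2.0, scale=0.1, size=(12, 100, 100))
      crop_mask = classify_crops_cv(stack)
      print(f"Fraction classified as crop: {crop_mask.mean():.2f}")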

  11. Galaxy Mass Assembly with VLT & HST and lessons for E-ELT/MOSAIC

    NASA Astrophysics Data System (ADS)

    Hammer, François; Flores, Hector; Puech, Mathieu

    2015-02-01

    The fraction of distant disks and mergers is still debated, while 3D spectroscopy is revolutionizing the field. However, its limited spatial resolution requires complementary HST imagery and a robust analysis procedure. When applied to observations of IMAGES galaxies at z = 0.4-0.8, this approach reveals that half of the spiral progenitors were in a merger phase 6 billion years ago. The excellent correspondence between methodologically based classifications of morphologies and kinematics definitively demonstrates a violent origin of disk galaxies, as proposed by Hammer et al. (2005). Examination of nearby galaxy outskirts reveals fossil imprints of such ancient merger events in the form of well-organized stellar streams. Perhaps our neighbor, M31, is the best illustration of an ancient merger, whose modeling in 2010 led to the prediction of the gigantic plane of satellites discovered by Ibata et al. (2013). There are still many discoveries to be made before the ELT era, which will open an avenue for detailed and accurate 3D spectroscopy of galaxies from the earliest epochs to the present.

  12. Robustness of assembly supply chain networks by considering risk propagation and cascading failure

    NASA Astrophysics Data System (ADS)

    Tang, Liang; Jing, Ke; He, Jie; Stanley, H. Eugene

    2016-10-01

    An assembly supply chain network (ASCN) is composed of manufacturers located in different geographical regions. To analyze the robustness of an ASCN when it suffers catastrophic disruption events, we construct a cascading failure model of risk propagation. In our model, different disruption scenarios s are considered and a probability equation over all disruption scenarios is developed. Using production capability loss as the robustness index (RI) of an ASCN, we conduct a numerical simulation to assess its robustness. Through simulation, we compare the network robustness at different values of linking intensity and node threshold and find that weak linking intensity or a high node threshold increases the robustness of the ASCN. We also compare network robustness levels under different disruption scenarios.
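
    As an illustration of the ingredients named above (node thresholds, failure propagation, a capability-loss robustness index), the following sketch simulates a generic threshold-driven cascade on a random directed network; it is a stand-in under assumed parameters, not a reproduction of the authors' model:

      import networkx as nx

      def cascade(graph, seed_nodes, node_threshold=0.5):
          """Fail the seed nodes, then iteratively fail any node whose
          fraction of failed suppliers (in-neighbors) meets its threshold."""
          failed = set(seed_nodes)
          changed = True
          while changed:
              changed = False
              for node in graph.nodes:
                  if node in failed:
                      continue
                  preds = list(graph.predecessors(node))
                  if preds and sum(p in failed for p in preds) / len(preds) >= node_threshold:
                      failed.add(node)
                      changed = True
          return failed

      g = nx.gnp_random_graph(100, 0.05, seed=1, directed=True)
      failed = cascade(g, seed_nodes=[0, 1, 2], node_threshold=0.4)
      # Robustness index proxied here by the surviving fraction of nodes.
      print(f"RI = {1 - len(failed) / g.number_of_nodes():.2f}")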

  13. Multistation alarm system for eruptive activity based on the automatic classification of volcanic tremor: specifications and performance

    NASA Astrophysics Data System (ADS)

    Langer, Horst; Falsaperla, Susanna; Messina, Alfio; Spampinato, Salvatore

    2015-04-01

    With over fifty eruptive episodes (Strombolian activity, lava fountains, and lava flows) between 2006 and 2013, Mt Etna, Italy, underscored its role as the most active volcano in Europe. Seven paroxysmal lava fountains at the South East Crater occurred in 2007-2008 and 46 at the New South East Crater between 2011 and 2013. Months-long lava emissions affected the upper eastern flank of the volcano in 2006 and 2008-2009. Against this background, effective monitoring and forecasting of volcanic phenomena are a first-order issue given their potential socio-economic impact in a densely populated region like the town of Catania and its surroundings. For example, explosive activity has often formed thick ash clouds with widespread tephra fall able to disrupt air traffic, as well as to cause severe problems for infrastructure such as highways and roads. For timely information on changes in the state of the volcano and the possible onset of dangerous eruptive phenomena, the analysis of the continuous background seismic signal, the so-called volcanic tremor, turned out to be of paramount importance. Changes in the state of the volcano, as well as in its eruptive style, are usually concurrent with variations of the spectral characteristics (amplitude and frequency content) of tremor. The huge amount of digital data continuously acquired by INGV's broadband seismic stations every day makes manual analysis difficult, so techniques for automatic classification of the tremor signal are applied. The application of unsupervised classification techniques to the tremor data revealed significant changes well before the onset of the eruptive episodes. This evidence led to the development of specific software packages for real-time processing of the tremor data. The operational characteristics of these tools - fail-safe operation, robustness with respect to noise and data outages, as well as computational efficiency - allowed the identification of criteria for automatic alarm flagging. The system is hitherto one of the main automatic alerting tools for identifying impending eruptive events at Etna. The currently operating software, named KKAnalysis, is applied to the data stream continuously recorded at two seismic stations. The data are merged with reference datasets of past eruptive episodes so that the results of pattern classification can be immediately compared to previous eruptive scenarios. Given the rich material collected in recent years, here we propose the application of the alert system to a wider range of stations (up to a total of eleven) at different elevations (1200-3050 m) and distances (1-8 km) from the summit craters. Critical alert parameters were empirically defined to obtain an optimal tuning of the alert system for each station. To verify the robustness of this new, multistation alert system, a dataset encompassing about eight years of continuous seismic records (since 2006) was processed offline using KKAnalysis and companion software. Then, we analyzed the performance of the classifier in terms of timing and spatial distribution of the stations.

  14. Net reclassification index at event rate: properties and relationships.

    PubMed

    Pencina, Michael J; Steyerberg, Ewout W; D'Agostino, Ralph B

    2017-12-10

    The net reclassification improvement (NRI) is an attractively simple summary measure quantifying the improvement in performance due to the addition of new risk marker(s) to a prediction model. Originally proposed for settings with well-established classification thresholds, it was quickly extended to applications with no thresholds in common use. Here we aim to explore properties of the NRI at event rate. We express this NRI as a difference in performance measures for the new versus old model and show that the quantity underlying this difference is related to several global as well as decision-analytic measures of model performance. It maximizes the relative utility (standardized net benefit) across all classification thresholds and can be viewed as the Kolmogorov-Smirnov distance between the distributions of risk among events and non-events. It can be expressed as a special case of the continuous NRI, measuring reclassification from the 'null' model with no predictors. It is also a criterion based on the value of information and quantifies the reduction in expected regret for a given regret function, casting the NRI at event rate as a measure of incremental reduction in expected regret. More generally, we find it informative to present plots of standardized net benefit/relative utility for the new versus old model across the domain of classification thresholds. These plots can then be summarized by their maximum values, and the increment in model performance can be described by the NRI at event rate. We provide theoretical examples and a clinical application on the evaluation of prognostic biomarkers for atrial fibrillation. Copyright © 2016 John Wiley & Sons, Ltd.
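
    A small numerical sketch of the relationships described above, with synthetic risk scores standing in for real models: the NRI at event rate is the difference in TPR - FPR evaluated at the event-rate threshold, and the single-model summary underlying it is the KS distance, i.e. the maximum of TPR - FPR over all thresholds:

      import numpy as np
      from scipy.stats import ks_2samp

      rng = np.random.default_rng(0)
      y = rng.binomial(1, 0.2, size=5000)  # outcomes; event rate ~0.2
      risk_old = np.clip(0.2 + 0.2 * (y - 0.2) + rng.normal(0, 0.15, y.size), 0, 1)
      risk_new = np.clip(0.2 + 0.4 * (y - 0.2) + rng.normal(0, 0.15, y.size), 0, 1)

      def tpr_minus_fpr(risk, y, threshold):
          pos, neg = risk[y == 1], risk[y == 0]
          return (pos > threshold).mean() - (neg > threshold).mean()

      event_rate = y.mean()
      nri = (tpr_minus_fpr(risk_new, y, event_rate)
             - tpr_minus_fpr(risk_old, y, event_rate))
      print(f"NRI at event rate: {nri:.3f}")

      # KS distance between risks among events and non-events (new model).
      ks_new = ks_2samp(risk_new[y == 1], risk_new[y == 0]).statistic
      print(f"KS distance of new model: {ks_new:.3f}")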

  15. Separation of Benign and Malicious Network Events for Accurate Malware Family Classification

    DTIC Science & Technology

    2015-09-28

    use Kullback-Leibler (KL) divergence [15] to measure the information ...related work in an important aspect concerning the order of events. We use n-grams to capture the order of events, which exposes richer information about... DISCUSSION Using n-grams on higher-level network events helps understand the underlying operation of the malware, and provides a good feature set
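
    The two ingredients this fragment names, n-gram statistics over event sequences and KL divergence between the resulting distributions, can be sketched as follows; the event alphabet and sequences are invented for illustration:

      from collections import Counter
      import numpy as np
      from scipy.stats import entropy

      def ngram_counts(events, n, vocab):
          """Smoothed n-gram counts over a fixed vocabulary of n-grams."""
          grams = Counter(tuple(events[i:i + n]) for i in range(len(events) - n + 1))
          # Add-one smoothing keeps the KL divergence finite.
          return np.array([grams[g] + 1 for g in vocab], dtype=float)

      benign = ["dns", "http", "http", "dns", "http", "tls", "http"]
      malicious = ["dns", "dns", "dns", "http", "dns", "dns", "tls"]
      vocab = sorted({tuple(s[i:i + 2]) for s in (benign, malicious)
                      for i in range(len(s) - 1)})
      p = ngram_counts(benign, 2, vocab)
      q = ngram_counts(malicious, 2, vocab)
      print(f"KL(benign || malicious) = {entropy(p, q):.3f}")  # entropy() normalizes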

  16. The challenge of monitoring elusive large carnivores: An accurate and cost-effective tool to identify and sex pumas (Puma concolor) from footprints.

    PubMed

    Alibhai, Sky; Jewell, Zoe; Evans, Jonah

    2017-01-01

    Acquiring reliable data on large felid populations is crucial for effective conservation and management. However, large felids, typically solitary, elusive and nocturnal, are difficult to survey. Tagging and following individuals with VHF or GPS technology is the standard approach, but costs are high and these methodologies can compromise animal welfare. Such limitations can restrict the use of these techniques at population or landscape levels. In this paper we describe a robust technique to identify and sex individual pumas from footprints. We used a standardized image collection protocol to collect a reference database of 535 footprints from 35 captive pumas across 10 facilities: 19 females (300 footprints) and 16 males (235 footprints), ranging in age from 1-20 yrs. Images were processed in JMP data visualization software, generating 123 measurements from each footprint. Data were analyzed using a customized model based on a pairwise trail comparison using robust cross-validated discriminant analysis with Ward's clustering method. Classification accuracy was consistently > 90% both for individuals and for the correct assignment of footprints within trails, and > 99% for sex classification. The technique has the potential to greatly augment the methods available for studying pumas and other elusive felids, and is amenable to both citizen-science and opportunistic/local community data collection efforts, particularly as the data collection protocol is inexpensive and intuitive.

  17. Scoliosis curve type classification using kernel machine from 3D trunk image

    NASA Astrophysics Data System (ADS)

    Adankon, Mathias M.; Dansereau, Jean; Parent, Stefan; Labelle, Hubert; Cheriet, Farida

    2012-03-01

    Adolescent idiopathic scoliosis (AIS) is a deformity of the spine manifested by asymmetry and deformities of the external surface of the trunk. Classification of scoliosis deformities according to curve type is used to plan the management of scoliosis patients. Currently, scoliosis curve type is determined based on X-ray examination. However, cumulative exposure to X-ray radiation significantly increases the risk for certain cancers. In this paper, we propose a robust system that can classify the scoliosis curve type from a non-invasive acquisition of the 3D trunk surface of the patients. The 3D image of the trunk is divided into patches, and local geometric descriptors characterizing the surface of the back are computed from each patch to form the features. We reduce the dimensionality using Principal Component Analysis, retaining 53 components. A multi-class classifier is then built with the least-squares support vector machine (LS-SVM), a kernel classifier. For this study, a new kernel was designed in order to achieve a classifier more robust than those based on polynomial and Gaussian kernels. The proposed system was validated using data from 103 patients with different scoliosis curve types diagnosed and classified by an orthopedic surgeon from X-ray images. The average rate of successful classification was 93.3%, with a better rate of prediction for the major thoracic and lumbar/thoracolumbar types.
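
    The shape of this pipeline (patch features, PCA to 53 components, kernel multi-class classification) can be sketched with scikit-learn; a standard RBF kernel stands in for the authors' custom kernel and synthetic features stand in for the trunk-surface descriptors:

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(103, 300))    # 103 patients, 300 patch descriptors
      y = rng.integers(0, 3, size=103)   # 3 curve-type classes

      clf = make_pipeline(StandardScaler(), PCA(n_components=53),
                          SVC(kernel="rbf", C=1.0))
      print(f"CV accuracy: {cross_val_score(clf, X, y, cv=5).mean():.2f}")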

  18. Recovery after treatment and sensitivity to base rate.

    PubMed

    Doctor, J N

    1999-04-01

    Accurate classification of patients as having recovered after psychotherapy depends largely on the base rate of such recovery. This article presents methods for classifying participants as recovered after therapy. The approach described here considers base rate in the statistical model. These methods can be applied to psychotherapy outcome data for 2 purposes: (a) to determine the robustness of a data set to differing base-rate assumptions and (b) to formulate an appropriate cutoff that is beyond the range of cases that are not robust to plausible base-rate assumptions. Discussion addresses a fundamental premise underlying the study of recovery after psychotherapy.

  19. Extended robust support vector machine based on financial risk minimization.

    PubMed

    Takeda, Akiko; Fujiwara, Shuhei; Kanamori, Takafumi

    2014-11-01

    Financial risk measures have been used recently in machine learning. For example, the ν-support vector machine (ν-SVM) minimizes the conditional value at risk (CVaR) of the margin distribution. The measure is popular in finance because of its subadditivity property, but it is very sensitive to a few outliers in the tail of the distribution. We propose a new classification method, the extended robust SVM (ER-SVM), which minimizes an intermediate risk measure between the CVaR and the value at risk (VaR), with the expectation that the resulting model is less sensitive to outliers than ν-SVM. We can regard ER-SVM as an extension of robust SVM, which uses a truncated hinge loss. Numerical experiments suggest that ER-SVM can achieve better prediction performance with proper parameter settings.

  20. Pedestrian Detection in Far-Infrared Daytime Images Using a Hierarchical Codebook of SURF

    PubMed Central

    Besbes, Bassem; Rogozan, Alexandrina; Rus, Adela-Maria; Bensrhair, Abdelaziz; Broggi, Alberto

    2015-01-01

    One of the main challenges in intelligent vehicles concerns pedestrian detection for driving assistance. Recent experiments have shown that state-of-the-art descriptors provide better performance in the far-infrared (FIR) spectrum than in the visible one for pedestrian classification, even in daytime conditions. In this paper, we propose a pedestrian detector with an on-board FIR camera. Our main contribution is the exploitation of the specific characteristics of FIR images to design a fast, scale-invariant and robust pedestrian detector. Our system consists of three modules, each based on speeded-up robust feature (SURF) matching. The first module generates regions of interest (ROI): in FIR images pedestrian shapes may vary over large scales, but heads usually appear as bright regions, so ROI are detected with a high recall rate using a hierarchical codebook of SURF features located in head regions. The second module performs pedestrian full-body classification using an SVM, which enhances precision at low computational cost. In the third module, we combine the mean shift algorithm with inter-frame scale-invariant SURF feature tracking to enhance the robustness of our system. The experimental evaluation shows that our system outperforms, in the FIR domain, the state-of-the-art Haar-like Adaboost-cascade, histogram of oriented gradients (HOG)/linear SVM (linSVM) and MultiFtr pedestrian detectors trained on FIR images. PMID:25871724

  1. Analysis of Traffic Signals on a Software-Defined Network for Detection and Classification of a Man-in-the-Middle Attack

    DTIC Science & Technology

    2017-09-01

    unique characteristics of reported anomalies in the collected traffic signals to build a classification framework. Other cyber events, such as a... ...2]. The applications build flow rules using network topology information provided by the control plane [1]. Since the control plane is able to

  2. A Formal Specification and Verification Method for the Prevention of Denial of Service in Ada Services

    DTIC Science & Technology

    1988-03-01

    ...denial of service. This paper assumes that the reader is a computer science or engineering professional working in the area of formal specification and... recovery from such events as deadlocks and crashes can be accounted for in the computation of the waiting time for each service in the service hierarchy

  3. Deep learning decision fusion for the classification of urban remote sensing data

    NASA Astrophysics Data System (ADS)

    Abdi, Ghasem; Samadzadegan, Farhad; Reinartz, Peter

    2018-01-01

    Multisensor data fusion is one of the most common and popular topics in remote sensing data classification, as it provides a robust and complete description of the objects of interest. Furthermore, deep feature extraction has recently attracted significant interest and has become a hot research topic in the geoscience and remote sensing research community. A deep learning decision fusion approach is presented to perform multisensor urban remote sensing data classification. After deep features are extracted by utilizing joint spectral-spatial information, a soft-decision classifier is applied to train high-level feature representations and to fine-tune the deep learning framework. Next, decision-level fusion classifies objects of interest by the joint use of sensors. Finally, context-aware object-based postprocessing is used to enhance the classification results. A series of comparative experiments are conducted on the widely used dataset of the 2014 IEEE GRSS data fusion contest. The obtained results illustrate the considerable advantages of the proposed deep learning decision fusion over traditional classifiers.
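
    Decision-level fusion of the kind described reduces to combining per-sensor class posteriors before taking the argmax. A minimal sketch with illustrative sensors, scores and weights:

      import numpy as np

      def fuse_decisions(posteriors_per_sensor, weights=None):
          """posteriors_per_sensor: list of (n_samples, n_classes) arrays."""
          stacked = np.stack(posteriors_per_sensor)       # (sensors, n, c)
          if weights is None:
              weights = np.ones(len(posteriors_per_sensor))
          weights = np.asarray(weights, dtype=float)
          weights /= weights.sum()
          fused = np.tensordot(weights, stacked, axes=1)  # weighted average
          return fused.argmax(axis=1)

      # Two "sensors" scoring 4 samples over 3 classes.
      optical = np.array([[0.7, 0.2, 0.1], [0.3, 0.4, 0.3],
                          [0.1, 0.8, 0.1], [0.5, 0.3, 0.2]])
      sar = np.array([[0.6, 0.3, 0.1], [0.2, 0.7, 0.1],
                      [0.2, 0.6, 0.2], [0.1, 0.2, 0.7]])
      print(fuse_decisions([optical, sar], weights=[0.6, 0.4]))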

  4. Active relearning for robust supervised classification of pulmonary emphysema

    NASA Astrophysics Data System (ADS)

    Raghunath, Sushravya; Rajagopalan, Srinivasan; Karwoski, Ronald A.; Bartholmai, Brian J.; Robb, Richard A.

    2012-03-01

    Radiologists are adept at recognizing the appearance of lung parenchymal abnormalities in CT scans. However, the inconsistent differential diagnosis, due to subjective aggregation, mandates supervised classification. Towards optimizing emphysema classification, we introduce a physician-in-the-loop feedback approach in order to minimize uncertainty in the selected training samples. Using multi-view inductive learning with the training samples, an ensemble of Support Vector Machine (SVM) models, each based on a specific pairwise dissimilarity metric, was constructed in less than six seconds. In the active relearning phase, ensemble-expert label conflicts were resolved by an expert. This just-in-time feedback with unoptimized SVMs yielded a 15% increase in classification accuracy and a 25% reduction in the number of support vectors. The generality of relearning was assessed in the optimized parameter space of six different classifiers across seven dissimilarity metrics. The resultant average accuracy improvement rose to 21%. The cooperative feedback method proposed here could enhance both diagnostic and staging throughput efficiency in chest radiology practice.

  5. Classifier fusion for VoIP attacks classification

    NASA Astrophysics Data System (ADS)

    Safarik, Jakub; Rezac, Filip

    2017-05-01

    SIP is one of the most successful protocols in the field of IP telephony communication. It establishes and manages VoIP calls. As the number of SIP implementations rises, we can expect a higher number of attacks on the communication system in the near future. This work aims at malicious SIP traffic classification. A number of various machine learning algorithms have been developed for attack classification. The paper presents a comparison of current research and the use of a classifier fusion method leading to a potential decrease in classification error rate. Combining classifiers yields a more robust solution, avoiding difficulties that may affect single algorithms. Different voting schemes, combination rules, and classifiers are discussed to improve the overall performance. All classifiers have been trained on real malicious traffic. The traffic monitoring concept relies on a network of honeypot nodes. These honeypots run in several networks spread across different locations. Separation of honeypots allows us to gain independent and trustworthy attack information.
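
    A minimal scikit-learn sketch of classifier fusion by voting; the three base learners and the synthetic data are illustrative, not the classifiers evaluated in the paper:

      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier, VotingClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=600, n_features=20, n_classes=3,
                                 n_informative=8, random_state=0)
      fusion = VotingClassifier(
          estimators=[("lr", LogisticRegression(max_iter=1000)),
                      ("rf", RandomForestClassifier(random_state=0)),
                      ("svm", SVC(probability=True, random_state=0))],
          voting="soft")  # average the predicted class probabilities
      print(f"Fused CV accuracy: {cross_val_score(fusion, X, y, cv=5).mean():.2f}")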

  6. Forensic Discrimination of Latent Fingerprints Using Laser-Induced Breakdown Spectroscopy (LIBS) and Chemometric Approaches.

    PubMed

    Yang, Jun-Ho; Yoh, Jack J

    2018-01-01

    A novel technique is reported for separating overlapping latent fingerprints using chemometric approaches that combine laser-induced breakdown spectroscopy (LIBS) and multivariate analysis. The LIBS technique provides the capability of real-time analysis and high-frequency scanning, as well as data on the chemical composition of overlapping latent fingerprints. These spectra offer valuable information for the classification and reconstruction of overlapping latent fingerprints when appropriate statistical multivariate analysis is applied. The current study employs principal component analysis and partial least squares methods for the classification of latent fingerprints from the LIBS spectra. This technique was successfully demonstrated through a classification study of four distinct latent fingerprints using classification methods such as soft independent modeling of class analogy (SIMCA) and partial least squares discriminant analysis (PLS-DA). The novel method yielded an accuracy of more than 85% and was proven to be sufficiently robust. Furthermore, through laser scanning analysis at a spatial interval of 125 µm, the overlapping fingerprints were reconstructed as separate two-dimensional forms.

  7. A comparison of different chemometrics approaches for the robust classification of electronic nose data.

    PubMed

    Gromski, Piotr S; Correa, Elon; Vaughan, Andrew A; Wedge, David C; Turner, Michael L; Goodacre, Royston

    2014-11-01

    Accurate detection of certain chemical vapours is important, as these may be diagnostic for the presence of weapons, drugs of misuse or disease. In order to achieve this, chemical sensors could be deployed remotely. However, the readout from such sensors is a multivariate pattern, and this needs to be interpreted robustly using powerful supervised learning methods. Therefore, in this study, we compared the classification accuracy of four pattern recognition algorithms: linear discriminant analysis (LDA), partial least squares-discriminant analysis (PLS-DA), random forests (RF) and support vector machines (SVM), the last employing four different kernels. For this purpose, we have used electronic nose (e-nose) sensor data (Wedge et al., Sensors Actuators B Chem 143:365-372, 2009). To allow direct comparison between the four algorithms, we employed two model validation procedures based on either 10-fold cross-validation or bootstrapping. The results show that LDA (91.56% accuracy) and SVM with a polynomial kernel (91.66% accuracy) were very effective at analysing these e-nose data. These two models gave superior prediction accuracy, sensitivity and specificity in comparison to the other techniques employed. With respect to the e-nose sensor data studied here, our findings recommend that SVM with a polynomial kernel should be favoured as a classification method over the other statistical models that we assessed. SVMs with non-linear kernels have the advantage that they can model non-linear as well as linear mappings from the analytical data space to multi-group classifications, and would thus be a suitable algorithm for the analysis of most e-nose sensor data.
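
    The comparison protocol, scoring several classifiers on identical folds, can be sketched as follows; synthetic data stand in for the e-nose measurements and only three of the four algorithm families are shown:

      from sklearn.datasets import make_classification
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import StratifiedKFold, cross_val_score
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=300, n_features=32, n_classes=4,
                                 n_informative=10, random_state=1)
      cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
      models = {"LDA": LinearDiscriminantAnalysis(),
                "RF": RandomForestClassifier(random_state=1),
                "SVM (poly)": SVC(kernel="poly", degree=2)}
      for name, model in models.items():
          print(f"{name:>10}: {cross_val_score(model, X, y, cv=cv).mean():.3f}")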

  8. Robust Classification of Small-Molecule Mechanism of Action Using a Minimalist High-Content Microscopy Screen and Multidimensional Phenotypic Trajectory Analysis

    PubMed Central

    Twarog, Nathaniel R.; Low, Jonathan A.; Currier, Duane G.; Miller, Greg; Chen, Taosheng; Shelat, Anang A.

    2016-01-01

    Phenotypic screening through high-content automated microscopy is a powerful tool for evaluating the mechanism of action of candidate therapeutics. Despite more than a decade of development, however, high content assays have yielded mixed results, identifying robust phenotypes in only a small subset of compound classes. This has led to a combinatorial explosion of assay techniques, analyzing cellular phenotypes across dozens of assays with hundreds of measurements. Here, using a minimalist three-stain assay and only 23 basic cellular measurements, we developed an analytical approach that leverages informative dimensions extracted by linear discriminant analysis to evaluate similarity between the phenotypic trajectories of different compounds in response to a range of doses. This method enabled us to visualize biologically-interpretable phenotypic tracks populated by compounds of similar mechanism of action, cluster compounds according to phenotypic similarity, and classify novel compounds by comparing them to phenotypically active exemplars. Hierarchical clustering applied to 154 compounds from over a dozen different mechanistic classes demonstrated tight agreement with published compound mechanism classification. Using 11 phenotypically active mechanism classes, classification was performed on all 154 compounds: 78% were correctly identified as belonging to one of the 11 exemplar classes or to a different unspecified class, with accuracy increasing to 89% when less phenotypically active compounds were excluded. Importantly, several apparent clustering and classification failures, including rigosertib and 5-fluoro-2’-deoxycytidine, instead revealed more complex mechanisms or off-target effects verified by more recent publications. These results show that a simple, easily replicated, minimalist high-content assay can reveal subtle variations in the cellular phenotype induced by compounds and can correctly predict mechanism of action, as long as the appropriate analytical tools are used. PMID:26886014

  9. Robust Classification of Small-Molecule Mechanism of Action Using a Minimalist High-Content Microscopy Screen and Multidimensional Phenotypic Trajectory Analysis.

    PubMed

    Twarog, Nathaniel R; Low, Jonathan A; Currier, Duane G; Miller, Greg; Chen, Taosheng; Shelat, Anang A

    2016-01-01

    Phenotypic screening through high-content automated microscopy is a powerful tool for evaluating the mechanism of action of candidate therapeutics. Despite more than a decade of development, however, high content assays have yielded mixed results, identifying robust phenotypes in only a small subset of compound classes. This has led to a combinatorial explosion of assay techniques, analyzing cellular phenotypes across dozens of assays with hundreds of measurements. Here, using a minimalist three-stain assay and only 23 basic cellular measurements, we developed an analytical approach that leverages informative dimensions extracted by linear discriminant analysis to evaluate similarity between the phenotypic trajectories of different compounds in response to a range of doses. This method enabled us to visualize biologically-interpretable phenotypic tracks populated by compounds of similar mechanism of action, cluster compounds according to phenotypic similarity, and classify novel compounds by comparing them to phenotypically active exemplars. Hierarchical clustering applied to 154 compounds from over a dozen different mechanistic classes demonstrated tight agreement with published compound mechanism classification. Using 11 phenotypically active mechanism classes, classification was performed on all 154 compounds: 78% were correctly identified as belonging to one of the 11 exemplar classes or to a different unspecified class, with accuracy increasing to 89% when less phenotypically active compounds were excluded. Importantly, several apparent clustering and classification failures, including rigosertib and 5-fluoro-2'-deoxycytidine, instead revealed more complex mechanisms or off-target effects verified by more recent publications. These results show that a simple, easily replicated, minimalist high-content assay can reveal subtle variations in the cellular phenotype induced by compounds and can correctly predict mechanism of action, as long as the appropriate analytical tools are used.

  10. Development of a brain MRI-based hidden Markov model for dementia recognition.

    PubMed

    Chen, Ying; Pham, Tuan D

    2013-01-01

    Dementia is an age-related cognitive decline indicated by early degeneration of cortical and sub-cortical structures. Characterizing those morphological changes can help us understand disease development and contribute to early prediction and prevention, but building a model that best captures brain structural variability and remains valid for both disease classification and interpretation is extremely challenging. The current study aimed to establish a computational approach for modeling the magnetic resonance imaging (MRI)-based structural complexity of the brain using the framework of hidden Markov models (HMMs) for dementia recognition. Regularity dimension and semi-variogram were used to extract structural features of the brains, and a vector quantization method was applied to convert the extracted feature vectors to prototype vectors. The output VQ indices were then utilized to estimate parameters for the HMMs. To validate accuracy and robustness, experiments were carried out on individuals characterized as non-demented or as having mild Alzheimer's disease. Four HMMs were constructed, one each for the cohorts of non-demented young, middle-aged and elderly subjects and demented elderly subjects. Classification was carried out using a dataset including both non-demented and demented individuals across a wide age range. The proposed HMMs succeeded in recognizing individuals with mild Alzheimer's disease and achieved better classification accuracy than related works using different classifiers. The results show the ability of the proposed modeling to recognize early dementia. The findings from this research will allow individual classification to support the early diagnosis and prediction of dementia. The brain MRI-based HMMs developed here are efficient and robust, and can easily be used by clinicians as a computer-aided tool for validating imaging biomarkers for early prediction of dementia.
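
    A minimal sketch of the recognition step: one discrete HMM per group over vector-quantized (VQ) index sequences, classified by maximum log-likelihood via the forward algorithm. The parameters below are randomly generated for illustration; a real system would fit them with Baum-Welch:

      import numpy as np

      def forward_loglik(obs, pi, A, B):
          """Log-likelihood of a discrete sequence under an HMM with initial
          probabilities pi (S,), transitions A (S, S), emissions B (S, V)."""
          alpha = pi * B[:, obs[0]]
          loglik = np.log(alpha.sum())
          alpha /= alpha.sum()
          for o in obs[1:]:
              alpha = (alpha @ A) * B[:, o]   # scaled forward recursion
              loglik += np.log(alpha.sum())
              alpha /= alpha.sum()
          return loglik

      rng = np.random.default_rng(0)
      def random_hmm(states=3, vocab=16):
          return (rng.dirichlet(np.ones(states)),
                  rng.dirichlet(np.ones(states), size=states),
                  rng.dirichlet(np.ones(vocab), size=states))

      models = {g: random_hmm() for g in ["young", "middle-aged", "elderly", "demented"]}
      sequence = rng.integers(0, 16, size=40)   # VQ indices of one scan
      best = max(models, key=lambda g: forward_loglik(sequence, *models[g]))
      print(f"Assigned group: {best}")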

  11. Robust surface roughness indices and morphological interpretation

    NASA Astrophysics Data System (ADS)

    Trevisani, Sebastiano; Rocca, Michele

    2016-04-01

    Geostatistical image/surface texture indices based on the variogram (Atkinson and Lewis, 2000; Herzfeld and Higginson, 1996; Trevisani et al., 2012) and on its robust variant MAD (median absolute differences; Trevisani and Rocca, 2015) offer powerful tools for the analysis and interpretation of surface morphology (potentially not limited to the solid earth). In particular, the proposed robust index (Trevisani and Rocca, 2015), with its implementation based on local kernels, permits the derivation of a wide set of robust and customizable geomorphometric indices capable of outlining specific aspects of surface texture. The stability of MAD in the presence of signal noise and abrupt changes in spatial variability is well suited for the analysis of high-resolution digital terrain models. Moreover, the implementation of MAD by means of a pixel-centered perspective based on local kernels, with some analogies to the local binary pattern approach (Lucieer and Stein, 2005; Ojala et al., 2002), permits the creation of custom roughness indices capable of outlining different aspects of surface roughness (Grohmann et al., 2011; Smith, 2014). In the proposed poster, some potentialities of the new indices in the context of geomorphometry and landscape analysis will be presented. At the same time, challenges and future developments related to the proposed indices will be outlined. Atkinson, P.M., Lewis, P., 2000. Geostatistical classification for remote sensing: an introduction. Computers & Geosciences 26, 361-371. Grohmann, C.H., Smith, M.J., Riccomini, C., 2011. Multiscale Analysis of Topographic Surface Roughness in the Midland Valley, Scotland. IEEE Transactions on Geoscience and Remote Sensing 49, 1200-1213. Herzfeld, U.C., Higginson, C.A., 1996. Automated geostatistical seafloor classification - Principles, parameters, feature vectors, and discrimination criteria. Computers and Geosciences 22 (1), 35-52. Lucieer, A., Stein, A., 2005. Texture-based landform segmentation of LiDAR imagery. International Journal of Applied Earth Observation and Geoinformation 6, 261-270. Ojala, T., Pietikäinen, M., Mäenpää, T., 2002. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence 24 (7), 971-987. Smith, M.W., 2014. Roughness in the Earth Sciences. Earth-Science Reviews 136, 202-225. Trevisani, S., Cavalli, M., Marchi, L., 2012. Surface texture analysis of a high-resolution DTM: Interpreting an alpine basin. Geomorphology 161-162, 26-39. Trevisani, S., Rocca, M., 2015. MAD: robust image texture analysis for applications in high resolution geomorphometry. Computers & Geosciences 81, 78-92. doi:10.1016/j.cageo.2015.04.003.

  12. A comparison of three feature selection methods for object-based classification of sub-decimeter resolution UltraCam-L imagery

    USDA-ARS?s Scientific Manuscript database

    The availability of numerous spectral, spatial, and contextual features with object-based image analysis (OBIA) renders the selection of optimal features a time-consuming and subjective process. While several feature selection methods have been used in conjunction with OBIA, a robust comparison of th...

  13. Accurate Detection of Dysmorphic Nuclei Using Dynamic Programming and Supervised Classification.

    PubMed

    Verschuuren, Marlies; De Vylder, Jonas; Catrysse, Hannes; Robijns, Joke; Philips, Wilfried; De Vos, Winnok H

    2017-01-01

    A vast array of pathologies is typified by the presence of nuclei with an abnormal morphology. Dysmorphic nuclear phenotypes feature dramatic size changes or foldings, but also entail much subtler deviations such as nuclear protrusions called blebs. Due to their unpredictable size, shape and intensity, dysmorphic nuclei are often not accurately detected in standard image analysis routines. To enable accurate detection of dysmorphic nuclei in confocal and widefield fluorescence microscopy images, we have developed an automated segmentation algorithm, called Blebbed Nuclei Detector (BleND), which relies on two-pass thresholding for initial nuclear contour detection, and an optimal path finding algorithm, based on dynamic programming, for refining these contours. Using a robust error metric, we show that our method matches manual segmentation in terms of precision and outperforms state-of-the-art nuclear segmentation methods. Its high performance allowed for building and integrating a robust classifier that recognizes dysmorphic nuclei with an accuracy above 95%. The combined segmentation-classification routine is bound to facilitate nucleus-based diagnostics and enable real-time recognition of dysmorphic nuclei in intelligent microscopy workflows.

  14. Accurate Detection of Dysmorphic Nuclei Using Dynamic Programming and Supervised Classification

    PubMed Central

    Verschuuren, Marlies; De Vylder, Jonas; Catrysse, Hannes; Robijns, Joke; Philips, Wilfried

    2017-01-01

    A vast array of pathologies is typified by the presence of nuclei with an abnormal morphology. Dysmorphic nuclear phenotypes feature dramatic size changes or foldings, but also entail much subtler deviations such as nuclear protrusions called blebs. Due to their unpredictable size, shape and intensity, dysmorphic nuclei are often not accurately detected in standard image analysis routines. To enable accurate detection of dysmorphic nuclei in confocal and widefield fluorescence microscopy images, we have developed an automated segmentation algorithm, called Blebbed Nuclei Detector (BleND), which relies on two-pass thresholding for initial nuclear contour detection, and an optimal path finding algorithm, based on dynamic programming, for refining these contours. Using a robust error metric, we show that our method matches manual segmentation in terms of precision and outperforms state-of-the-art nuclear segmentation methods. Its high performance allowed for building and integrating a robust classifier that recognizes dysmorphic nuclei with an accuracy above 95%. The combined segmentation-classification routine is bound to facilitate nucleus-based diagnostics and enable real-time recognition of dysmorphic nuclei in intelligent microscopy workflows. PMID:28125723

  15. Automatic segmentation of multimodal brain tumor images based on classification of super-voxels.

    PubMed

    Kadkhodaei, M; Samavi, S; Karimi, N; Mohaghegh, H; Soroushmehr, S M R; Ward, K; All, A; Najarian, K

    2016-08-01

    Despite the rapid growth in brain tumor segmentation approaches, there are still many challenges in this field. Automatic segmentation of brain images has a critical role in decreasing the burden of manual labeling and increasing the robustness of brain tumor diagnosis. We consider segmentation of glioma tumors, which have wide variation in size, shape and appearance. In this paper, images are enhanced and normalized to the same scale in a preprocessing step. The enhanced images are then segmented based on their intensities using 3D super-voxels. A tumor region in an image can usually be regarded as a salient object. Inspired by this observation, we propose a new feature which uses a saliency detection algorithm. An edge-aware filtering technique is employed to align edges of the original image to the saliency map, which enhances the boundaries of the tumor. Then, for classification of tumors in brain images, a set of robust texture features is extracted from the super-voxels. Experimental results indicate that our proposed method outperforms a comparable state-of-the-art algorithm in terms of Dice score.

  16. Estimation of source location and ground impedance using a hybrid multiple signal classification and Levenberg-Marquardt approach

    NASA Astrophysics Data System (ADS)

    Tam, Kai-Chung; Lau, Siu-Kit; Tang, Shiu-Keung

    2016-07-01

    A microphone array signal processing method for locating a stationary point source over a locally reactive ground and for estimating the ground impedance is examined in detail in the present study. A non-linear least squares approach using the Levenberg-Marquardt method is proposed to overcome the problem of unknown ground impedance. The multiple signal classification (MUSIC) method is used to give an initial estimate of the source location, while forward-backward spatial smoothing is adopted as a pre-processor for the source localization to minimize the effects of source coherence. The accuracy and robustness of the proposed signal processing method are examined. Results show that source localization in the horizontal direction by MUSIC is satisfactory. However, source coherence drastically reduces the accuracy of source height estimation. The further application of the Levenberg-Marquardt method, with the results from MUSIC as initial inputs, significantly improves the accuracy of source height estimation. The proposed method provides effective and robust estimation of the ground surface impedance.
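
    The MUSIC step can be sketched for a uniform linear array as follows; the geometry, noise level and source angle are illustrative, and the spatial smoothing and Levenberg-Marquardt impedance stages are omitted:

      import numpy as np

      M, N, K = 8, 200, 1           # sensors, snapshots, sources
      d_over_lambda = 0.5           # half-wavelength element spacing
      theta_true = np.deg2rad(20.0)

      rng = np.random.default_rng(0)
      m = np.arange(M)
      steer = lambda th: np.exp(-2j * np.pi * d_over_lambda * m * np.sin(th))
      s = rng.normal(size=N) + 1j * rng.normal(size=N)   # source signal
      X = np.outer(steer(theta_true), s)
      X += 0.1 * (rng.normal(size=X.shape) + 1j * rng.normal(size=X.shape))

      R = X @ X.conj().T / N                  # sample covariance
      eigvals, eigvecs = np.linalg.eigh(R)    # eigenvalues ascending
      En = eigvecs[:, :M - K]                 # noise subspace

      thetas = np.deg2rad(np.linspace(-90, 90, 721))
      pseudo = [1.0 / np.linalg.norm(En.conj().T @ steer(th)) ** 2 for th in thetas]
      print(f"MUSIC peak at {np.degrees(thetas[int(np.argmax(pseudo))]):.1f} deg")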

  17. The road map towards providing a robust Raman spectroscopy-based cancer diagnostic platform and integration into clinic

    NASA Astrophysics Data System (ADS)

    Lau, Katherine; Isabelle, Martin; Lloyd, Gavin R.; Old, Oliver; Shepherd, Neil; Bell, Ian M.; Dorney, Jennifer; Lewis, Aaran; Gaifulina, Riana; Rodriguez-Justo, Manuel; Kendall, Catherine; Stone, Nicolas; Thomas, Geraint; Reece, David

    2016-03-01

    Despite its demonstrated potential as an accurate cancer diagnostic tool, Raman spectroscopy (RS) is yet to be adopted by the clinic for histopathology reviews. The Stratified Medicine through Advanced Raman Technologies (SMART) consortium has begun to address some of the hurdles to its adoption for cancer diagnosis. These hurdles include awareness and acceptance of the technology, the practicality of integration into the histopathology workflow, data reproducibility and the availability of transferable models. We have formed a consortium to jointly develop optimised protocols for tissue sample preparation, data collection and analysis. These protocols will be supported by the provision of suitable hardware and software tools to allow statistically sound classification models to be built and transferred for use on different systems. In addition, we are building a validated gastrointestinal (GI) cancers model, which can be trialled as part of the histopathology workflow at hospitals, and a classification tool. At the end of the project, we aim to deliver a robust Raman-based diagnostic platform to enable clinical researchers to stage cancer, define tumour margins, build cancer diagnostic models and discover novel disease biomarkers.

  18. Quantifying and Characterizing Tonic Thermal Pain Across Subjects From EEG Data Using Random Forest Models.

    PubMed

    Vijayakumar, Vishal; Case, Michelle; Shirinpour, Sina; He, Bin

    2017-12-01

    Effective pain assessment and management strategies are needed to better manage pain. In addition to self-report, an objective pain assessment system can provide a more complete picture of the neurophysiological basis for pain. In this study, a robust and accurate machine learning approach is developed to quantify tonic thermal pain across healthy subjects into a maximum of ten distinct classes. A random forest model was trained to predict pain scores using time-frequency wavelet representations of independent components obtained from electroencephalography (EEG) data, and the relative importance of each frequency band to pain quantification was assessed. The mean classification accuracy for predicting pain on an independent test subject over a range of 1-10 is 89.45%, the highest among existing state-of-the-art quantification algorithms for EEG. The gamma band is the most important to both intersubject and intrasubject classification accuracy. The robustness and generalizability of the classifier are demonstrated. Our results demonstrate the potential of this tool to be used clinically to help improve chronic pain treatment and to establish spectral biomarkers for future pain-related studies using EEG.
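
    A minimal sketch of the pipeline shape (wavelet energies per epoch feeding a random forest over pain classes), with synthetic signals and labels standing in for the EEG independent components and pain scores:

      import numpy as np
      import pywt
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X_raw = rng.normal(size=(200, 512))      # 200 epochs of 512 samples
      y = rng.integers(1, 11, size=200)        # pain scores 1-10

      def wavelet_energies(signal, wavelet="db4", level=5):
          """Energy of each wavelet decomposition band of one epoch."""
          return np.array([np.sum(c ** 2)
                           for c in pywt.wavedec(signal, wavelet, level=level)])

      X = np.array([wavelet_energies(s) for s in X_raw])
      rf = RandomForestClassifier(n_estimators=200, random_state=0)
      print(f"CV accuracy: {cross_val_score(rf, X, y, cv=5).mean():.2f}")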

  19. An Anomalous Noise Events Detector for Dynamic Road Traffic Noise Mapping in Real-Life Urban and Suburban Environments

    PubMed Central

    2017-01-01

    One of the main aspects affecting the quality of life of people living in urban and suburban areas is their continued exposure to high Road Traffic Noise (RTN) levels. Until now, noise measurements in cities have been performed by professionals, recording data in certain locations to build a noise map afterwards. However, the deployment of Wireless Acoustic Sensor Networks (WASN) has enabled automatic noise mapping in smart cities. In order to obtain a reliable picture of the RTN levels affecting citizens, Anomalous Noise Events (ANE) unrelated to road traffic should be removed from the noise map computation. To this aim, this paper introduces an Anomalous Noise Event Detector (ANED) designed to differentiate between RTN and ANE in real time within a predefined interval, running on the distributed low-cost acoustic sensors of a WASN. The proposed ANED follows a two-class audio event detection and classification approach, instead of multi-class or one-class classification schemes, taking advantage of the collection of representative acoustic data in real-life environments. The experiments conducted within the DYNAMAP project, implemented on ARM-based acoustic sensors, show the feasibility of the proposal both in terms of computational cost and classification performance, using standard Mel cepstral coefficients and Gaussian Mixture Models (GMM). The two-class GMM core classifier improves the F1 measure of the baseline universal one-class GMM classifier by a relative 18.7% and 31.8% for suburban and urban environments, respectively, within the 1-s integration interval. Nevertheless, according to the results, the classification performance of the current ANED implementation still has room for improvement. PMID:29023397
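
    The two-class GMM decision reduces to a log-likelihood comparison between a road-traffic-noise model and an anomalous-event model over cepstral frames. A minimal sketch with synthetic MFCC-like features standing in for the real sensor data:

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      rtn_train = rng.normal(0.0, 1.0, size=(1000, 13))  # MFCC-like frames
      ane_train = rng.normal(1.5, 1.2, size=(400, 13))

      gmm_rtn = GaussianMixture(n_components=8, random_state=0).fit(rtn_train)
      gmm_ane = GaussianMixture(n_components=8, random_state=0).fit(ane_train)

      def classify_frames(frames):
          """True where a frame is flagged as an anomalous noise event."""
          return gmm_ane.score_samples(frames) > gmm_rtn.score_samples(frames)

      test = rng.normal(1.5, 1.2, size=(10, 13))
      print(classify_frames(test))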

  20. Organizing symmetry-protected topological phases by layering and symmetry reduction: A minimalist perspective

    NASA Astrophysics Data System (ADS)

    Xiong, Charles Zhaoxi; Alexandradinata, A.

    2018-03-01

    It is demonstrated that fermionic/bosonic symmetry-protected topological (SPT) phases across different dimensions and symmetry classes can be organized using geometric constructions that increase dimensions and symmetry-reduction maps that change symmetry groups. Specifically, it is shown that the interacting classifications of SPT phases with and without glide symmetry fit into a short exact sequence, so that the classification with glide is constrained to be a direct sum of cyclic groups of order 2 or 4. Applied to fermionic SPT phases in the Wigner-Dyson class AII, this implies that the complete interacting classification in the presence of glide is Z4⊕Z2⊕Z2 in three dimensions. In particular, the hourglass-fermion phase recently realized in the band insulator KHgSb must be robust to interactions. Generalizations to spatiotemporal glide symmetries are discussed.

  1. Sensitivity analysis of the FEMA HAZUS-MH MR4 Earthquake Model using seismic events affecting King County Washington

    NASA Astrophysics Data System (ADS)

    Neighbors, C.; Noriega, G. R.; Caras, Y.; Cochran, E. S.

    2010-12-01

    HAZUS-MH MR4 (HAZards U.S. Multi-Hazard Maintenance Release 4) is risk-estimation software developed by FEMA to calculate potential losses due to natural disasters. Federal, state, regional, and local governments use the HAZUS-MH Earthquake Model for earthquake risk mitigation, preparedness, response, and recovery planning (FEMA, 2003). In this study, we examine several parameters used by the HAZUS-MH Earthquake Model methodology to understand how modifying the user-defined settings affects ground motion analysis, seismic risk assessment and earthquake loss estimates. This analysis focuses on both shallow crustal and deep intraslab events in the American Pacific Northwest. Specifically, the historic 1949 Mw 6.8 Olympia, 1965 Mw 6.6 Seattle-Tacoma and 2001 Mw 6.8 Nisqually normal-fault intraslab events and scenario large-magnitude Seattle reverse-fault crustal events are modeled. Inputs analyzed include variations of deterministic event scenarios combined with hazard maps and USGS ShakeMaps. This approach utilizes the capacity of the HAZUS-MH Earthquake Model to define landslide- and liquefaction-susceptibility hazards with local groundwater level and slope stability information. Where ShakeMap inputs are not used, events are run in combination with NEHRP soil classifications to determine site amplification effects. The earthquake component of HAZUS-MH applies a series of empirical ground motion attenuation relationships developed from source parameters of both regional and global historical earthquakes to estimate strong ground motion. Ground motion and the resulting ground failure due to earthquakes are then used to calculate direct physical damage for the general building stock, essential facilities, and lifelines, including transportation and utility systems. Earthquake losses are expressed in structural, economic and social terms. Where available, comparisons between recorded earthquake losses and HAZUS-MH earthquake losses are used to determine how region coordinators can most effectively utilize their resources for earthquake risk mitigation. This study is being conducted in collaboration with King County, WA officials to determine the best model inputs necessary to generate robust HAZUS-MH models for the Pacific Northwest.

  2. Event classification and optimization methods using artificial intelligence and other relevant techniques: Sharing the experiences

    NASA Astrophysics Data System (ADS)

    Mohamed, Abdul Aziz; Hasan, Abu Bakar; Ghazali, Abu Bakar Mhd.

    2017-01-01

    Classification of large datasets into their respective classes or groups can be carried out with the help of artificial intelligence (AI) tools readily available in the market. To get the optimum or best results, an optimization tool can be applied to those data. Classification and optimization have been used by researchers throughout their work, and the outcomes have been very encouraging. Here, the authors share their experiences in three different areas of applied research.

  3. Classification of passive auditory event-related potentials using discriminant analysis and self-organizing feature maps.

    PubMed

    Schönweiler, R; Wübbelt, P; Tolloczko, R; Rose, C; Ptok, M

    2000-01-01

    Discriminant analysis (DA) and self-organizing feature maps (SOFM) were used to classify passively evoked auditory event-related potentials (ERP) P(1), N(1), P(2) and N(2). Responses from 16 children with severe behavioral auditory perception deficits, 16 children with marked behavioral auditory perception deficits, and 14 controls were examined. Eighteen ERP amplitude parameters were selected for examination of statistical differences between the groups. Different DA methods and SOFM configurations were trained on the values. SOFM yielded better classification results than the DA methods. Subsequently, measurements from another 37 subjects, unknown to the trained SOFM, were used to test the reliability of the system. With 10-dimensional vectors, reliable classifications were obtained that matched behavioral auditory perception deficits in 96% of cases, implying central auditory processing disorder (CAPD). The results also support the assumption that CAPD includes a 'non-peripheral' auditory processing deficit. Copyright 2000 S. Karger AG, Basel.
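
    A minimal sketch of SOFM-based classification of 10-dimensional ERP vectors, using the third-party MiniSom package as one possible implementation; the map size, training length and data are illustrative:

      from collections import Counter, defaultdict
      import numpy as np
      from minisom import MiniSom

      rng = np.random.default_rng(0)
      X = rng.normal(size=(60, 10))     # 10-D ERP amplitude vectors
      y = rng.integers(0, 3, size=60)   # 3 perception-deficit groups

      som = MiniSom(6, 6, 10, sigma=1.0, learning_rate=0.5, random_seed=0)
      som.train_random(X, 2000)

      # Label each map node by majority vote of its winning training samples.
      node_votes = defaultdict(Counter)
      for xi, yi in zip(X, y):
          node_votes[som.winner(xi)][yi] += 1

      def predict(x):
          votes = node_votes.get(som.winner(x))
          return votes.most_common(1)[0][0] if votes else None

      print(predict(X[0]), y[0])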

  4. Extreme weather events in southern Germany - Climatological risk and development of a large-scale identification procedure

    NASA Astrophysics Data System (ADS)

    Matthies, A.; Leckebusch, G. C.; Rohlfing, G.; Ulbrich, U.

    2009-04-01

    Extreme weather events such as thunderstorms, hail and heavy rain or snowfall can pose a threat to human life and to considerable tangible assets. Yet there is a lack of knowledge about present-day climatological risk, its economic effects, and its changes due to rising greenhouse gas concentrations. Therefore, parts of the economy particularly sensitive to extreme weather events, such as insurance companies and airports, require regional risk analyses, early warning and prediction systems to cope with such events. Such an attempt is made for southern Germany, in close cooperation with stakeholders. Comparing ERA40 and station data with impact records of Munich Re and Munich Airport, the 90th percentile was found to be a suitable threshold for extreme, impact-relevant precipitation events. Different methods for the classification of causal synoptic situations have been tested on ERA40 reanalyses. An objective scheme for the classification of Lamb's circulation weather types (CWTs) proved most suitable for correct classification of the large-scale flow conditions. Certain CWTs turned out to be prone to heavy precipitation, while others carry a very low risk of such events. Other large-scale parameters are tested in connection with CWTs to find the combination with the highest skill for identifying extreme precipitation events in climate model data (ECHAM5 and CLM). For example, vorticity advection at 700 hPa shows good results, but assumes knowledge of regional orographic particularities. Therefore, ongoing work focuses on additional testing of parameters that indicate deviations from a basic state of the atmosphere, such as the Eady growth rate or the newly developed Dynamic State Index. Evaluation results will be used to estimate the skill of the regional climate model CLM in simulating the frequency and intensity of extreme weather events. Data from the A1B scenario (2000-2050) will be examined for a possible climate change signal.

  5. Real-Time Fault Classification for Plasma Processes

    PubMed Central

    Yang, Ryan; Chen, Rongshun

    2011-01-01

    Plasma process tools, which usually cost several million US dollars, are often used in the semiconductor fabrication etching process. If the plasma process is halted due to some process fault, productivity is reduced and costs increase. In order to maximize the product/wafer yield and tool productivity, timely and effective fault detection is required in a plasma reactor. The classification of fault events can help users to quickly identify faulty processes, and thus can save downtime of the plasma tool. In this work, optical emission spectroscopy (OES) is employed as the metrology sensor for in-situ process monitoring. The matching-rate indicator from our previous work (Yang, R.; Chen, R.S. Sensors 2010, 10, 5703-5723), split into twelve different match rates by spectrum band, is used to detect faulty processes. Based on the match data, real-time classification of plasma faults is achieved by a novel method developed in this study. Experiments were conducted to validate the novel fault classification. From the experimental results, we may conclude that the proposed method is feasible, inasmuch as the overall classification accuracy for fault event shifts is 27 out of 28, or about 96.4%. PMID:22164001

  6. Improving the discrimination of hand motor imagery via virtual reality based visual guidance.

    PubMed

    Liang, Shuang; Choi, Kup-Sze; Qin, Jing; Pang, Wai-Man; Wang, Qiong; Heng, Pheng-Ann

    2016-08-01

    While research on the brain-computer interface (BCI) has been active in recent years, obtaining high-quality electrical brain signals for accurately recognizing human intentions, enabling reliable communication and interaction, is still a challenging task. Evidence has shown that visually guided motor imagery (MI) can modulate sensorimotor electroencephalographic (EEG) rhythms in humans, but how to design and implement efficient visual guidance during MI in order to produce better event-related desynchronization (ERD) patterns is still unclear. The aim of this paper is to investigate the effect of using object-oriented movements in a virtual environment as visual guidance on the modulation of sensorimotor EEG rhythms generated by hand MI. To improve the classification accuracy of MI, we further propose an algorithm to automatically extract subject-specific optimal frequency and time bands for the discrimination of ERD patterns produced by left- and right-hand MI. The experimental results show that the average classification accuracy of object-directed scenarios is much better than that of non-object-directed scenarios (76.87% vs. 69.66%); the difference is statistically significant (t-test, p = 0.0207). Compared with algorithms based on fixed frequency and time bands, contralateral-dominant ERD patterns can be enhanced by using the subject-specific optimal frequency and time bands obtained by our proposed algorithm. These findings have the potential to improve the efficacy and robustness of MI-based BCI applications. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
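
    A generic sketch of subject-specific band selection in the spirit of the proposed algorithm: scan candidate frequency bands, band-pass filter, use log band power as the feature, and keep the band with the best cross-validated accuracy (synthetic trials stand in for EEG, and the companion time-band search is omitted):

      import numpy as np
      from scipy.signal import butter, filtfilt
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      fs = 250.0
      rng = np.random.default_rng(0)
      trials = rng.normal(size=(80, 2, 500))   # trials x channels x samples
      labels = rng.integers(0, 2, size=80)     # left vs. right hand MI

      def band_power_features(trials, lo, hi):
          b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
          filtered = filtfilt(b, a, trials, axis=-1)
          return np.log(np.var(filtered, axis=-1))   # (trials, channels)

      best = None
      for lo in range(4, 32, 4):               # candidate 4 Hz-wide bands
          X = band_power_features(trials, lo, lo + 4)
          acc = cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5).mean()
          if best is None or acc > best[0]:
              best = (acc, lo, lo + 4)
      print(f"Best band: {best[1]}-{best[2]} Hz, CV accuracy {best[0]:.2f}")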

  7. How should children with speech sound disorders be classified? A review and critical evaluation of current classification systems.

    PubMed

    Waring, R; Knight, R

    2013-01-01

    Children with speech sound disorders (SSD) form a heterogeneous group who differ in terms of the severity of their condition, underlying cause, speech errors, involvement of other aspects of the linguistic system and treatment response. To date there is no universal and agreed-upon classification system. Instead, a number of theoretically differing classification systems have been proposed, based on an aetiological (medical) approach, a descriptive-linguistic approach or a processing approach. This review describes the supporting evidence for, and provides a critical evaluation of, the current childhood SSD classification systems. Descriptions of the major specific approaches to classification are reviewed and research papers supporting the reliability and validity of the systems are evaluated. Three specific paediatric SSD classification systems are identified as potentially useful in classifying children with SSD into homogeneous subgroups: the aetiology-based Speech Disorders Classification System, the descriptive-linguistic Differential Diagnosis system, and the processing-based Psycholinguistic Framework. The Differential Diagnosis system has a growing body of empirical support from clinical population studies, across-language error pattern studies and treatment efficacy studies. The Speech Disorders Classification System is currently a research tool with eight proposed subgroups. The Psycholinguistic Framework is a potential bridge to linking cause and surface-level speech errors. There is a need for a universally agreed-upon classification system that is useful to clinicians and researchers. The resulting classification system needs to be robust, reliable and valid. A universal classification system would allow for improved tailoring of treatments to subgroups of SSD which may, in turn, lead to improved treatment efficacy. © 2012 Royal College of Speech and Language Therapists.

  8. A systematic review of the Robson classification for caesarean section: what works, doesn't work and how to improve it.

    PubMed

    Betrán, Ana Pilar; Vindevoghel, Nadia; Souza, Joao Paulo; Gülmezoglu, A Metin; Torloni, Maria Regina

    2014-01-01

    Caesarean section (CS) rates continue to increase worldwide without a clear understanding of the main drivers and consequences. The lack of a standardized, internationally accepted classification system to monitor and compare CS rates is one of the barriers to a better understanding of this trend. Robson's 10-group classification is based on simple obstetrical parameters (parity, previous CS, gestational age, onset of labour, fetal presentation and number of fetuses) and does not involve the indication for CS. This classification has become very popular in many countries over recent years. We conducted a systematic review to synthesize users' experience of implementing this classification and the adaptations they proposed. Four electronic databases were searched. A three-step thematic synthesis approach and a qualitative metasummary method were used. 232 unique reports were identified, 97 were selected for full-text evaluation and 73 were included. These publications reported on the use of Robson's classification in over 33 million women from 31 countries. According to users, the main strengths of the classification are its simplicity, robustness, reliability and flexibility. However, missing data, misclassification of women and the lack of definition of, or consensus on, core variables of the classification are challenges. To improve the classification for local use and to decrease heterogeneity within groups, several subdivisions of each of the 10 groups have been proposed. Group 5 (women with a previous CS) received the largest number of suggestions. The use of the Robson classification is increasing rapidly and spontaneously worldwide. Despite some limitations, this classification is easy to implement and interpret. Several suggested modifications could be useful to help facilities and countries as they work towards its implementation.

  9. A Systematic Review of the Robson Classification for Caesarean Section: What Works, Doesn't Work and How to Improve It

    PubMed Central

    Betrán, Ana Pilar; Vindevoghel, Nadia; Souza, Joao Paulo; Gülmezoglu, A. Metin; Torloni, Maria Regina

    2014-01-01

    Background Caesarean section (CS) rates continue to increase worldwide without a clear understanding of the main drivers and consequences. The lack of a standardized, internationally accepted classification system to monitor and compare CS rates is one of the barriers to a better understanding of this trend. Robson's 10-group classification is based on simple obstetrical parameters (parity, previous CS, gestational age, onset of labour, fetal presentation and number of fetuses) and does not involve the indication for CS. This classification has become very popular in many countries over recent years. We conducted a systematic review to synthesize users' experience of implementing this classification and the adaptations they proposed. Methods Four electronic databases were searched. A three-step thematic synthesis approach and a qualitative metasummary method were used. Results 232 unique reports were identified, 97 were selected for full-text evaluation and 73 were included. These publications reported on the use of Robson's classification in over 33 million women from 31 countries. According to users, the main strengths of the classification are its simplicity, robustness, reliability and flexibility. However, missing data, misclassification of women and the lack of definition of, or consensus on, core variables of the classification are challenges. To improve the classification for local use and to decrease heterogeneity within groups, several subdivisions of each of the 10 groups have been proposed. Group 5 (women with a previous CS) received the largest number of suggestions. Conclusions The use of the Robson classification is increasing rapidly and spontaneously worldwide. Despite some limitations, this classification is easy to implement and interpret. Several suggested modifications could be useful to help facilities and countries as they work towards its implementation. PMID:24892928

  10. Clinicopathological analysis of biopsy-proven diabetic nephropathy based on the Japanese classification of diabetic nephropathy.

    PubMed

    Furuichi, Kengo; Shimizu, Miho; Yuzawa, Yukio; Hara, Akinori; Toyama, Tadashi; Kitamura, Hiroshi; Suzuki, Yoshiki; Sato, Hiroshi; Uesugi, Noriko; Ubara, Yoshifumi; Hohino, Junichi; Hisano, Satoshi; Ueda, Yoshihiko; Nishi, Shinichi; Yokoyama, Hitoshi; Nishino, Tomoya; Kohagura, Kentaro; Ogawa, Daisuke; Mise, Koki; Shibagaki, Yugo; Makino, Hirofumi; Matsuo, Seiichi; Wada, Takashi

    2018-06-01

    The Japanese classification of diabetic nephropathy reflects the risks of mortality, cardiovascular events and kidney prognosis and is clinically useful. Furthermore, pathological findings of diabetic nephropathy are useful for predicting prognoses. In this study, we evaluated the characteristics of pathological findings in relation to the Japanese classification of diabetic nephropathy and their ability to predict prognosis. The clinical data of 600 biopsy-confirmed diabetic nephropathy patients were collected retrospectively from 13 centers across Japan. Composite kidney events, kidney death, cardiovascular events, all-cause mortality, and the rate of decline in estimated GFR (eGFR) were evaluated based on the Japanese classification of diabetic nephropathy. The median observation period was 70.4 (IQR 20.9-101.0) months. Each stage had specific characteristic pathological findings. Diffuse lesions, interstitial fibrosis and/or tubular atrophy (IFTA), interstitial cell infiltration, arteriolar hyalinosis, and intimal thickening were detected in more than half the cases, even in Stage 1. An analysis of the impacts on outcomes in all data showed that the hazard ratios of diffuse lesions, widening of the subendothelial space, exudative lesions, mesangiolysis, IFTA, and interstitial cell infiltration were 2.7, 2.8, 2.7, 2.6, 3.5, and 3.7, respectively. The median rate of eGFR decline across all cases was 5.61 mL/min/1.73 m²/year, and the median rate of declining kidney function within 2 years after kidney biopsy was 24.0%. This study indicated that pathological findings could categorize the high-risk group as well as the Japanese classification of diabetic nephropathy does. Further study using biopsy specimens is required to clarify the pathogenesis of diabetic kidney disease.

  11. Event and Apparent Horizon Finders for 3 + 1 Numerical Relativity.

    PubMed

    Thornburg, Jonathan

    2007-01-01

    Event and apparent horizons are key diagnostics for the presence and properties of black holes. In this article I review numerical algorithms and codes for finding event and apparent horizons in numerically-computed spacetimes, focusing on calculations done using the 3 + 1 ADM formalism. The event horizon of an asymptotically-flat spacetime is the boundary between those events from which a future-pointing null geodesic can reach future null infinity and those events from which no such geodesic exists. The event horizon is a (continuous) null surface in spacetime. The event horizon is defined nonlocally in time: it is a global property of the entire spacetime and must be found in a separate post-processing phase after all (or at least the nonstationary part) of spacetime has been numerically computed. There are three basic algorithms for finding event horizons, based on integrating null geodesics forwards in time, integrating null geodesics backwards in time, and integrating null surfaces backwards in time. The last of these is generally the most efficient and accurate. In contrast to an event horizon, an apparent horizon is defined locally in time in a spacelike slice and depends only on data in that slice, so it can be (and usually is) found during the numerical computation of a spacetime. A marginally outer trapped surface (MOTS) in a slice is a smooth closed 2-surface whose future-pointing outgoing null geodesics have zero expansion Θ. An apparent horizon is then defined as a MOTS not contained in any other MOTS. The MOTS condition is a nonlinear elliptic partial differential equation (PDE) for the surface shape, containing the ADM 3-metric, its spatial derivatives, and the extrinsic curvature as coefficients. Most "apparent horizon" finders actually find MOTSs. There are a large number of apparent horizon finding algorithms, with differing trade-offs between speed, robustness, accuracy, and ease of programming. In axisymmetry, shooting algorithms work well and are fairly easy to program. In slices with no continuous symmetries, spectral integral-iteration algorithms and elliptic-PDE algorithms are fast and accurate, but require good initial guesses to converge. In many cases, Schnetter's "pretracking" algorithm can greatly improve an elliptic-PDE algorithm's robustness. Flow algorithms are generally quite slow but can be very robust in their convergence. Minimization methods are slow and relatively inaccurate in the context of a finite differencing simulation, but in a spectral code they can be relatively faster and more robust.

  12. Classification of proteins with shared motifs and internal repeats in the ECOD database

    PubMed Central

    Kinch, Lisa N.; Liao, Yuxing

    2016-01-01

    Proteins and their domains evolve by a set of events commonly including the duplication and divergence of small motifs. The presence of short repetitive regions in domains has generally constituted a difficult case for structural domain classifications and their hierarchies. We developed the Evolutionary Classification Of protein Domains (ECOD) in part to implement a new schema for the classification of these types of proteins. Here we document the ways in which ECOD classifies proteins with small internal repeats, widespread functional motifs, and assemblies of small domain‐like fragments in its evolutionary schema. We illustrate the ways in which the structural genomics project impacted the classification and characterization of new structural domains and sequence families over the decade. PMID:26833690

  13. Potential of turbidity monitoring for real time control of pollutant discharge in sewers during rainfall events.

    PubMed

    Lacour, C; Joannis, C; Gromaire, M-C; Chebbo, G

    2009-01-01

    Turbidity sensors can be used to continuously monitor the evolution of pollutant mass discharge. For two sites within the Paris combined sewer system, continuous turbidity, conductivity and flow data were recorded at one-minute time intervals over a one-year period. This paper is intended to highlight the variability in turbidity dynamics during wet weather. For each storm event, turbidity response aspects were analysed through different classifications. The correlation between classification and common parameters, such as the antecedent dry weather period, total event volume per impervious hectare and both the mean and maximum hydraulic flow for each event, was also studied. Moreover, the dynamics of flow and turbidity signals were compared at the event scale. No simple relation between turbidity responses, hydraulic flow dynamics and the chosen parameters was derived from this effort. Knowledge of turbidity dynamics could therefore potentially improve wet weather management, especially when using pollution-based real-time control (P-RTC) since turbidity contains information not included in hydraulic flow dynamics and not readily predictable from such dynamics.

  14. Rough Set Soft Computing Cancer Classification and Network: One Stone, Two Birds

    PubMed Central

    Zhang, Yue

    2010-01-01

    Gene expression profiling provides tremendous information to help unravel the complexity of cancer. The selection of the most informative genes from huge noise for cancer classification has taken centre stage, along with predicting the function of such identified genes and the construction of direct gene regulatory networks at different system levels with a tuneable parameter. A new study by Wang and Gotoh described a novel Variable Precision Rough Sets-rooted robust soft computing method to successfully address these problems and has yielded some new insights. The significance of this progress and its perspectives will be discussed in this article. PMID:20706619

  15. A hierarchical classification method for finger knuckle print recognition

    NASA Astrophysics Data System (ADS)

    Kong, Tao; Yang, Gongping; Yang, Lu

    2014-12-01

    Finger knuckle print has recently been shown to be an effective biometric. In this paper, we propose a hierarchical classification method for finger knuckle print recognition, which is rooted in traditional score-level fusion methods. In the proposed method, we first take the Gabor feature as the basic feature for finger knuckle print recognition, and a new decision rule is then defined based on a predefined threshold. Finally, the minor feature, the speeded-up robust feature (SURF), is applied to those users who cannot be recognized by the basic feature. Extensive experiments are performed to evaluate the proposed method, and the experimental results show that it achieves promising performance.
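
    A rough sketch of the two-stage decision rule described above, with placeholder score functions and threshold value (the paper's actual matcher details are not given in the abstract):

        # Hierarchical decision sketch: accept a basic-feature (Gabor) match when its
        # score clears the threshold; otherwise fall back to the minor (SURF) feature.
        def recognize(probe, gallery, gabor_score, surf_score, threshold=0.85):
            """Return the best gallery identity; use SURF when Gabor is unsure."""
            scores = {gid: gabor_score(probe, tmpl) for gid, tmpl in gallery.items()}
            best_id, best = max(scores.items(), key=lambda kv: kv[1])
            if best >= threshold:                    # confident basic-feature match
                return best_id
            scores = {gid: surf_score(probe, tmpl) for gid, tmpl in gallery.items()}
            return max(scores, key=scores.get)       # minor-feature stage

        # Dummy templates and scorers for demonstration only.
        demo_gallery = {"alice": 0.2, "bob": 0.7}
        print(recognize(0.65, demo_gallery,
                        gabor_score=lambda p, t: 1 - abs(p - t),
                        surf_score=lambda p, t: 1 - abs(p - t) ** 2))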

  16. Structure of the human MLH1 N-terminus: implications for predisposition to Lynch syndrome

    DOE PAGES

    Wu, Hong; Zeng, Hong; Lam, Robert; ...

    2015-08-01

    Mismatch repair prevents the accumulation of erroneous insertions/deletions and non-Watson–Crick base pairs in the genome. Pathogenic mutations in the MLH1 gene are associated with a predisposition to Lynch and Turcot's syndromes. Although genetic testing for these mutations is available, robust classification of variants requires strong clinical and functional support. Here, the first structure of the N-terminus of human MLH1, determined by X-ray crystallography, is described. The structure shares a high degree of similarity with previously determined prokaryotic MLH1 homologs; however, it affords a more accurate platform for the classification of MLH1 variants.

  17. Advanced Steel Microstructural Classification by Deep Learning Methods.

    PubMed

    Azimi, Seyed Majid; Britz, Dominik; Engstler, Michael; Fritz, Mario; Mücklich, Frank

    2018-02-01

    The inner structure of a material is called its microstructure. It stores the genesis of a material and determines all of its physical and chemical properties. While microstructural characterization is widespread and well known, microstructural classification is mostly done manually by human experts, which gives rise to uncertainties due to subjectivity. Since the microstructure can be a combination of different phases or constituents with complex substructures, its automatic classification is very challenging and only a few prior studies exist. Prior works focused on features designed and engineered by experts and classified microstructures separately from the feature extraction step. Recently, Deep Learning methods have shown strong performance in vision applications by learning the features from data together with the classification step. In this work, we propose a Deep Learning method for microstructural classification, using the example of certain microstructural constituents of low carbon steel. This novel method employs pixel-wise segmentation via a Fully Convolutional Neural Network (FCNN) accompanied by a max-voting scheme. Our system achieves 93.94% classification accuracy, drastically outperforming the state-of-the-art method's 48.89% accuracy. Beyond the strong performance of our method, this line of research offers a more robust and, above all, objective way to approach the difficult task of steel quality assessment.
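
    A generic reading of the max-voting step: after pixel-wise segmentation, each microstructural object is assigned its majority pixel label. The sketch below assumes NumPy arrays for the predicted label map and the object mask (the data are synthetic stand-ins):

        import numpy as np

        def max_vote(pixel_labels, object_mask, n_classes):
            """Assign an object the class that wins the per-pixel vote inside its mask."""
            votes = np.bincount(pixel_labels[object_mask], minlength=n_classes)
            return votes.argmax()

        labels = np.random.default_rng(2).integers(0, 4, size=(64, 64))  # FCNN output stand-in
        mask = np.zeros((64, 64), dtype=bool)
        mask[10:30, 10:30] = True                                        # one segmented object
        print(max_vote(labels, mask, n_classes=4))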

  18. The morphology and classification of α ganglion cells in the rat retinae: a fractal analysis study.

    PubMed

    Jelinek, Herbert F; Ristanović, Dušan; Milošević, Nebojša T

    2011-09-30

    Rat retinal ganglion cells have been proposed to consist of a varying number of subtypes. Dendritic morphology is an essential aspect of classification and a necessary step toward understanding structure-function relationships of retinal ganglion cells. This study aimed to use a heuristic classification procedure in combination with box-counting analysis to classify the alpha ganglion cells of the rat retinae based on their dendritic branching pattern, and to investigate morphological changes with retinal eccentricity. According to the complexity of their dendritic branching pattern, the cells could be divided into two groups: cells with a simple dendritic pattern (box dimension lower than 1.390) and cells with a complex dendritic pattern (box dimension higher than 1.390). Both were further divided into two subtypes according to their stratification within the inner plexiform layer. In the present study we have shown that alpha rat RGCs can be further classified by their dendritic branching complexity. This extends previous reports that fractal analysis can be used successfully in neuronal classification, and in particular shows that the fractal dimension is a robust and sensitive tool for the classification of retinal ganglion cells. A hypothesis of the possible functional significance of our classification scheme is also discussed. Copyright © 2011 Elsevier B.V. All rights reserved.
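
    The box-counting dimension driving the simple/complex split can be estimated as the slope of log N(s) against log(1/s) over box sizes s. A minimal sketch (the random test image is only a stand-in for a binarized dendritic mask, so the printed dimension is not biologically meaningful):

        import numpy as np

        def box_counting_dimension(img):
            """Estimate the box-counting (fractal) dimension of a square binary image."""
            assert img.shape[0] == img.shape[1]
            sizes = 2 ** np.arange(1, int(np.log2(img.shape[0])))
            counts = []
            for s in sizes:
                # Count boxes of side s containing at least one foreground pixel.
                n = img.shape[0] // s
                boxed = img[:n * s, :n * s].reshape(n, s, n, s).any(axis=(1, 3))
                counts.append(boxed.sum())
            slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
            return slope

        img = np.random.default_rng(3).random((256, 256)) > 0.7  # stand-in mask
        d = box_counting_dimension(img)
        print("simple" if d < 1.390 else "complex", f"(D = {d:.3f})")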

  19. Integrated feature extraction and selection for neuroimage classification

    NASA Astrophysics Data System (ADS)

    Fan, Yong; Shen, Dinggang

    2009-02-01

    Feature extraction and selection are of great importance in neuroimage classification for identifying informative features and reducing feature dimensionality, which are generally implemented as two separate steps. This paper presents an integrated feature extraction and selection algorithm with two iterative steps: constrained subspace learning based feature extraction and support vector machine (SVM) based feature selection. The subspace learning based feature extraction focuses on the brain regions with higher possibility of being affected by the disease under study, while the possibility of brain regions being affected by disease is estimated by the SVM based feature selection, in conjunction with SVM classification. This algorithm can not only take into account the inter-correlation among different brain regions, but also overcome the limitation of traditional subspace learning based feature extraction methods. To achieve robust performance and optimal selection of parameters involved in feature extraction, selection, and classification, a bootstrapping strategy is used to generate multiple versions of training and testing sets for parameter optimization, according to the classification performance measured by the area under the ROC (receiver operating characteristic) curve. The integrated feature extraction and selection method is applied to a structural MR image based Alzheimer's disease (AD) study with 98 non-demented and 100 demented subjects. Cross-validation results indicate that the proposed algorithm can improve performance of the traditional subspace learning based classification.

  20. A False Alarm Reduction Method for a Gas Sensor Based Electronic Nose

    PubMed Central

    Rahman, Mohammad Mizanur; Suksompong, Prapun; Toochinda, Pisanu; Taparugssanagorn, Attaphongse

    2017-01-01

    Electronic noses (E-Noses) are becoming popular for food and fruit quality assessment due to their robustness and repeated usability without fatigue, unlike human experts. An E-Nose equipped with classification algorithms that have open-ended classification boundaries, such as the k-nearest neighbor (k-NN), support vector machine (SVM), and multilayer perceptron neural network (MLPNN), is found to suffer from false classification errors on irrelevant odor data. To reduce false classification and misclassification errors, and to improve correct rejection performance, algorithms with a hyperspheric boundary, such as a radial basis function neural network (RBFNN) and a generalized regression neural network (GRNN) with a Gaussian activation function in the hidden layer, should be used. The simulation results presented in this paper show that the GRNN has greater correct classification efficiency and false alarm reduction capability than the RBFNN. As the design of a GRNN and an RBFNN is complex and expensive due to the large number of neurons required, a simple hyperspheric classification method based on the minimum, maximum, and mean (MMM) values of each class of the training dataset is presented. The MMM algorithm is simple, fast, and efficient in correctly classifying data of the training classes and correctly rejecting data of extraneous odors, thereby reducing false alarms. PMID:28895910

  1. A False Alarm Reduction Method for a Gas Sensor Based Electronic Nose.

    PubMed

    Rahman, Mohammad Mizanur; Charoenlarpnopparut, Chalie; Suksompong, Prapun; Toochinda, Pisanu; Taparugssanagorn, Attaphongse

    2017-09-12

    Electronic noses (E-Noses) are becoming popular for food and fruit quality assessment due to their robustness and repeated usability without fatigue, unlike human experts. An E-Nose equipped with classification algorithms that have open-ended classification boundaries, such as the k-nearest neighbor (k-NN), support vector machine (SVM), and multilayer perceptron neural network (MLPNN), is found to suffer from false classification errors on irrelevant odor data. To reduce false classification and misclassification errors, and to improve correct rejection performance, algorithms with a hyperspheric boundary, such as a radial basis function neural network (RBFNN) and a generalized regression neural network (GRNN) with a Gaussian activation function in the hidden layer, should be used. The simulation results presented in this paper show that the GRNN has greater correct classification efficiency and false alarm reduction capability than the RBFNN. As the design of a GRNN and an RBFNN is complex and expensive due to the large number of neurons required, a simple hyperspheric classification method based on the minimum, maximum, and mean (MMM) values of each class of the training dataset is presented. The MMM algorithm is simple, fast, and efficient in correctly classifying data of the training classes and correctly rejecting data of extraneous odors, thereby reducing false alarms.
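
    A minimal sketch of an MMM-style classifier as described: each class keeps per-feature minimum/maximum bounds plus a mean, a sample inside a class's min-max box is assigned to it, and a sample inside no box is rejected as an extraneous odor. The nearest-mean tie-break and the synthetic data are assumptions for illustration:

        import numpy as np

        class MMMClassifier:
            def fit(self, X, y):
                # Per-class (min, max, mean) statistics over the training features.
                self.stats = {c: (X[y == c].min(0), X[y == c].max(0), X[y == c].mean(0))
                              for c in np.unique(y)}
                return self

            def predict(self, x):
                inside = [c for c, (lo, hi, _) in self.stats.items()
                          if np.all(x >= lo) and np.all(x <= hi)]
                if not inside:
                    return None                      # reject: this reduces false alarms
                return min(inside, key=lambda c: np.linalg.norm(x - self.stats[c][2]))

        rng = np.random.default_rng(4)
        X = np.vstack([rng.normal(0, 0.1, (20, 4)), rng.normal(1, 0.1, (20, 4))])
        y = np.array([0] * 20 + [1] * 20)
        clf = MMMClassifier().fit(X, y)
        print(clf.predict(np.full(4, 0.02)), clf.predict(np.full(4, 5.0)))  # 0, None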

  2. Classification of Strawberry Fruit Shape by Machine Learning

    NASA Astrophysics Data System (ADS)

    Ishikawa, T.; Hayashi, A.; Nagamatsu, S.; Kyutoku, Y.; Dan, I.; Wada, T.; Oku, K.; Saeki, Y.; Uto, T.; Tanabata, T.; Isobe, S.; Kochi, N.

    2018-05-01

    Shape is one of the most important traits of agricultural products due to its relationships with the quality, quantity, and value of the products. For strawberries, nine types of fruit shape were defined and classified by humans based on sample patterns of the nine types. In this study, we tested the classification of strawberry shapes by machine learning in order to increase the accuracy of the classification, and to introduce computerization into this field. Four types of descriptors were extracted from digital images of strawberries: (1) Measured Values (MVs), including the length of the contour line, the area, the fruit length and width, and the fruit width/length ratio; (2) the Ellipse Similarity Index (ESI); (3) Elliptic Fourier Descriptors (EFDs); and (4) Chain Code Subtraction (CCS). We used these descriptors for the classification test along with the random forest approach, and eight of the nine shape types were classified with combinations of MVs + CCS + EFDs. CCS is a descriptor that adds human knowledge to the chain codes, and it showed higher robustness in classification than the other descriptors. Our results suggest that machine learning can classify fruit shapes with high accuracy. We will attempt to increase the classification accuracy further and apply the machine learning methods to other plant species.

  3. Towards affordable biomarkers of frontotemporal dementia: A classification study via network's information sharing.

    PubMed

    Dottori, Martin; Sedeño, Lucas; Martorell Caro, Miguel; Alifano, Florencia; Hesse, Eugenia; Mikulan, Ezequiel; García, Adolfo M; Ruiz-Tagle, Amparo; Lillo, Patricia; Slachevsky, Andrea; Serrano, Cecilia; Fraiman, Daniel; Ibanez, Agustin

    2017-06-19

    Developing effective and affordable biomarkers for dementias is critical given the difficulty to achieve early diagnosis. In this sense, electroencephalographic (EEG) methods offer promising alternatives due to their low cost, portability, and growing robustness. Here, we relied on EEG signals and a novel information-sharing method to study resting-state connectivity in patients with behavioral variant frontotemporal dementia (bvFTD) and controls. To evaluate the specificity of our results, we also tested Alzheimer's disease (AD) patients. The classification power of the ensuing connectivity patterns was evaluated through a supervised classification algorithm (support vector machine). In addition, we compared the classification power yielded by (i) functional connectivity, (ii) relevant neuropsychological tests, and (iii) a combination of both. BvFTD patients exhibited a specific pattern of hypoconnectivity in mid-range frontotemporal links, which showed no alterations in AD patients. These functional connectivity alterations in bvFTD were replicated with a low-density EEG setting (20 electrodes). Moreover, while neuropsychological tests yielded acceptable discrimination between bvFTD and controls, the addition of connectivity results improved classification power. Finally, classification between bvFTD and AD patients was better when based on connectivity than on neuropsychological measures. Taken together, such findings underscore the relevance of EEG measures as potential biomarker signatures for clinical settings.

  4. Quantum theory as plausible reasoning applied to data obtained by robust experiments.

    PubMed

    De Raedt, H; Katsnelson, M I; Michielsen, K

    2016-05-28

    We review recent work that employs the framework of logical inference to establish a bridge between data gathered through experiments and their objective description in terms of human-made concepts. It is shown that logical inference applied to experiments for which the observed events are independent and for which the frequency distribution of these events is robust with respect to small changes of the conditions under which the experiments are carried out yields, without introducing any concept of quantum theory, the quantum theoretical description in terms of the Schrödinger or the Pauli equation, the Stern-Gerlach or Einstein-Podolsky-Rosen-Bohm experiments. The extraordinary descriptive power of quantum theory then follows from the fact that it is plausible reasoning, that is common sense, applied to reproducible and robust experimental data. © 2016 The Author(s).

  5. Composite Biomarkers Derived from Micro-Electrode Array Measurements and Computer Simulations Improve the Classification of Drug-Induced Channel Block.

    PubMed

    Tixier, Eliott; Raphel, Fabien; Lombardi, Damiano; Gerbeau, Jean-Frédéric

    2017-01-01

    The Micro-Electrode Array (MEA) device enables high-throughput electrophysiology measurements that are less labor-intensive than patch-clamp based techniques. Combined with human-induced pluripotent stem cell cardiomyocytes (hiPSC-CM), it represents a new and promising paradigm for automated and accurate in vitro drug safety evaluation. In this article, the following question is addressed: which features of the MEA signals should be measured to better classify the effects of drugs? A framework for the classification of drugs using MEA measurements is proposed. The classification is based on the ion channel blockades induced by the drugs. It relies on an in silico electrophysiology model of the MEA, a feature selection algorithm and automatic classification tools. An in silico model of the MEA is developed and is used to generate synthetic measurements. An algorithm that extracts MEA measurement features designed to perform well in a classification context is described. These features are called composite biomarkers. A state-of-the-art machine learning program is used to carry out the classification of drugs using experimental MEA measurements. The experiments are carried out using five different drugs: mexiletine, flecainide, diltiazem, moxifloxacin, and dofetilide. We show that the composite biomarkers outperform the classical ones in different classification scenarios. We show that using both synthetic and experimental MEA measurements improves the robustness of the composite biomarkers and that the classification scores are increased.

  6. Event-Based User Classification in Weibo Media

    PubMed Central

    Wang, Wendong; Cheng, Shiduan; Que, Xirong

    2014-01-01

    Weibo media, known as real-time microblogging services, have attracted massive attention and support from social network users. The Weibo platform offers people an opportunity to access information and significantly changes the way people acquire and disseminate information. Meanwhile, it enables people to respond to social events in a more convenient way. Much of the information in Weibo media is related to specific events. Users who post different content and exhibit different behaviors or attitudes may contribute differently to a specific event. Therefore, automatically classifying the large number of uncategorized social circles generated in Weibo media from the perspective of events is a promising task. Under this circumstance, in order to effectively organize and manage the huge number of users, and thereby further manage their contents, we address the task of user classification in a more granular, event-based approach in this paper. By analyzing real data collected from Sina Weibo, we investigate Weibo's properties and utilize both content information and social network information to classify the numerous users into four primary groups: celebrities, organizations/media accounts, grassroots stars, and ordinary individuals. The experimental results show that our method identifies the user categories accurately. PMID:25133235

  7. Event-based user classification in Weibo media.

    PubMed

    Guo, Liang; Wang, Wendong; Cheng, Shiduan; Que, Xirong

    2014-01-01

    Weibo media, known as real-time microblogging services, have attracted massive attention and support from social network users. The Weibo platform offers people an opportunity to access information and significantly changes the way people acquire and disseminate information. Meanwhile, it enables people to respond to social events in a more convenient way. Much of the information in Weibo media is related to specific events. Users who post different content and exhibit different behaviors or attitudes may contribute differently to a specific event. Therefore, automatically classifying the large number of uncategorized social circles generated in Weibo media from the perspective of events is a promising task. Under this circumstance, in order to effectively organize and manage the huge number of users, and thereby further manage their contents, we address the task of user classification in a more granular, event-based approach in this paper. By analyzing real data collected from Sina Weibo, we investigate Weibo's properties and utilize both content information and social network information to classify the numerous users into four primary groups: celebrities, organizations/media accounts, grassroots stars, and ordinary individuals. The experimental results show that our method identifies the user categories accurately.

  8. myBlackBox: Blackbox Mobile Cloud Systems for Personalized Unusual Event Detection.

    PubMed

    Ahn, Junho; Han, Richard

    2016-05-23

    We demonstrate the feasibility of constructing a novel and practical real-world mobile cloud system, called myBlackBox, that efficiently fuses multimodal smartphone sensor data to identify and log unusual personal events in mobile users' daily lives. The system incorporates a hybrid architectural design that combines unsupervised classification of audio, accelerometer and location data with supervised joint fusion classification to achieve high accuracy, customization, convenience and scalability. We show the feasibility of myBlackBox by implementing and evaluating this end-to-end system that combines Android smartphones with cloud servers, deployed for 15 users over a one-month period.

  9. myBlackBox: Blackbox Mobile Cloud Systems for Personalized Unusual Event Detection

    PubMed Central

    Ahn, Junho; Han, Richard

    2016-01-01

    We demonstrate the feasibility of constructing a novel and practical real-world mobile cloud system, called myBlackBox, that efficiently fuses multimodal smartphone sensor data to identify and log unusual personal events in mobile users’ daily lives. The system incorporates a hybrid architectural design that combines unsupervised classification of audio, accelerometer and location data with supervised joint fusion classification to achieve high accuracy, customization, convenience and scalability. We show the feasibility of myBlackBox by implementing and evaluating this end-to-end system that combines Android smartphones with cloud servers, deployed for 15 users over a one-month period. PMID:27223292

  10. Using Ontologies for the Online Recognition of Activities of Daily Living

    PubMed Central

    2018-01-01

    The recognition of activities of daily living has been an important research area in recent years. The process of activity recognition aims to recognize the actions of one or more people in a smart environment, in which a set of sensors has been deployed. Usually, all the events produced during each activity are taken into account to develop the classification models. However, the instant at which an activity starts is unknown in a real environment. Therefore, only the most recent events are usually used. In this paper, we use statistics to determine the most appropriate length of that interval for each type of activity. In addition, we use ontologies to automatically generate features that serve as the input for the supervised learning algorithms that produce the classification model. The features are formed by combining the entities in the ontology, such as concepts and properties. The results obtained show a significant increase in the accuracy of the classification models generated with respect to the classical approach, in which only the state of the sensors is taken into account. Moreover, the results obtained in a simulation of a real environment under an event-based segmentation also show an improvement in most activities. PMID:29662011

  11. 3 CFR 13526 - Executive Order 13526 of December 29, 2009. Classified National Security Information

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... of weapons of mass destruction. Sec. 1.5. Duration of Classification. (a) At the time of original... intelligence source or key design concepts of weapons of mass destruction, the date or event shall not exceed the time frame established in paragraph (b) of this section. (b) If the original classification...

  12. Effects of Race and Precipitating Event on Suicide versus Nonsuicide Death Classification in a College Sample

    ERIC Educational Resources Information Center

    Walker, Rheeda L.; Flowers, Kelci C.

    2011-01-01

    Race group differences in suicide death classification in a sample of 109 Black and White university students were examined. Participants were randomly assigned to read three vignettes for which the vignette subjects' race (only) varied. The vignettes each described a circumstance (terminal illness, academic failure, or relationship difficulties)…

  13. Land cover classification in multispectral imagery using clustering of sparse approximations over learned feature dictionaries

    DOE PAGES

    Moody, Daniela I.; Brumby, Steven P.; Rowland, Joel C.; ...

    2014-12-09

    We present results from an ongoing effort to extend neuromimetic machine vision algorithms to multispectral data using adaptive signal processing combined with compressive sensing and machine learning techniques. Our goal is to develop a robust classification methodology that will allow for automated discretization of the landscape into distinct units based on attributes such as vegetation, surface hydrological properties, and topographic/geomorphic characteristics. We use a Hebbian learning rule to build spectral-textural dictionaries that are tailored for classification. We learn our dictionaries from millions of overlapping multispectral image patches and then use a pursuit search to generate classification features. Land cover labels are automatically generated using unsupervised clustering of sparse approximations (CoSA). We demonstrate our method on multispectral WorldView-2 data from a coastal plain ecosystem in Barrow, Alaska. We explore learning from both raw multispectral imagery and normalized band difference indices. We explore a quantitative metric to evaluate the spectral properties of the clusters in order to potentially aid in assigning land cover categories to the cluster labels. In this study, our results suggest CoSA is a promising approach to unsupervised land cover classification in high-resolution satellite imagery.
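
    The authors learn their dictionaries with a Hebbian rule; purely as a CoSA-flavoured illustration, the sketch below substitutes scikit-learn stand-ins: learn a patch dictionary, sparse-code the patches, then cluster the codes into unsupervised land-cover labels (patch data and all parameter values are invented):

        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(5)
        patches = rng.random((1000, 5 * 5 * 4))       # 5x5 patches, 4 bands, flattened

        dico = MiniBatchDictionaryLearning(n_components=32,
                                           transform_algorithm="omp",
                                           transform_n_nonzero_coefs=5,
                                           random_state=0)
        codes = dico.fit(patches).transform(patches)  # sparse approximations of patches
        labels = KMeans(n_clusters=6, n_init=10,
                        random_state=0).fit_predict(codes)
        print(np.bincount(labels))                    # unsupervised land-cover labels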

  14. Learning accurate very fast decision trees from uncertain data streams

    NASA Astrophysics Data System (ADS)

    Liang, Chunquan; Zhang, Yang; Shi, Peng; Hu, Zhengguo

    2015-12-01

    Most existing works on data stream classification assume the streaming data is precise and definite. Such assumption, however, does not always hold in practice, since data uncertainty is ubiquitous in data stream applications due to imprecise measurement, missing values, privacy protection, etc. The goal of this paper is to learn accurate decision tree models from uncertain data streams for classification analysis. On the basis of very fast decision tree (VFDT) algorithms, we proposed an algorithm for constructing an uncertain VFDT tree with classifiers at tree leaves (uVFDTc). The uVFDTc algorithm can exploit uncertain information effectively and efficiently in both the learning and the classification phases. In the learning phase, it uses Hoeffding bound theory to learn from uncertain data streams and yield fast and reasonable decision trees. In the classification phase, at tree leaves it uses uncertain naive Bayes (UNB) classifiers to improve the classification performance. Experimental results on both synthetic and real-life datasets demonstrate the strong ability of uVFDTc to classify uncertain data streams. The use of UNB at tree leaves has improved the performance of uVFDTc, especially the any-time property, the benefit of exploiting uncertain information, and the robustness against uncertainty.
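
    The Hoeffding bound at the heart of VFDT-style learners states that, with probability 1 - δ, the true mean of a variable with range R lies within ε = sqrt(R² ln(1/δ) / (2n)) of the mean observed over n samples. A sketch of the standard split test (the numbers are illustrative and this is the generic VFDT criterion, not uVFDTc specifically):

        import numpy as np

        def hoeffding_bound(value_range, delta, n):
            """Hoeffding epsilon: the true mean lies within eps of the sample mean
            with probability 1 - delta, for a variable with the given range."""
            return np.sqrt(value_range ** 2 * np.log(1.0 / delta) / (2.0 * n))

        # Split when the gain gap between the two best attributes exceeds eps.
        g_best, g_second, n = 0.42, 0.31, 800          # illustrative statistics
        eps = hoeffding_bound(np.log2(2), delta=1e-6, n=n)  # R = log2(#classes) for info gain
        print(g_best - g_second > eps)                 # True -> split on the best attribute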

  15. Pattern classification using an olfactory model with PCA feature selection in electronic noses: study and application.

    PubMed

    Fu, Jun; Huang, Canqin; Xing, Jianguo; Zheng, Junbao

    2012-01-01

    Biologically-inspired models and algorithms are considered promising sensor array signal processing methods for electronic noses. Feature selection is one of the most important issues for developing robust pattern recognition models in machine learning. This paper describes an investigation into the classification performance of a bionic olfactory model as the dimensionality of the input feature vector (outer factor) and the number of its parallel channels (inner factor) increase. The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets, of three classes of wine derived from different cultivars and of five classes of green tea derived from five different provinces of China, were used for the experiments. In the former case the results showed that the average correct classification rate increased as more principal components were added to the feature vector. In the latter case the results showed that sufficient parallel channels should be reserved in the model to avoid pattern space crowding. We conclude that 6~8 channels of the model, with principal component feature vectors of at least 90% cumulative variance, are adequate for a classification task of 3~5 pattern classes, considering the trade-off between time consumption and classification rate.
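
    The 90% cumulative-variance criterion maps directly onto scikit-learn's PCA, which accepts a variance fraction as n_components; a short sketch on synthetic data (not the wine or tea datasets):

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(6)
        X = rng.standard_normal((150, 16)) @ rng.standard_normal((16, 16))  # correlated features

        pca = PCA(n_components=0.90, svd_solver="full")  # fraction = variance target
        X_reduced = pca.fit_transform(X)                 # keeps just enough components
        print(X_reduced.shape[1], pca.explained_variance_ratio_.cumsum()[-1])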

  16. Justification of Fuzzy Declustering Vector Quantization Modeling in Classification of Genotype-Image Phenotypes

    NASA Astrophysics Data System (ADS)

    Ng, Theam Foo; Pham, Tuan D.; Zhou, Xiaobo

    2010-01-01

    With the fast development of multi-dimensional data compression and pattern classification techniques, vector quantization (VQ) has become a system that allows large reductions in data storage and computational effort. One of the most recent VQ techniques to handle the poor estimation of vector centroids due to biased data from undersampling is the fuzzy declustering-based vector quantization (FDVQ) technique. In this paper, we are therefore motivated to propose a justification of the FDVQ-based hidden Markov model (HMM) by investigating its effectiveness and efficiency in the classification of genotype-image phenotypes. The performance evaluation and comparison of recognition accuracy between the proposed FDVQ-based HMM (FDVQ-HMM) and the well-known LBG (Linde, Buzo, Gray) vector quantization based HMM (LBG-HMM) is carried out. The experimental results show that the performances of FDVQ-HMM and LBG-HMM are very similar. Finally, we justify the competitiveness of FDVQ-HMM in the classification of a cellular phenotype image database by using a hypothesis t-test. As a result, we validate that the FDVQ algorithm is a robust and efficient classification technique in the application of RNAi genome-wide screening image data.

  17. Classification of neocortical interneurons using affinity propagation.

    PubMed

    Santana, Roberto; McGarry, Laura M; Bielza, Concha; Larrañaga, Pedro; Yuste, Rafael

    2013-01-01

    In spite of over a century of research on cortical circuits, it is still unknown how many classes of cortical neurons exist. In fact, neuronal classification is a difficult problem because it is unclear how to designate a neuronal cell class and what are the best characteristics to define them. Recently, unsupervised classifications using cluster analysis based on morphological, physiological, or molecular characteristics, have provided quantitative and unbiased identification of distinct neuronal subtypes, when applied to selected datasets. However, better and more robust classification methods are needed for increasingly complex and larger datasets. Here, we explored the use of affinity propagation, a recently developed unsupervised classification algorithm imported from machine learning, which gives a representative example or exemplar for each cluster. As a case study, we applied affinity propagation to a test dataset of 337 interneurons belonging to four subtypes, previously identified based on morphological and physiological characteristics. We found that affinity propagation correctly classified most of the neurons in a blind, non-supervised manner. Affinity propagation outperformed Ward's method, a current standard clustering approach, in classifying the neurons into 4 subtypes. Affinity propagation could therefore be used in future studies to validly classify neurons, as a first step to help reverse engineer neural circuits.
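
    Affinity propagation is available off the shelf; the sketch below clusters synthetic feature vectors and reports the exemplars, i.e. the representative examples mentioned above (the data are stand-ins for the morphological/physiological measurements of the 337 interneurons):

        import numpy as np
        from sklearn.cluster import AffinityPropagation

        rng = np.random.default_rng(7)
        # Four synthetic "subtypes" of 30 cells each, described by 5 features.
        X = np.vstack([rng.normal(m, 0.3, (30, 5)) for m in (0, 2, 4, 6)])

        ap = AffinityPropagation(random_state=0).fit(X)
        print(len(ap.cluster_centers_indices_))   # number of clusters found
        print(ap.cluster_centers_indices_[:4])    # exemplar rows: one cell per cluster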

  18. Activity classification based on inertial and barometric pressure sensors at different anatomical locations.

    PubMed

    Moncada-Torres, A; Leuenberger, K; Gonzenbach, R; Luft, A; Gassert, R

    2014-07-01

    Miniature, wearable sensor modules are a promising technology to monitor activities of daily living (ADL) over extended periods of time. To assure both user compliance and meaningful results, the selection and placement site of sensors requires careful consideration. We investigated these aspects for the classification of 16 ADL in 6 healthy subjects under laboratory conditions using ReSense, our custom-made inertial measurement unit enhanced with a barometric pressure sensor used to capture activity-related altitude changes. Subjects wore a module on each wrist and ankle, and one on the trunk. Activities comprised whole body movements as well as gross and dextrous upper-limb activities. Wrist-module data outperformed the other locations for the three activity groups. Specifically, overall classification accuracy rates of almost 93% and more than 95% were achieved for the repeated holdout and user-specific validation methods, respectively, for all 16 activities. Including the altitude profile resulted in a considerable improvement of up to 20% in the classification accuracy for stair ascent and descent. The gyroscopes provided no useful information for activity classification under this scheme. The proposed sensor setting could allow for robust long-term activity monitoring with high compliance in different patient populations.

  19. Deep feature extraction and combination for synthetic aperture radar target classification

    NASA Astrophysics Data System (ADS)

    Amrani, Moussa; Jiang, Feng

    2017-10-01

    Feature extraction has always been a difficult problem in the classification performance of synthetic aperture radar automatic target recognition (SAR-ATR). It is very important to select discriminative features to train a classifier, which is a prerequisite. Inspired by the great success of convolutional neural network (CNN), we address the problem of SAR target classification by proposing a feature extraction method, which takes advantage of exploiting the extracted deep features from CNNs on SAR images to introduce more powerful discriminative features and robust representation ability for them. First, the pretrained VGG-S net is fine-tuned on moving and stationary target acquisition and recognition (MSTAR) public release database. Second, after a simple preprocessing is performed, the fine-tuned network is used as a fixed feature extractor to extract deep features from the processed SAR images. Third, the extracted deep features are fused by using a traditional concatenation and a discriminant correlation analysis algorithm. Finally, for target classification, K-nearest neighbors algorithm based on LogDet divergence-based metric learning triplet constraints is adopted as a baseline classifier. Experiments on MSTAR are conducted, and the classification accuracy results demonstrate that the proposed method outperforms the state-of-the-art methods.

  20. Evaluating Support for the Current Classification of Eukaryotic Diversity

    PubMed Central

    Parfrey, Laura Wegener; Barbero, Erika; Lasser, Elyse; Dunthorn, Micah; Bhattacharya, Debashish; Patterson, David J; Katz, Laura A

    2006-01-01

    Perspectives on the classification of eukaryotic diversity have changed rapidly in recent years, as the four eukaryotic groups within the five-kingdom classification—plants, animals, fungi, and protists—have been transformed through numerous permutations into the current system of six “supergroups.” The intent of the supergroup classification system is to unite microbial and macroscopic eukaryotes based on phylogenetic inference. This supergroup approach is increasing in popularity in the literature and is appearing in introductory biology textbooks. We evaluate the stability and support for the current six-supergroup classification of eukaryotes based on molecular genealogies. We assess three aspects of each supergroup: (1) the stability of its taxonomy, (2) the support for monophyly (single evolutionary origin) in molecular analyses targeting a supergroup, and (3) the support for monophyly when a supergroup is included as an out-group in phylogenetic studies targeting other taxa. Our analysis demonstrates that supergroup taxonomies are unstable and that support for groups varies tremendously, indicating that the current classification scheme of eukaryotes is likely premature. We highlight several trends contributing to the instability and discuss the requirements for establishing robust clades within the eukaryotic tree of life. PMID:17194223

  1. Object-based land-cover classification for metropolitan Phoenix, Arizona, using aerial photography

    NASA Astrophysics Data System (ADS)

    Li, Xiaoxiao; Myint, Soe W.; Zhang, Yujia; Galletti, Chritopher; Zhang, Xiaoxiang; Turner, Billie L.

    2014-12-01

    Detailed land-cover mapping is essential for a range of research issues addressed by the sustainability and land system sciences and planning. This study uses an object-based approach to create a 1 m land-cover classification map of the expansive Phoenix metropolitan area through the use of high spatial resolution aerial photography from National Agricultural Imagery Program. It employs an expert knowledge decision rule set and incorporates the cadastral GIS vector layer as auxiliary data. The classification rule was established on a hierarchical image object network, and the properties of parcels in the vector layer were used to establish land cover types. Image segmentations were initially utilized to separate the aerial photos into parcel sized objects, and were further used for detailed land type identification within the parcels. Characteristics of image objects from contextual and geometrical aspects were used in the decision rule set to reduce the spectral limitation of the four-band aerial photography. Classification results include 12 land-cover classes and subclasses that may be assessed from the sub-parcel to the landscape scales, facilitating examination of scale dynamics. The proposed object-based classification method provides robust results, uses minimal and readily available ancillary data, and reduces computational time.

  2. Locally Weighted Score Estimation for Quantile Classification in Binary Regression Models

    PubMed Central

    Rice, John D.; Taylor, Jeremy M. G.

    2016-01-01

    One common use of binary response regression methods is classification based on an arbitrary probability threshold dictated by the particular application. Since this is given to us a priori, it is sensible to incorporate the threshold into our estimation procedure. Specifically, for the linear logistic model, we solve a set of locally weighted score equations, using a kernel-like weight function centered at the threshold. The bandwidth for the weight function is selected by cross validation of a novel hybrid loss function that combines classification error and a continuous measure of divergence between observed and fitted values; other possible cross-validation functions based on more common binary classification metrics are also examined. This work has much in common with robust estimation, but differs from previous approaches in this area in its focus on prediction, specifically classification into high- and low-risk groups. Simulation results are given showing the reduction in error rates that can be obtained with this method when compared with maximum likelihood estimation, especially under certain forms of model misspecification. Analysis of a melanoma data set is presented to illustrate the use of the method in practice. PMID:28018492
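
    The estimator itself solves locally weighted score equations; purely as a rough illustration of the weighting idea, the sketch below refits a logistic model with Gaussian kernel weights concentrated on observations whose fitted probabilities lie near the threshold c (this is not the authors' exact procedure, and c, h and the data are invented):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def locally_weighted_logistic(X, y, c=0.3, h=0.1, iters=5):
            model = LogisticRegression().fit(X, y)         # initial unweighted fit
            for _ in range(iters):
                p = model.predict_proba(X)[:, 1]
                w = np.exp(-0.5 * ((p - c) / h) ** 2)      # Gaussian kernel at threshold c
                model = LogisticRegression().fit(X, y, sample_weight=w)
            return model

        rng = np.random.default_rng(8)
        X = rng.standard_normal((400, 3))
        y = (rng.random(400) < 1 / (1 + np.exp(-(X @ [1.0, -0.5, 0.2])))).astype(int)
        model = locally_weighted_logistic(X, y)
        print((model.predict_proba(X)[:, 1] >= 0.3).mean())  # fraction classified high-risk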

  3. A Robust Response of Precipitation to Global Warming from CMIP5 Models

    NASA Technical Reports Server (NTRS)

    Lau, K. -M.; Wu, H. -T.; Kim, K. -M.

    2012-01-01

    How precipitation responds to global warming is a major concern to society and a challenge to climate change research. Based on analyses of rainfall probability distribution functions of 14 state-of-the-art climate models, we find a robust, canonical global rainfall response to a triple CO2 warming scenario, featuring 100-250% more heavy rain, 5-10% less moderate rain, and 10-15% more very light or no-rain events. Regionally, a majority of the models project a consistent response with more heavy rain events over climatologically wet regions of the deep tropics, and more dry events over subtropical and tropical land areas. Results suggest that increased CO2 emissions induce basic structural changes in global rain systems, increasing risks of severe floods and droughts in preferred geographic locations worldwide.

  4. SAR-based change detection using hypothesis testing and Markov random field modelling

    NASA Astrophysics Data System (ADS)

    Cao, W.; Martinis, S.

    2015-04-01

    The objective of this study is to automatically detect changed areas caused by natural disasters from bi-temporal co-registered and calibrated TerraSAR-X data. The technique in this paper consists of two steps: Firstly, an automatic coarse detection step is applied based on a statistical hypothesis test for initializing the classification. The original analytical formula as proposed in the constant false alarm rate (CFAR) edge detector is reviewed and rewritten in a compact form of the incomplete beta function, which is a built-in routine in commercial scientific software such as MATLAB and IDL. Secondly, a post-classification step is introduced to optimize the noisy classification result in the previous step. Generally, an optimization problem can be formulated as a Markov random field (MRF) on which the quality of a classification is measured by an energy function. The optimal classification based on the MRF corresponds to the lowest energy value. Previous studies provide methods for the optimization problem using MRFs, such as the iterated conditional modes (ICM) algorithm. Recently, a novel algorithm was presented based on graph-cut theory. This method transforms an MRF to an equivalent graph and solves the optimization problem by a max-flow/min-cut algorithm on the graph. In this study this graph-cut algorithm is applied iteratively to improve the coarse classification. At each iteration the parameters of the energy function for the current classification are set by the logarithmic probability density function (PDF). The relevant parameters are estimated by the method of logarithmic cumulants (MoLC). Experiments are performed using two flood events in Germany and Australia in 2011 and a forest fire on La Palma in 2009 using pre- and post-event TerraSAR-X data. The results show convincing coarse classifications and considerable improvement by the graph-cut post-classification step.
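
    SciPy exposes the regularized incomplete beta function directly as scipy.special.betainc; how the CFAR statistic's parameters map onto (a, b, x) follows the paper and is only gestured at here with placeholder values:

        from scipy.special import betainc

        # I_x(a, b): the regularized incomplete beta function, here standing in for
        # the rewritten CFAR test statistic. The parameter mapping to numbers of
        # looks and the intensity ratio is the paper's; these values are placeholders.
        a, b, x = 16.0, 16.0, 0.42
        p = betainc(a, b, x)       # a CDF value in [0, 1]
        print(p < 0.01)            # flag the pixel as "changed" at a 1% CFAR level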

  5. A statistically harmonized alignment-classification in image space enables accurate and robust alignment of noisy images in single particle analysis.

    PubMed

    Kawata, Masaaki; Sato, Chikara

    2007-06-01

    In determining the three-dimensional (3D) structure of macromolecular assemblies in single particle analysis, a large representative dataset of two-dimensional (2D) average images from a huge number of raw images is key for high resolution. Because alignments prior to averaging are computationally intensive, currently available multireference alignment (MRA) software does not survey every possible alignment. This leads to misaligned images, creating blurred averages and reducing the quality of the final 3D reconstruction. We present a new method, in which multireference alignment is harmonized with classification (multireference multiple alignment: MRMA). This method enables a statistical comparison of multiple alignment peaks, reflecting the similarities between each raw image and a set of reference images. Among the selected alignment candidates for each raw image, misaligned images are statistically excluded, based on the principle that aligned raw images of similar projections have a dense distribution around the correctly aligned coordinates in image space. This newly developed method was examined for accuracy and speed using model image sets with various signal-to-noise ratios, and with electron microscope images of the Transient Receptor Potential C3 and the sodium channel. In every data set, the newly developed method outperformed conventional methods in robustness against noise and in speed, creating 2D average images of higher quality. This statistically harmonized alignment-classification combination should greatly improve the quality of single particle analysis.

  6. Sediment classification using neural networks: An example from the site-U1344A of IODP Expedition 323 in the Bering Sea

    NASA Astrophysics Data System (ADS)

    Ojha, Maheswar; Maiti, Saumen

    2016-03-01

    A novel approach based on the concept of the Bayesian neural network (BNN) has been implemented for classifying sediment boundaries using downhole log data obtained during Integrated Ocean Drilling Program (IODP) Expedition 323 in the Bering Sea slope region. The Bayesian framework in conjunction with the Markov Chain Monte Carlo (MCMC)/hybrid Monte Carlo (HMC) learning paradigm has been applied to constrain the lithology boundaries using density, density porosity, gamma ray, sonic P-wave velocity and electrical resistivity at Hole U1344A. We have demonstrated the effectiveness of our supervised classification methodology by comparing our findings with a conventional neural network and a Bayesian neural network optimized by the scaled conjugate gradient method (SCG), and tested the robustness of the algorithm in the presence of red noise in the data. The Bayesian results based on the HMC algorithm (BNN.HMC) resolve detailed finer structures at certain depths in addition to the main lithologies, such as silty clay, diatom clayey silt and sandy silt. Our method also recovers lithology information from a no-core-recovery zone at depths between 615 and 655 m (wireline-log matched depth below sea floor). Our analyses demonstrate that the BNN-based approach provides a robust means for the classification of complex lithology successions at Hole U1344A, which could be very useful for other studies and for understanding oceanic crustal inhomogeneity and structural discontinuities.

  7. Risk Assessment Stability: A Revalidation Study of the Arizona Risk/Needs Assessment Instrument

    ERIC Educational Resources Information Center

    Schwalbe, Craig S.

    2009-01-01

    The actuarial method is the gold standard for risk assessment in child welfare, juvenile justice, and criminal justice. It produces risk classifications that are highly predictive and that may be robust to sampling error. This article reports a revalidation study of the Arizona Risk/Needs Assessment instrument, an actuarial instrument for juvenile…

  8. Iterative Ellipsoidal Trimming.

    DTIC Science & Technology

    1980-02-11

    to above. Iterative ellipsoidal trimming has been investigated before by other statisticians, most notably by Gnanadesikan and his coworkers. ...J., Gnanadesikan, R., and Kettenring, J. R. (1975). "Robust estimation and outlier detection with correlation coefficients." Biometrika, 62, 531-45. Duda, Richard, and Hart, Peter (1973). Pattern Classification and Scene Analysis. Wiley, New York. Gnanadesikan, R. (1977). Methods for...

  9. Reframing the Principle of Specialisation in Legitimation Code Theory: A Blended Learning Perspective

    ERIC Educational Resources Information Center

    Owusu-Agyeman, Yaw; Larbi-Siaw, Otu

    2017-01-01

    This study argues that in developing a robust framework for students in a blended learning environment, Structural Alignment (SA) becomes the third principle of specialisation in addition to Epistemic Relation (ER) and Social Relation (SR). We provide an extended code: (ER+/-, SR+/-, SA+/-) that presents strong classification and framing to the…

  10. Robust Segmentation of Planar and Linear Features of Terrestrial Laser Scanner Point Clouds Acquired from Construction Sites.

    PubMed

    Maalek, Reza; Lichti, Derek D; Ruwanpura, Janaka Y

    2018-03-08

    Automated segmentation of planar and linear features of point clouds acquired from construction sites is essential for the automatic extraction of building construction elements such as columns, beams and slabs. However, many planar and linear segmentation methods use scene-dependent similarity thresholds that may not provide generalizable solutions for all environments. In addition, outliers exist in construction site point clouds due to data artefacts caused by moving objects, occlusions and dust. To address these concerns, a novel method for robust classification and segmentation of planar and linear features is proposed. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a new robust clustering method, the robust complete linkage method. A robust method is also proposed to extract the points of flat-slab floors and/or ceilings independent of the aforementioned stages to improve computational efficiency. The applicability of the proposed method is evaluated in eight datasets acquired from a complex laboratory environment and two construction sites at the University of Calgary. The precision, recall, and accuracy of the segmentation at both construction sites were 96.8%, 97.7% and 95%, respectively. These results demonstrate the suitability of the proposed method for robust segmentation of planar and linear features of contaminated datasets, such as those collected from construction sites.

  11. Robust Segmentation of Planar and Linear Features of Terrestrial Laser Scanner Point Clouds Acquired from Construction Sites

    PubMed Central

    Maalek, Reza; Lichti, Derek D; Ruwanpura, Janaka Y

    2018-01-01

    Automated segmentation of planar and linear features of point clouds acquired from construction sites is essential for the automatic extraction of building construction elements such as columns, beams and slabs. However, many planar and linear segmentation methods use scene-dependent similarity thresholds that may not provide generalizable solutions for all environments. In addition, outliers exist in construction site point clouds due to data artefacts caused by moving objects, occlusions and dust. To address these concerns, a novel method for robust classification and segmentation of planar and linear features is proposed. First, coplanar and collinear points are classified through a robust principal components analysis procedure. The classified points are then grouped using a new robust clustering method, the robust complete linkage method. A robust method is also proposed to extract the points of flat-slab floors and/or ceilings independent of the aforementioned stages to improve computational efficiency. The applicability of the proposed method is evaluated in eight datasets acquired from a complex laboratory environment and two construction sites at the University of Calgary. The precision, recall, and accuracy of the segmentation at both construction sites were 96.8%, 97.7% and 95%, respectively. These results demonstrate the suitability of the proposed method for robust segmentation of planar and linear features of contaminated datasets, such as those collected from construction sites. PMID:29518062
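
    A sketch of the geometric classification step, using classical (non-robust) local PCA in place of the paper's robust PCA procedure; the neighborhood size and eigenvalue thresholds are illustrative assumptions:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def classify_local_geometry(points, k=30, planar_tol=0.01, linear_tol=0.01):
    """Label each 3-D point as planar, linear or other from the
    eigenvalue pattern of its k-neighborhood covariance (sketch)."""
    nbrs = NearestNeighbors(n_neighbors=k).fit(points)
    _, idx = nbrs.kneighbors(points)
    labels = np.empty(len(points), dtype="<U6")
    for i, neigh in enumerate(idx):
        local = points[neigh] - points[neigh].mean(axis=0)
        # eigenvalues of the local covariance, ascending order
        w = np.linalg.eigvalsh(local.T @ local / k)
        if w[0] / w.sum() < planar_tol:      # one tiny eigenvalue -> plane
            labels[i] = "planar"
        elif w[1] / w.sum() < linear_tol:    # two tiny eigenvalues -> line
            labels[i] = "linear"
        else:
            labels[i] = "other"
    return labels
```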

  12. The Consensus Molecular Subtypes of Colorectal Cancer

    PubMed Central

    Guinney, Justin; Dienstmann, Rodrigo; Wang, Xin; de Reyniès, Aurélien; Schlicker, Andreas; Soneson, Charlotte; Marisa, Laetitia; Roepman, Paul; Nyamundanda, Gift; Angelino, Paolo; Bot, Brian M.; Morris, Jeffrey S.; Simon, Iris M.; Gerster, Sarah; Fessler, Evelyn; de Sousa e Melo, Felipe; Missiaglia, Edoardo; Ramay, Hena; Barras, David; Homicsko, Krisztian; Maru, Dipen; Manyam, Ganiraju C.; Broom, Bradley; Boige, Valerie; Perez-Villamil, Beatriz; Laderas, Ted; Salazar, Ramon; Gray, Joe W.; Hanahan, Douglas; Tabernero, Josep; Bernards, Rene; Friend, Stephen H.; Laurent-Puig, Pierre; Medema, Jan Paul; Sadanandam, Anguraj; Wessels, Lodewyk; Delorenzi, Mauro; Kopetz, Scott; Vermeulen, Louis; Tejpar, Sabine

    2015-01-01

    Colorectal cancer (CRC) is a frequently lethal disease with heterogeneous outcomes and drug responses. To resolve inconsistencies among the reported gene expression–based CRC classifications and facilitate clinical translation, we formed an international consortium dedicated to large-scale data sharing and analytics across expert groups. We show marked interconnectivity between six independent classification systems coalescing into four consensus molecular subtypes (CMS) with distinguishing features: CMS1 (MSI Immune, 14%), hypermutated, microsatellite unstable, strong immune activation; CMS2 (Canonical, 37%), epithelial, chromosomally unstable, marked WNT and MYC signaling activation; CMS3 (Metabolic, 13%), epithelial, evident metabolic dysregulation; and CMS4 (Mesenchymal, 23%), prominent transforming growth factor β activation, stromal invasion, and angiogenesis. Samples with mixed features (13%) possibly represent a transition phenotype or intra-tumoral heterogeneity. We consider the CMS groups the most robust classification system currently available for CRC – with clear biological interpretability – and the basis for future clinical stratification and subtype–based targeted interventions. PMID:26457759

  13. Detection of eardrum abnormalities using ensemble deep learning approaches

    NASA Astrophysics Data System (ADS)

    Senaras, Caglar; Moberly, Aaron C.; Teknos, Theodoros; Essig, Garth; Elmaraghy, Charles; Taj-Schaal, Nazhat; Yua, Lianbo; Gurcan, Metin N.

    2018-02-01

    In this study, we proposed an approach to report the condition of the eardrum as "normal" or "abnormal" by ensembling two different deep learning architectures. In the first network (Network 1), we applied transfer learning to the Inception V3 network by using 409 labeled samples. As a second network (Network 2), we designed a convolutional neural network to take advantage of auto-encoders by using an additional 673 unlabeled eardrum samples. The individual classification accuracies of Network 1 and Network 2 were calculated as 84.4% (+/- 12.1%) and 82.6% (+/- 11.3%), respectively. Only 32% of the errors of the two networks were the same, making it possible to combine the two approaches to achieve better classification accuracy. The proposed ensemble method allows us to achieve robust classification because it has high accuracy (84.4%) with the lowest standard deviation (+/- 10.3%).
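
    A minimal sketch of the ensembling step, assuming each network outputs a per-image abnormality probability; the weights and decision threshold are illustrative, not values from the paper:

```python
import numpy as np

def ensemble_predict(p1, p2, w1=0.5, w2=0.5, threshold=0.5):
    """Combine abnormality probabilities from two independently trained
    networks by weighted averaging (sketch; 1 = abnormal, 0 = normal)."""
    p = w1 * np.asarray(p1) + w2 * np.asarray(p2)
    return (p >= threshold).astype(int)
```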

  14. A Sieving ANN for Emotion-Based Movie Clip Classification

    NASA Astrophysics Data System (ADS)

    Watanapa, Saowaluk C.; Thipakorn, Bundit; Charoenkitkarn, Nipon

    Effective classification and analysis of semantic content are very important for the content-based indexing and retrieval of video databases. Our research attempts to classify movie clips into three groups of commonly elicited emotions, namely excitement, joy and sadness, based on a set of abstract-level semantic features extracted from the film sequence. In particular, these features consist of six visual and audio measures grounded in artistic film theory. A unique sieving-structured neural network is proposed as the classifying model due to its robustness. The performance of the proposed model is tested with 101 movie clips excerpted from 24 award-winning and well-known Hollywood feature films. The experimental result of a 97.8% correct classification rate, measured against collected human judgements, indicates the great potential of using abstract-level semantic features as an engineered tool for video-content retrieval/indexing.

  15. a Novel Deep Convolutional Neural Network for Spectral-Spatial Classification of Hyperspectral Data

    NASA Astrophysics Data System (ADS)

    Li, N.; Wang, C.; Zhao, H.; Gong, X.; Wang, D.

    2018-04-01

    Spatial and spectral information are obtained simultaneously by hyperspectral remote sensing, and joint extraction of this information is one of the most important approaches for hyperspectral image classification. In this paper, a novel deep convolutional neural network (CNN) is proposed, which effectively extracts the spectral-spatial information of hyperspectral images. The proposed model not only learns sufficient knowledge from a limited number of samples, but also has powerful generalization ability. The proposed framework, based on three-dimensional convolution, can extract spectral-spatial features of labeled samples effectively. Although CNNs are robust to distortion, they cannot extract features at different scales through a traditional pooling layer that has only one pooling window size. Hence, spatial pyramid pooling (SPP) is introduced into the three-dimensional local convolutional filters for hyperspectral classification. Experimental results with a widely used hyperspectral remote sensing dataset show that the proposed model provides competitive performance.
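
    A minimal PyTorch sketch of the idea, combining 3-D convolutions with spatial pyramid pooling; the layer sizes, pyramid levels and spectral-axis reduction are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPP3DNet(nn.Module):
    """3-D convolutions over (bands, height, width) patches followed by
    spatial pyramid pooling, so patches map to a fixed-length feature.
    Expects patches with at least 13 bands (two 7-wide spectral convs)."""
    def __init__(self, n_classes, levels=(1, 2, 4)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3)), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(7, 3, 3)), nn.ReLU(),
        )
        self.levels = levels
        self.classifier = nn.Linear(16 * sum(l * l for l in levels), n_classes)

    def forward(self, x):            # x: (B, 1, bands, H, W)
        x = self.features(x)
        x = x.mean(dim=2)            # collapse the spectral axis -> (B, C, H, W)
        pooled = [F.adaptive_max_pool2d(x, l).flatten(1) for l in self.levels]
        return self.classifier(torch.cat(pooled, dim=1))
```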

  16. Automating the expert consensus paradigm for robust lung tissue classification

    NASA Astrophysics Data System (ADS)

    Rajagopalan, Srinivasan; Karwoski, Ronald A.; Raghunath, Sushravya; Bartholmai, Brian J.; Robb, Richard A.

    2012-03-01

    Clinicians confirm the efficacy of dynamic multidisciplinary interactions in diagnosing lung disease/wellness from CT scans. However, routine clinical practice cannot readily accommodate such interactions. Current schemes for automating lung tissue classification are based on a single elusive disease-differentiating metric; this undermines their reliability in routine diagnosis. We propose a computational workflow that uses a collection (#: 15) of probability density function (pdf)-based similarity metrics to automatically cluster pattern-specific (#patterns: 5) volumes of interest (#VOI: 976) extracted from the lung CT scans of 14 patients. The resultant clusters are refined for intra-partition compactness and subsequently aggregated into a super cluster using a cluster ensemble technique. The super clusters were validated against the consensus agreement of four clinical experts. The aggregations correlated strongly with expert consensus. By effectively mimicking the expertise of physicians, the proposed workflow could make automation of lung tissue classification a clinical reality.

  17. A new blood vessel extraction technique using edge enhancement and object classification.

    PubMed

    Badsha, Shahriar; Reza, Ahmed Wasif; Tan, Kim Geok; Dimyati, Kaharudin

    2013-12-01

    Diabetic retinopathy (DR) is increasing progressively, pushing the demand for automatic extraction and classification of disease severity. Blood vessel extraction from the fundus image is a vital and challenging task. Therefore, this paper presents a new, computationally simple, and automatic method to extract the retinal blood vessels. The proposed method comprises several basic image processing techniques, namely edge enhancement by a standard template, noise removal, thresholding, morphological operations, and object classification. The proposed method has been tested on a set of retinal images. The retinal images were collected from the DRIVE database, and robust performance analysis was employed to evaluate accuracy. The results obtained from this study reveal that the proposed method offers an average accuracy of about 97%, sensitivity of 99%, specificity of 86%, and predictive value of 98%, which is superior to various well-known techniques.
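
    A sketch of such a classical pipeline using scikit-image, with the Frangi vesselness filter standing in for the paper's template-based edge enhancement; all sizes and thresholds are illustrative:

```python
from skimage import filters, morphology

def extract_vessels(green_channel):
    """Classical vessel extraction: tubular-structure enhancement,
    global thresholding, then morphological cleanup (sketch)."""
    enhanced = filters.frangi(green_channel)              # edge/ridge enhancement
    binary = enhanced > filters.threshold_otsu(enhanced)  # thresholding
    cleaned = morphology.remove_small_objects(binary, min_size=64)  # noise removal
    return morphology.binary_closing(cleaned, morphology.disk(1))
```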

  18. New nonlinear features for inspection, robotics, and face recognition

    NASA Astrophysics Data System (ADS)

    Casasent, David P.; Talukder, Ashit

    1999-10-01

    Classification of real-time X-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items (pistachio nuts). This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k nearest neighbor classifier. In this work, the MRDF is applied to standard features (rather than iconic data). The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC (receiver operating characteristic) data. Other applications of these new feature spaces in robotics and face recognition are also noted.

  19. Automated classification of multiphoton microscopy images of ovarian tissue using deep learning.

    PubMed

    Huttunen, Mikko J; Hassan, Abdurahman; McCloskey, Curtis W; Fasih, Sijyl; Upham, Jeremy; Vanderhyden, Barbara C; Boyd, Robert W; Murugkar, Sangeeta

    2018-06-01

    Histopathological image analysis of stained tissue slides is routinely used in tumor detection and classification. However, diagnosis requires a highly trained pathologist and can thus be time-consuming, labor-intensive, and potentially subject to bias. Here, we demonstrate a potential complementary approach for diagnosis. We show that multiphoton microscopy images from unstained, reproductive tissues can be robustly classified using deep learning techniques. We fine-tune four pretrained convolutional neural networks using over 200 murine tissue images based on combined second-harmonic generation and two-photon excitation fluorescence contrast, to classify the tissues either as healthy or associated with high-grade serous carcinoma with over 95% sensitivity and 97% specificity. Our approach shows promise for applications involving automated disease diagnosis. It could also be readily applied to other tissues, diseases, and related classification problems. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
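
    The fine-tuning pattern can be sketched as follows, with ResNet-18 standing in for the pretrained architectures used in the study; freezing the trunk and the two-class head are illustrative choices:

```python
import torch.nn as nn
from torchvision import models

def make_finetune_model(n_classes=2):
    """Adapt one ImageNet-pretrained backbone for binary tissue
    classification; the study fine-tunes four such networks, and this
    shows the pattern for one (sketch)."""
    net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in net.parameters():
        p.requires_grad = False          # freeze the pretrained trunk
    net.fc = nn.Linear(net.fc.in_features, n_classes)  # new trainable head
    return net
```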

  20. iPcc: a novel feature extraction method for accurate disease class discovery and prediction

    PubMed Central

    Ren, Xianwen; Wang, Yong; Zhang, Xiang-Sun; Jin, Qi

    2013-01-01

    Gene expression profiling has gradually become a routine procedure for disease diagnosis and classification. In the past decade, many computational methods have been proposed, resulting in great improvements on various levels, including feature selection and algorithms for classification and clustering. In this study, we present iPcc, a novel method from the feature extraction perspective to further propel gene expression profiling technologies from bench to bedside. We define ‘correlation feature space’ for samples based on the gene expression profiles by iterative employment of Pearson’s correlation coefficient. Numerical experiments on both simulated and real gene expression data sets demonstrate that iPcc can greatly highlight the latent patterns underlying noisy gene expression data and thus greatly improve the robustness and accuracy of the algorithms currently available for disease diagnosis and classification based on gene expression profiles. PMID:23761440
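
    A heavily simplified sketch of the correlation-feature-space idea; the published iPcc algorithm differs in detail (e.g., in how the correlation space is built and appended), and the iteration count here is arbitrary:

```python
import numpy as np

def ipcc_features(X, n_iter=3):
    """Map samples (rows of X) into a 'correlation feature space' by
    repeatedly taking Pearson correlations between sample profiles
    (sketch of the iterative idea only)."""
    F = np.asarray(X, dtype=float)
    for _ in range(n_iter):
        F = np.corrcoef(F)   # samples-by-samples correlation matrix
    return F
```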

  1. Network-Induced Classification Kernels for Gene Expression Profile Analysis

    PubMed Central

    Dror, Gideon; Shamir, Ron

    2012-01-01

    Computational classification of gene expression profiles into distinct disease phenotypes has been highly successful to date. Still, robustness, accuracy, and biological interpretation of the results have been limited, and it was suggested that use of protein interaction information jointly with the expression profiles can improve the results. Here, we study three aspects of this problem. First, we show that interactions are indeed relevant by showing that co-expressed genes tend to be closer in the network of interactions. Second, we show that the improved performance of one extant method utilizing expression and interactions is not really due to the biological information in the network, while in another method this is not the case. Finally, we develop a new kernel method—called NICK—that integrates network and expression data for SVM classification, and demonstrate that overall it achieves better results than extant methods while running two orders of magnitude faster. PMID:22697242
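
    One way such a network-induced kernel can be built is by smoothing the linear kernel with the interaction network's graph Laplacian; this is a sketch of the general idea, and the NICK paper's exact construction may differ:

```python
import numpy as np
from sklearn.svm import SVC

def network_kernel(X, L, beta=1.0):
    """Network-smoothed linear kernel K = X (I + beta*L)^(-1) X^T,
    where L is the graph Laplacian of the gene interaction network
    and rows of X are expression profiles (sketch)."""
    n_genes = L.shape[0]
    M = np.linalg.inv(np.eye(n_genes) + beta * L)
    return X @ M @ X.T

# Usage sketch: clf = SVC(kernel="precomputed").fit(network_kernel(X_train, L), y_train)
```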

  2. Novel classification system of rib fractures observed in infants.

    PubMed

    Love, Jennifer C; Derrick, Sharon M; Wiersema, Jason M; Pinto, Deborrah C; Greeley, Christopher; Donaruma-Kwoh, Marcella; Bista, Bibek

    2013-03-01

    Rib fractures are considered highly suspicious for nonaccidental injury in the pediatric clinical literature; however, a rib fracture classification system has not been developed. As an aid and impetus for rib fracture research, we developed a concise schema for classifying rib fracture types and fracture location that is applicable to infants. The system defined four fracture types (sternal end, buckle, transverse, and oblique) and four regions of the rib (posterior, posterolateral, anterolateral, and anterior). It was applied to all rib fractures observed during 85 consecutive infant autopsies. Rib fractures were found in 24 (28%) of the cases. A total of 158 rib fractures were identified. The proposed schema was adequate to classify 153 (97%) of the observed fractures. The results indicate that the classification system is sufficiently robust to classify rib fractures typically observed in infants and should be used by researchers investigating infant rib fractures. © 2013 American Academy of Forensic Sciences.

  3. Topic Identification and Categorization of Public Information in Community-Based Social Media

    NASA Astrophysics Data System (ADS)

    Kusumawardani, RP; Basri, MH

    2017-01-01

    This paper presents a semi-supervised method for topic identification and classification of short texts in social media, and its application to tweets containing dialogues in a large community of dwellers in a city, written mostly in Indonesian. These dialogues comprise a wealth of information about the city, shared in real time. We found that despite the high irregularity of the language used, and the scarcity of suitable linguistic resources, a meaningful identification of topics could be performed by clustering the tweets using the K-Means algorithm. The resulting clusters are found to be robust enough to be the basis of a classification. On three grouping schemes derived from the clusters, we achieve accuracies of 95.52%, 95.51%, and 96.7% using linear SVMs, reflecting the applicability of this method for topic identification and classification on such data.
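
    The two-stage pipeline described above can be sketched with scikit-learn; the vocabulary size and cluster count are illustrative assumptions:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

def topic_pipeline(texts, n_topics=10):
    """Cluster tweets to discover topics, then train a linear SVM on
    the cluster-derived labels (sketch of the semi-supervised idea)."""
    X = TfidfVectorizer(max_features=5000).fit_transform(texts)
    topics = KMeans(n_clusters=n_topics, n_init=10).fit_predict(X)
    clf = LinearSVC().fit(X, topics)
    return clf, topics
```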

  4. Classification of cancerous cells based on the one-class problem approach

    NASA Astrophysics Data System (ADS)

    Murshed, Nabeel A.; Bortolozzi, Flavio; Sabourin, Robert

    1996-03-01

    One of the most important factors in reducing the effect of cancerous diseases is early diagnosis, which requires a good and robust method. With the advancement of computer technologies and digital image processing, the development of a computer-based system has become feasible. In this paper, we introduce a new approach for the detection of cancerous cells. This approach is based on the one-class problem approach, through which the classification system need only be trained with patterns of cancerous cells. This reduces the burden of the training task by about 50%. Based on this approach, a computer-based classification system is developed using Fuzzy ARTMAP neural networks. Experiments were performed using a set of 542 patterns taken from a sample of breast cancer. Results of the experiment show 98% correct identification of cancerous cells and 95% correct identification of non-cancerous cells.
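
    A sketch of the one-class training idea with a modern stand-in: where the paper trains a Fuzzy ARTMAP network on cancerous patterns only, a one-class SVM plays the same role here (hyperparameters illustrative):

```python
from sklearn.svm import OneClassSVM

def train_one_class_detector(cancerous_patterns, nu=0.05):
    """Train on cancerous examples only; predict() later returns +1 for
    patterns resembling the training class and -1 otherwise (sketch)."""
    return OneClassSVM(nu=nu, gamma="scale").fit(cancerous_patterns)
```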

  5. Classification of rainfall events for weather forecasting purposes in andean region of Colombia

    NASA Astrophysics Data System (ADS)

    Suárez Hincapié, Joan Nathalie; Romo Melo, Liliana; Vélez Upegui, Jorge Julian; Chang, Philippe

    2016-04-01

    This work presents a comparative analysis of the results of applying different methodologies for the identification and classification of rainfall events of different durations in meteorological records of the Colombian Andean region. The study area is the urban and rural area of Manizales, which is covered by a hydro-meteorological monitoring network. This network is composed of forty-five (45) strategically located stations, whose automatic weather stations record seven climate variables: air temperature, relative humidity, wind speed and direction, rainfall, solar radiation and barometric pressure. All this information is sent wirelessly every five (5) minutes to a data warehouse located at the Institute of Environmental Studies-IDEA. The rainfall series analyzed here was recorded by the Palogrande hydrometeorological station operated by the National University of Colombia in Manizales (http://froac.manizales.unal.edu.co/bodegaIdea/); with this information, we analyze the behavior of other meteorological variables monitored at surface level that influence the occurrence of such rainfall events. Three methodologies were used to classify rainfall events. The first, following Monjo (2009), calculates the index n of heavy rainfall, through which various types of precipitation are defined according to intensity variability. The second produces a classification in terms of a parameter β introduced by Rice and Holmberg (1973) and adapted by Llasat and Puigcerver (1985, 1997). The last classifies rainfall according to the value of its intensity following Linsley (1977), under which rain is considered light, moderate or strong for rates up to 2.5 mm/h, from 2.5 to 7.6 mm/h, and above this value, respectively. The main contribution of this research is to obtain elements that optimize and improve the spatial resolution of results from mesoscale models such as the Weather Research & Forecasting Model (WRF), used in Colombia for weather forecasting and to produce other tools for current issues such as risk management.
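
    The Linsley (1977) intensity classification cited above reduces to two thresholds, as in this sketch:

```python
def classify_rain_intensity(rate_mm_per_h):
    """Light/moderate/strong classification after Linsley (1977), as
    cited above: up to 2.5 mm/h light, 2.5-7.6 mm/h moderate, and
    above that strong."""
    if rate_mm_per_h <= 2.5:
        return "light"
    if rate_mm_per_h <= 7.6:
        return "moderate"
    return "strong"
```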

  6. Enhancement of the Logistics Battle Command Model: Architecture Upgrades and Attrition Module Development

    DTIC Science & Technology

    2017-01-05

    module. Subject terms: Logistics, attrition, discrete event simulation, Simkit, LBC. …stochastics, and discrete event model programmed in Java building largely on the Simkit library. The primary purpose of the LBC model is to support… equations makes them incompatible with the discrete event construct of LBC. Bullard further advances this methodology by developing a stochastic…

  7. Applying a Hidden Markov Model-Based Event Detection and Classification Algorithm to Apollo Lunar Seismic Data

    NASA Astrophysics Data System (ADS)

    Knapmeyer-Endrun, B.; Hammer, C.

    2014-12-01

    The seismometers that the Apollo astronauts deployed on the Moon provide the only recordings of seismic events from any extra-terrestrial body so far. These lunar events are significantly different from ones recorded on Earth, in terms of both signal shape and source processes. Thus they are a valuable test case for any experiment in planetary seismology. In this study, we analyze Apollo 16 data with a single-station event detection and classification algorithm in view of NASA's upcoming InSight mission to Mars. InSight, scheduled for launch in early 2016, has the goal to investigate Mars' internal structure by deploying a seismometer on its surface. As the mission does not feature any orbiter, continuous data will be relayed to Earth at a reduced rate. Full range data will only be available by requesting specific time windows within a few days after the receipt of the original transmission. We apply a recently introduced algorithm based on hidden Markov models that requires only a single example waveform of each event class for training appropriate models. After constructing the prototypes we detect and classify impacts and deep and shallow moonquakes. Initial results for 1972 (the year of station installation, with 8 months of data) indicate a high detection rate of over 95% for impacts, of which more than 80% are classified correctly. Deep moonquakes, which occur in large numbers but often show only very weak signals, are detected with less certainty (~70%). As only one weak shallow moonquake is covered, results for this event class are not statistically significant. Daily adjustments of the background noise model help to reduce false alarms, which are mainly erroneous deep moonquake detections, by about 25%. The algorithm enables us to classify events that were previously listed in the catalog without classification, and, through the combined use of long period and short period data, to identify some unlisted local impacts as well as at least two as yet unreported deep moonquakes.
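
    The single-example training idea can be sketched with the hmmlearn library as a stand-in (the study's feature extraction and model details differ; the state count and iteration budget here are illustrative):

```python
from hmmlearn.hmm import GaussianHMM

def train_prototype_models(examples, n_states=4):
    """One HMM per event class, each trained on a single example
    feature sequence of shape (n_frames, n_features) -- the
    single-waveform training idea described above (sketch)."""
    models = {}
    for label, seq in examples.items():
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=25)
        m.fit(seq)
        models[label] = m
    return models

def classify(seq, models):
    # pick the class whose model assigns the highest log-likelihood
    return max(models, key=lambda k: models[k].score(seq))
```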

  8. Toward Automated Cochlear Implant Fitting Procedures Based on Event-Related Potentials.

    PubMed

    Finke, Mareike; Billinger, Martin; Büchner, Andreas

    Cochlear implants (CIs) restore hearing to the profoundly deaf by direct electrical stimulation of the auditory nerve. To provide an optimal electrical stimulation pattern, the CI must be individually fitted to each CI user. To date, CI fitting is primarily based on subjective feedback from the user. However, not all CI users are able to provide such feedback, for example, small children. This study explores the possibility of using the electroencephalogram (EEG) to objectively determine if CI users are able to hear differences in tones presented to them, which has potential applications in CI fitting or closed-loop systems. Deviant and standard stimuli were presented to 12 CI users in an active auditory oddball paradigm. The EEG was recorded in two sessions, and classification of the EEG data was performed with shrinkage linear discriminant analysis. Also, the impact of CI artifact removal on classification performance and the possibility of reusing a trained classifier in future sessions were evaluated. Overall, classification performance was above chance level for all participants, although performance varied considerably between participants. Also, artifacts were successfully removed from the EEG without impairing classification performance. Finally, reuse of the classifier causes only a small loss in classification performance. Our data provide the first evidence that EEG can be automatically classified on a single-trial basis in CI users. Despite the slightly poorer classification performance over sessions, classifier and CI artifact correction appear stable over successive sessions. Thus, classifier and artifact correction weights can be reused without repeating the set-up procedure in every session, which makes the technique easier to apply. With our present data, we can show successful classification of event-related cortical potential patterns in CI users. In the future, this has the potential to objectify and automate parts of CI fitting procedures.
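
    Shrinkage LDA of the kind named above is available off the shelf; a minimal sketch, assuming flattened channel-by-time epoch features and that sklearn's Ledoit-Wolf automatic shrinkage matches the paper's variant:

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def make_shrinkage_lda():
    """Shrinkage-regularized LDA for single-trial EEG classification;
    'lsqr' with shrinkage='auto' uses Ledoit-Wolf covariance shrinkage."""
    return LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
```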

  9. Membrane Resonance Enables Stable and Robust Gamma Oscillations

    PubMed Central

    Moca, Vasile V.; Nikolić, Danko; Singer, Wolf; Mureşan, Raul C.

    2014-01-01

    Neuronal mechanisms underlying beta/gamma oscillations (20–80 Hz) are not completely understood. Here, we show that in vivo beta/gamma oscillations in the cat visual cortex sometimes exhibit remarkably stable frequency even when inputs fluctuate dramatically. Enhanced frequency stability is associated with stronger oscillations measured in individual units and larger power in the local field potential. Simulations of neuronal circuitry demonstrate that membrane properties of inhibitory interneurons strongly determine the characteristics of emergent oscillations. Exploration of networks containing either integrator or resonator inhibitory interneurons revealed that: (i) Resonance, as opposed to integration, promotes robust oscillations with large power and stable frequency via a mechanism called RING (Resonance INduced Gamma); resonance favors synchronization by reducing phase delays between interneurons and imposes bounds on oscillation cycle duration; (ii) Stability of frequency and robustness of the oscillation also depend on the relative timing of excitatory and inhibitory volleys within the oscillation cycle; (iii) RING can reproduce characteristics of both Pyramidal INterneuron Gamma (PING) and INterneuron Gamma (ING), transcending such classifications; (iv) In RING, robust gamma oscillations are promoted by slow but are impaired by fast inputs. Results suggest that interneuronal membrane resonance can be an important ingredient for generation of robust gamma oscillations having stable frequency. PMID:23042733

  10. Multiscale Region-Level VHR Image Change Detection via Sparse Change Descriptor and Robust Discriminative Dictionary Learning

    PubMed Central

    Xu, Yuan; Ding, Kun; Huo, Chunlei; Zhong, Zisha; Li, Haichang; Pan, Chunhong

    2015-01-01

    Very high resolution (VHR) image change detection is challenging due to the low discriminative ability of change feature and the difficulty of change decision in utilizing the multilevel contextual information. Most change feature extraction techniques put emphasis on the change degree description (i.e., in what degree the changes have happened), while they ignore the change pattern description (i.e., how the changes changed), which is of equal importance in characterizing the change signatures. Moreover, the simultaneous consideration of the classification robust to the registration noise and the multiscale region-consistent fusion is often neglected in change decision. To overcome such drawbacks, in this paper, a novel VHR image change detection method is proposed based on sparse change descriptor and robust discriminative dictionary learning. Sparse change descriptor combines the change degree component and the change pattern component, which are encoded by the sparse representation error and the morphological profile feature, respectively. Robust change decision is conducted by multiscale region-consistent fusion, which is implemented by the superpixel-level cosparse representation with robust discriminative dictionary and the conditional random field model. Experimental results confirm the effectiveness of the proposed change detection technique. PMID:25918748

  11. EEG Recording and Online Signal Processing on Android: A Multiapp Framework for Brain-Computer Interfaces on Smartphone

    PubMed Central

    Debener, Stefan; Emkes, Reiner; Volkening, Nils; Fudickar, Sebastian; Bleichner, Martin G.

    2017-01-01

    Objective Our aim was the development and validation of a modular signal processing and classification application enabling online electroencephalography (EEG) signal processing on off-the-shelf mobile Android devices. The software application SCALA (Signal ProCessing and CLassification on Android) supports a standardized communication interface to exchange information with external software and hardware. Approach In order to implement a closed-loop brain-computer interface (BCI) on the smartphone, we used a multiapp framework, which integrates applications for stimulus presentation, data acquisition, data processing, classification, and delivery of feedback to the user. Main Results We have implemented the open source signal processing application SCALA. We present timing test results supporting sufficient temporal precision of audio events. We also validate SCALA with a well-established auditory selective attention paradigm and report above chance level classification results for all participants. Regarding the 24-channel EEG signal quality, evaluation results confirm typical sound onset auditory evoked potentials as well as cognitive event-related potentials that differentiate between correct and incorrect task performance feedback. Significance We present a fully smartphone-operated, modular closed-loop BCI system that can be combined with different EEG amplifiers and can easily implement other paradigms. PMID:29349070

  12. Multiclass classification of obstructive sleep apnea/hypopnea based on a convolutional neural network from a single-lead electrocardiogram.

    PubMed

    Urtnasan, Erdenebayar; Park, Jong-Uk; Lee, Kyoung-Joung

    2018-05-24

    In this paper, we propose a convolutional neural network (CNN)-based deep learning architecture for multiclass classification of obstructive sleep apnea and hypopnea (OSAH) using single-lead electrocardiogram (ECG) recordings. OSAH is the most common sleep-related breathing disorder. Many subjects who suffer from OSAH remain undiagnosed; thus, early detection of OSAH is important. In this study, automatic classification of three classes-normal, hypopnea, and apnea-based on a CNN is performed. An optimal six-layer CNN model is trained on a training dataset (45,096 events) and evaluated on a test dataset (11,274 events). The training set (69 subjects) and test set (17 subjects) were collected from 86 subjects, with recordings of approximately 6 h segmented into 10 s durations. The proposed CNN model reaches a mean F1-score of 93.0 for the training dataset and 87.0 for the test dataset. Thus, the proposed deep learning architecture achieves high performance for multiclass classification of OSAH using single-lead ECG recordings. The proposed method can be employed in the screening of patients suspected of having OSAH. © 2018 Institute of Physics and Engineering in Medicine.

  13. EEG Recording and Online Signal Processing on Android: A Multiapp Framework for Brain-Computer Interfaces on Smartphone.

    PubMed

    Blum, Sarah; Debener, Stefan; Emkes, Reiner; Volkening, Nils; Fudickar, Sebastian; Bleichner, Martin G

    2017-01-01

    Our aim was the development and validation of a modular signal processing and classification application enabling online electroencephalography (EEG) signal processing on off-the-shelf mobile Android devices. The software application SCALA (Signal ProCessing and CLassification on Android) supports a standardized communication interface to exchange information with external software and hardware. In order to implement a closed-loop brain-computer interface (BCI) on the smartphone, we used a multiapp framework, which integrates applications for stimulus presentation, data acquisition, data processing, classification, and delivery of feedback to the user. We have implemented the open source signal processing application SCALA. We present timing test results supporting sufficient temporal precision of audio events. We also validate SCALA with a well-established auditory selective attention paradigm and report above chance level classification results for all participants. Regarding the 24-channel EEG signal quality, evaluation results confirm typical sound onset auditory evoked potentials as well as cognitive event-related potentials that differentiate between correct and incorrect task performance feedback. We present a fully smartphone-operated, modular closed-loop BCI system that can be combined with different EEG amplifiers and can easily implement other paradigms.

  14. Remote sensing based detection of forested wetlands: An evaluation of LiDAR, aerial imagery, and their data fusion

    NASA Astrophysics Data System (ADS)

    Suiter, Ashley Elizabeth

    Multi-spectral imagery provides a robust and low-cost dataset for assessing wetland extent and quality over broad regions and is frequently used for wetland inventories. However, in forested wetlands, hydrology is obscured by the tree canopy, making it difficult to detect with multi-spectral imagery alone. Because of this, classification of forested wetlands often includes greater errors than that of other wetland types. Elevation and terrain derivatives have been shown to be useful for modelling wetland hydrology, but few studies have addressed the use of LiDAR intensity data for detecting hydrology in forested wetlands. Due to the tendency of the LiDAR signal to be attenuated by water, this research proposed the fusion of LiDAR intensity data with LiDAR elevation, terrain data, and aerial imagery for the detection of forested wetland hydrology. We examined the utility of LiDAR intensity data and determined whether the fusion of LiDAR-derived data with multispectral imagery increased the accuracy of forested wetland classification compared with a classification performed with multi-spectral imagery alone. Four classifications were performed: Classification A -- All Imagery, Classification B -- All LiDAR, Classification C -- LiDAR without Intensity, and Classification D -- Fusion of All Data. These classifications were performed using random forest, and each resulted in a 3-foot resolution thematic raster of forested upland and forested wetland locations in Vermilion County, Illinois. The accuracies of these classifications were compared using the Kappa coefficient of agreement. Importance statistics produced within the random forest classifier were evaluated in order to understand the contribution of individual datasets. Classification D, which used the fusion of LiDAR and multi-spectral imagery as input variables, had moderate to strong agreement between reference data and classification results. It was found that Classification B, performed using all the LiDAR data and its derivatives (intensity, elevation, slope, aspect, curvatures, and Topographic Wetness Index), was the most accurate classification, with a Kappa of 78.04%, indicating moderate to strong agreement. However, Classification C, performed with LiDAR derivatives but without intensity data, had less agreement than would be expected by chance, indicating that intensity data contributed significantly to the accuracy of Classification B.
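
    A sketch of the random forest step for the fusion classification, assuming the per-pixel feature stack has already been assembled from the LiDAR derivatives and imagery bands (tree count illustrative):

```python
from sklearn.ensemble import RandomForestClassifier

def classify_wetlands(features, labels, n_trees=500):
    """Fit a random forest to per-pixel rows of stacked predictors
    (intensity, elevation, slope, curvature, TWI, spectral bands);
    feature_importances_ afterwards mirrors the importance statistics
    discussed above (sketch)."""
    rf = RandomForestClassifier(n_estimators=n_trees, oob_score=True)
    return rf.fit(features, labels)
```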

  15. In-vivo determination of chewing patterns using FBG and artificial neural networks

    NASA Astrophysics Data System (ADS)

    Pegorini, Vinicius; Zen Karam, Leandro; Rocha Pitta, Christiano S.; Ribeiro, Richardson; Simioni Assmann, Tangriani; Cardozo da Silva, Jean Carlos; Bertotti, Fábio L.; Kalinowski, Hypolito J.; Cardoso, Rafael

    2015-09-01

    This paper reports on the pattern classification of the chewing process of ruminants. We propose a simplified signal processing scheme for optical fiber Bragg grating (FBG) sensors based on machine learning techniques. The FBG sensors measure the biomechanical forces during jaw movements, and an artificial neural network is responsible for the classification of the associated chewing pattern. In this study, three patterns associated with dietary supplement, hay and ryegrass were considered. Additionally, two other important events for ingestive behavior studies were monitored: rumination and idle periods. Experimental results show that the proposed approach to pattern classification is capable of differentiating the materials involved in the chewing process with a small classification error.

  16. Does Making Something Move Matter? Representations of Goals and Sources in Motion Events with Causal Sources

    ERIC Educational Resources Information Center

    Lakusta, Laura; Muentener, Paul; Petrillo, Lauren; Mullanaphy, Noelle; Muniz, Lauren

    2017-01-01

    Previous studies have shown a robust bias to express the goal path over the source path when describing events ("the bird flew into the pitcher," rather than "… out of the bucket into the pitcher"). Motivated by linguistic theory, this study manipulated the causal structure of events (specifically, making the source cause the…

  17. Whole Blood Gene Expression Profiling Predicts Severe Morbidity and Mortality in Cystic Fibrosis: A 5-Year Follow-Up Study.

    PubMed

    Saavedra, Milene T; Quon, Bradley S; Faino, Anna; Caceres, Silvia M; Poch, Katie R; Sanders, Linda A; Malcolm, Kenneth C; Nichols, David P; Sagel, Scott D; Taylor-Cousar, Jennifer L; Leach, Sonia M; Strand, Matthew; Nick, Jerry A

    2018-05-01

    Cystic fibrosis pulmonary exacerbations accelerate pulmonary decline and increase mortality. Previously, we identified a 10-gene leukocyte panel measured directly from whole blood, which indicates response to exacerbation treatment. We hypothesized that molecular characteristics of exacerbations could also predict future disease severity. We tested whether a 10-gene panel measured from whole blood could identify patient cohorts at increased risk for severe morbidity and mortality, beyond standard clinical measures. Transcript abundance for the 10-gene panel was measured from whole blood at the beginning of exacerbation treatment (n = 57). A hierarchical cluster analysis of subjects based on their gene expression was performed, yielding four molecular clusters. An analysis of cluster membership and outcomes incorporating an independent cohort (n = 21) was completed to evaluate robustness of cluster partitioning of genes to predict severe morbidity and mortality. The four molecular clusters were analyzed for differences in forced expiratory volume in 1 second, C-reactive protein, return to baseline forced expiratory volume in 1 second after treatment, time to next exacerbation, and time to morbidity or mortality events (defined as lung transplant referral, lung transplant, intensive care unit admission for respiratory insufficiency, or death). Clustering based on gene expression discriminated between patient groups with significant differences in forced expiratory volume in 1 second, admission frequency, and overall morbidity and mortality. At 5 years, all subjects in cluster 1 (very low risk) were alive and well, whereas 90% of subjects in cluster 4 (high risk) had suffered a major event (P = 0.0001). In multivariable analysis, the ability of gene expression to predict clinical outcomes remained significant, despite adjustment for forced expiratory volume in 1 second, sex, and admission frequency. The robustness of gene clustering to categorize patients appropriately in terms of clinical characteristics, and short- and long-term clinical outcomes, remained consistent, even when adding in a secondary population with significantly different clinical outcomes. Whole blood gene expression profiling allows molecular classification of acute pulmonary exacerbations, beyond standard clinical measures, providing a predictive tool for identifying subjects at increased risk for mortality and disease progression.

  18. Comparisons and Selections of Features and Classifiers for Short Text Classification

    NASA Astrophysics Data System (ADS)

    Wang, Ye; Zhou, Zhi; Jin, Shan; Liu, Debin; Lu, Mi

    2017-10-01

    Short text is considerably different from traditional long text documents due to its shortness and conciseness, which somewhat hinders the application of conventional machine learning and data mining algorithms to short text classification. According to traditional artificial intelligence methods, we divide short text classification into three steps, namely preprocessing, feature selection and classifier comparison. In this paper, we have illustrated step-by-step how we approach our goals. Specifically, in feature selection, we compared the performance and robustness of the four methods of one-hot encoding, tf-idf weighting, word2vec and paragraph2vec, and in the classification part, we deliberately chose and compared Naive Bayes, Logistic Regression, Support Vector Machine, K-nearest Neighbor and Decision Tree as our classifiers. Then, we compared and analysed the classifiers horizontally with each other and vertically with feature selections. Regarding the datasets, we crawled more than 400,000 short text files from the Shanghai and Shenzhen Stock Exchanges and manually labeled them into two classes, the big and the small. There are eight labels in the big class, and 59 labels in the small class.
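
    The horizontal comparison of classifiers for one feature choice can be sketched as follows, with tf-idf features and three of the five classifiers named above (cross-validation setup illustrative):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

def compare_classifiers(texts, labels):
    """Score several classifiers on the same tf-idf features, mirroring
    the horizontal comparison described above (sketch)."""
    X = TfidfVectorizer().fit_transform(texts)
    for name, clf in [("NaiveBayes", MultinomialNB()),
                      ("LogReg", LogisticRegression(max_iter=1000)),
                      ("LinearSVM", LinearSVC())]:
        scores = cross_val_score(clf, X, labels, cv=5)
        print(f"{name}: mean accuracy {scores.mean():.3f}")
```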

  19. CW-SSIM kernel based random forest for image classification

    NASA Astrophysics Data System (ADS)

    Fan, Guangzhe; Wang, Zhou; Wang, Jiheng

    2010-07-01

    Complex wavelet structural similarity (CW-SSIM) index has been proposed as a powerful image similarity metric that is robust to translation, scaling and rotation of images, but how to employ it in image classification applications has not been deeply investigated. In this paper, we incorporate CW-SSIM as a kernel function into a random forest learning algorithm. This leads to a novel image classification approach that does not require a feature extraction or dimension reduction stage at the front end. We use hand-written digit recognition as an example to demonstrate our algorithm. We compare the performance of the proposed approach with random forest learning based on other kernels, including the widely adopted Gaussian and the inner product kernels. Empirical evidence shows that the proposed method is superior in its classification power. We also compared our proposed approach with the direct random forest method without a kernel and the popular kernel-learning method support vector machine. Our test results based on both simulated and real-world data suggest that the proposed approach is superior to traditional methods without the feature selection procedure.

  20. Galaxy Zoo: Infrared and Optical Morphology

    NASA Astrophysics Data System (ADS)

    Carla Shanahan, Jesse; Lintott, Chris; Zoo, Galaxy

    2018-01-01

    We present the detailed, visual morphologies of approximately 60,000 galaxies observed by the UKIRT Infrared Deep Sky Survey and then classified by participants in the Galaxy Zoo project. Our sample is composed entirely of nearby objects with redshifts of z ≤ 0.3, which enables us to robustly analyze their morphological characteristics including smoothness, bulge properties, spiral structure, and evidence of bars or rings. The determination of these features is made via a consensus-based analysis of the Galaxy Zoo project data in which inconsistent and outlying classifications are statistically down-weighted. We then compare these classifications of infrared morphology to the objects' optical classifications in the Galaxy Zoo 2 release (Willett et al. 2013). It is already known that morphology is an effective tool for uncovering a galaxy's dynamical past, and previous studies have shown significant correlations with physical characteristics such as stellar mass distribution and star formation history. We show that the majority of the sample has agreement or expected differences between the optical and infrared classifications, but we also present a preliminary analysis of a subsample of objects with striking discrepancies.

  1. Detecting fast and thermal neutrons with a boron loaded liquid scintillator, EJ-339A.

    PubMed

    Pino, F; Stevanato, L; Cester, D; Nebbia, G; Sajo-Bohus, L; Viesti, G

    2014-09-01

    A commercial boron-loaded liquid scintillator, EJ-339A, was studied, using a (252)Cf source with/without a polyethylene moderator, to examine the possibility of discriminating slow-neutron induced events in (10)B from fast-neutron events resulting from proton recoils, and gamma-ray events. Despite the strong light quenching associated with neutron-induced events in (10)B, correct classification of these events is shown to be possible with the aid of digital signal processing. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Supervised machine learning on a network scale: application to seismic event classification and detection

    NASA Astrophysics Data System (ADS)

    Reynen, Andrew; Audet, Pascal

    2017-09-01

    A new method using a machine learning technique is applied to event classification and detection at seismic networks. This method is applicable to a variety of network sizes and settings. The algorithm makes use of a small catalogue of known observations across the entire network. Two attributes, the polarization and frequency content, are used as input to regression. These attributes are extracted at predicted arrival times for P and S waves using only an approximate velocity model, as attributes are calculated over large time spans. This method of waveform characterization is shown to be able to distinguish between blasts and earthquakes with 99 per cent accuracy using a network of 13 stations located in Southern California. The combination of machine learning with generalized waveform features is further applied to event detection in Oklahoma, United States. The event detection algorithm makes use of a pair of unique seismic phases to locate events, with a precision directly related to the sampling rate of the generalized waveform features. Over a week of data from 30 stations in Oklahoma, United States is used to automatically detect 25 times more events than the catalogue of the local geological survey, with a false detection rate of less than 2 per cent. This method provides a highly confident way of detecting and locating events. Furthermore, a large number of seismic events can be automatically detected with a low false alarm rate, allowing for a larger automatic event catalogue with a high degree of trust.
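
    A sketch of the attribute-based classification stage, assuming the per-event feature rows have already been extracted; the use of logistic regression with standardization here is an illustrative reading of the 'regression' step named above:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_event_classifier(attributes, labels):
    """attributes: one row per catalogued event, built by stacking the
    polarization and frequency-content features extracted at the
    predicted P and S arrival times across the network (sketch)."""
    return make_pipeline(StandardScaler(), LogisticRegression()).fit(attributes, labels)
```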

  3. Introduction to the Apollo collections: Part 2: Lunar breccias

    NASA Technical Reports Server (NTRS)

    Mcgee, P. E.; Simonds, C. H.; Warner, J. L.; Phinney, W. C.

    1979-01-01

    Basic petrographic, chemical and age data for a representative suite of lunar breccias are presented for students and potential lunar sample investigators. Emphasis is on sample description and data presentation. Samples are listed, together with a classification scheme based on matrix texture and mineralogy and the nature and abundance of glass present both in the matrix and as clasts. A calculus of the classification scheme describes the characteristic features of each of the breccia groups. The cratering process, which describes the sequence of events immediately following an impact event, is discussed, especially the thermal and material transport processes affecting the two major components of lunar breccias (clastic debris and fused material).

  4. Threat of death and autobiographical memory: a study of passengers from Flight AT236.

    PubMed

    McKinnon, Margaret C; Palombo, Daniela J; Nazarov, Anthony; Kumar, Namita; Khuu, Wayne; Levine, Brian

    2015-06-01

    We investigated autobiographical memory in a group of passengers onboard a trans-Atlantic flight that nearly ditched at sea. The consistency of traumatic exposure across passengers, some of whom developed post-traumatic stress disorder (PTSD), provided a unique opportunity to assess verified memory for life-threatening trauma. Using the Autobiographical Interview, which separates episodic from non-episodic details, passengers and healthy controls (HCs) recalled three events: the airline disaster (or a highly negative event for HCs), the September 11, 2001 attacks, and a non-emotional event. All passengers showed robust mnemonic enhancement for episodic details of the airline disaster. Although neither richness nor accuracy of traumatic recollection was related to PTSD, production of non-episodic details for traumatic and non-traumatic events was elevated in PTSD passengers. These findings indicate a robust mnemonic enhancement for trauma that is not specific to PTSD. Rather, PTSD is associated with altered cognitive control operations that affect autobiographical memory in general.

  5. Robust point cloud classification based on multi-level semantic relationships for urban scenes

    NASA Astrophysics Data System (ADS)

    Zhu, Qing; Li, Yuan; Hu, Han; Wu, Bo

    2017-07-01

    The semantic classification of point clouds is a fundamental part of three-dimensional urban reconstruction. For datasets with high spatial resolution but substantial noise, a general trend is to exploit more contextual information to compensate for the reduced discriminative power of individual features. However, previous uses of contextual information have been either too restrictive or confined to small regions. In this paper, we propose a point cloud classification method based on multi-level semantic relationships, including point-homogeneity, supervoxel-adjacency and class-knowledge constraints, which is more versatile: it incrementally propagates classification cues from individual points to the object level and formulates them as a graphical model. The point-homogeneity constraint clusters points with similar geometric and radiometric properties into regular-shaped supervoxels that correspond to the vertices in the graphical model. The supervoxel-adjacency constraint contributes to the pairwise interactions by providing explicit adjacent relationships between supervoxels. The class-knowledge constraint operates at the object level based on semantic rules, guaranteeing the classification correctness of supervoxel clusters at that level. International Society for Photogrammetry and Remote Sensing (ISPRS) benchmark tests have shown that the proposed method achieves state-of-the-art performance, with an average per-area completeness and correctness of 93.88% and 95.78%, respectively. The evaluation of the classification of photogrammetric point clouds and digital surface models (DSMs) generated from aerial imagery confirms the method's reliability in several challenging urban scenes.
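
    A minimal stand-in for the supervoxel and adjacency levels is sketched below: points are grouped into compact clusters, an adjacency graph is built between cluster centroids, and per-cluster labels are smoothed over that graph. KMeans clustering and one-pass majority smoothing are simplifying assumptions in place of the paper's supervoxel segmentation and graphical-model inference.

    ```python
    # Hypothetical sketch of supervoxel grouping + neighbour label smoothing.
    import numpy as np
    from sklearn.cluster import KMeans
    from scipy.spatial import cKDTree

    points = np.random.rand(5000, 3)                    # toy x, y, z cloud
    unary_labels = np.random.randint(0, 4, size=5000)   # per-point class guesses

    # Point-homogeneity stand-in: cluster points into "supervoxels".
    k = 50
    sv = KMeans(n_clusters=k, n_init=4, random_state=0).fit(points)
    sv_id = sv.labels_

    # Each supervoxel takes the majority label of its points.
    sv_label = np.array([np.bincount(unary_labels[sv_id == i], minlength=4).argmax()
                         for i in range(k)])

    # Supervoxel-adjacency stand-in: connect centroids to nearest neighbours.
    tree = cKDTree(sv.cluster_centers_)
    _, nbrs = tree.query(sv.cluster_centers_, k=5)      # self + 4 neighbours

    # One smoothing pass: adopt the majority label among neighbours.
    smoothed = np.array([np.bincount(sv_label[nbrs[i]], minlength=4).argmax()
                         for i in range(k)])
    print(smoothed[:10])
    ```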

  6. Large-scale classification of traffic signs under real-world conditions

    NASA Astrophysics Data System (ADS)

    Hazelhoff, Lykele; Creusen, Ivo; van de Wouw, Dennis; de With, Peter H. N.

    2012-02-01

    Traffic sign inventories are important to governmental agencies, as they facilitate evaluation of traffic sign locations and are beneficial for road and sign maintenance. These inventories can be created (semi-)automatically from street-level panoramic images. In these images, object detection is employed to detect the signs in each image, followed by a classification stage to retrieve the specific sign type. Classification of traffic signs is a complicated matter, since sign types are very similar with only minor differences within the sign, a high number of different signs is involved, and multiple distortions occur, including variations in capturing conditions, occlusions, viewpoints and sign deformations. Therefore, we propose a method for robust classification of traffic signs, based on the Bag of Words approach for generic object classification. We extend the approach with a flexible, modular codebook that models the specific features of each sign type independently, in order to emphasize the inter-sign differences instead of the parts common to all sign types. Additionally, this allows us to model and label the false detections that are present. Furthermore, analysis of the classification output identifies unreliable results. This classification system has been extensively tested on three different sign classes, covering 60 different sign types in total. These three datasets contain the sign detection results on street-level panoramic images extracted from a country-wide database. The introduction of the modular codebook shows a significant improvement for all three sets, where the system is able to classify about 98% of the reliable results correctly.
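
    The generic Bag of Words pipeline the authors build on can be sketched compactly: learn a codebook of local descriptors, encode each image as a histogram of codeword occurrences, and train a classifier on the histograms. The random descriptors below stand in for real local features (e.g. dense SIFT), and a single shared codebook is used; the paper's modular variant would repeat the codebook step per sign type.

    ```python
    # Hypothetical Bag-of-Words sketch with a shared codebook.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(1)
    n_images, n_desc, d, n_words = 120, 40, 32, 64

    # Toy local descriptors: n_desc descriptors of dimension d per image.
    descs = rng.standard_normal((n_images, n_desc, d))
    labels = rng.integers(0, 3, size=n_images)          # 3 toy sign types

    codebook = KMeans(n_clusters=n_words, n_init=4, random_state=0)
    codebook.fit(descs.reshape(-1, d))

    def bow_histogram(image_descs):
        words = codebook.predict(image_descs)
        hist = np.bincount(words, minlength=n_words).astype(float)
        return hist / hist.sum()                        # normalized histogram

    X = np.array([bow_histogram(descs[i]) for i in range(n_images)])
    clf = LinearSVC().fit(X, labels)
    print(clf.score(X, labels))
    ```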

  7. Real-Time Subject-Independent Pattern Classification of Overt and Covert Movements from fNIRS Signals

    PubMed Central

    Rana, Mohit; Prasad, Vinod A.; Guan, Cuntai; Birbaumer, Niels; Sitaram, Ranganatha

    2016-01-01

    Recently, studies have reported the use of Near Infrared Spectroscopy (NIRS) for developing Brain–Computer Interface (BCI) by applying online pattern classification of brain states from subject-specific fNIRS signals. The purpose of the present study was to develop and test a real-time method for subject-specific and subject-independent classification of multi-channel fNIRS signals using support-vector machines (SVM), so as to determine its feasibility as an online neurofeedback system. Towards this goal, we used left versus right hand movement execution and movement imagery as study paradigms in a series of experiments. In the first two experiments, activations in the motor cortex during movement execution and movement imagery were used to develop subject-dependent models that obtained high classification accuracies thereby indicating the robustness of our classification method. In the third experiment, a generalized classifier-model was developed from the first two experimental data, which was then applied for subject-independent neurofeedback training. Application of this method in new participants showed mean classification accuracy of 63% for movement imagery tasks and 80% for movement execution tasks. These results, and their corresponding offline analysis reported in this study demonstrate that SVM based real-time subject-independent classification of fNIRS signals is feasible. This method has important applications in the field of hemodynamic BCIs, and neuro-rehabilitation where patients can be trained to learn spatio-temporal patterns of healthy brain activity. PMID:27467528
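
    The subject-independent claim hinges on how the classifier is evaluated: whole subjects must be held out so the SVM is always tested on people it never saw. A minimal sketch of that evaluation pattern follows; the per-channel features, labels and group assignments are synthetic placeholders, not the study's data.

    ```python
    # Hypothetical sketch: leave-subjects-out evaluation of an SVM.
    import numpy as np
    from sklearn.model_selection import GroupKFold, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)
    n_trials, n_channels, n_subjects = 300, 20, 10
    X = rng.standard_normal((n_trials, n_channels))   # e.g. mean HbO per channel
    y = rng.integers(0, 2, size=n_trials)             # left vs right hand
    groups = rng.integers(0, n_subjects, size=n_trials)

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    scores = cross_val_score(clf, X, y, groups=groups, cv=GroupKFold(n_splits=5))
    print(scores.mean())
    ```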

  8. H∞ robust fault-tolerant controller design for an autonomous underwater vehicle's navigation control system

    NASA Astrophysics Data System (ADS)

    Cheng, Xiang-Qin; Qu, Jing-Yuan; Yan, Zhe-Ping; Bian, Xin-Qian

    2010-03-01

    In order to improve the security and reliability for autonomous underwater vehicle (AUV) navigation, an H∞ robust fault-tolerant controller was designed after analyzing variations in state-feedback gain. Operating conditions and the design method were then analyzed so that the control problem could be expressed as a mathematical optimization problem. This permitted the use of linear matrix inequalities (LMI) to solve for the H∞ controller for the system. When considering different actuator failures, these conditions were then also mathematically expressed, allowing the H∞ robust controller to solve for these events and thus be fault-tolerant. Finally, simulation results showed that the H∞ robust fault-tolerant controller could provide precise AUV navigation control with strong robustness.
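
    The LMI step can be illustrated with the classical state-feedback change of variables: a gain K = W Y⁻¹ stabilizes the plant if Y ≻ 0 and A Y + Y Aᵀ + B W + Wᵀ Bᵀ ≺ 0 are feasible. The sketch below solves only this plain stabilization LMI, not the paper's full H∞ bounded-real-lemma formulation, and the AUV model matrices are arbitrary toy values.

    ```python
    # Hypothetical LMI sketch with cvxpy: K = W @ inv(Y).
    import numpy as np
    import cvxpy as cp

    A = np.array([[0.0, 1.0], [2.0, -1.0]])   # toy unstable plant
    B = np.array([[0.0], [1.0]])
    n, m = B.shape[0], B.shape[1]

    Y = cp.Variable((n, n), symmetric=True)
    W = cp.Variable((m, n))
    eps = 1e-6
    constraints = [Y >> eps * np.eye(n),
                   A @ Y + Y @ A.T + B @ W + W.T @ B.T << -eps * np.eye(n)]
    cp.Problem(cp.Minimize(0), constraints).solve(solver=cp.SCS)

    K = W.value @ np.linalg.inv(Y.value)
    print("K =", K, "closed-loop eigs:", np.linalg.eigvals(A + B @ K))
    ```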

  9. Ice Water Classification Using Statistical Distribution Based Conditional Random Fields in RADARSAT-2 Dual Polarization Imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Li, F.; Zhang, S.; Hao, W.; Zhu, T.; Yuan, L.; Xiao, F.

    2017-09-01

    In this paper, a Statistical Distribution based Conditional Random Fields (STA-CRF) algorithm is exploited to improve marginal ice-water classification. Pixel-level ice concentration is presented as a comparison among the CRF-based methods. Furthermore, in order to find an effective statistical distribution model to integrate into the STA-CRF, five statistical distribution models are investigated. The STA-CRF methods are tested on two scenes around Prydz Bay and Adélie Depression, which contain a variety of ice types during the melt season. Experimental results indicate that the proposed method resolves the sea ice edge well in the Marginal Ice Zone (MIZ) and shows a robust distinction between ice and water.

  10. Electro-optical seasonal weather and gender data collection

    NASA Astrophysics Data System (ADS)

    McCoppin, Ryan; Koester, Nathan; Rude, Howard N.; Rizki, Mateen; Tamburino, Louis; Freeman, Andrew; Mendoza-Schrock, Olga

    2013-05-01

    This paper describes the process used to collect the Seasonal Weather And Gender (SWAG) dataset, an electro-optical dataset of human subjects that can be used to develop advanced gender classification algorithms. Several novel features characterize this ongoing effort: (1) the human subjects self-label their gender by performing a specific action during the data collection, and (2) the data collection will span months and even years, resulting in a dataset containing realistic levels and types of clothing corresponding to the various seasons and weather conditions. It is envisioned that this type of data will support the development and evaluation of more robust gender classification systems that are capable of accurate gender recognition under extended operating conditions.

  11. The wisdom of the commons: ensemble tree classifiers for prostate cancer prognosis.

    PubMed

    Koziol, James A; Feng, Anne C; Jia, Zhenyu; Wang, Yipeng; Goodison, Seven; McClelland, Michael; Mercola, Dan

    2009-01-01

    Classification and regression trees have long been used for cancer diagnosis and prognosis. Nevertheless, instability and variable selection bias, as well as overfitting, are well-known problems of tree-based methods. In this article, we investigate whether ensemble tree classifiers can ameliorate these difficulties, using data from two recent studies of radical prostatectomy in prostate cancer. Using time to progression following prostatectomy as the relevant clinical endpoint, we found that ensemble tree classifiers robustly and reproducibly identified three subgroups of patients in the two clinical datasets: non-progressors, early progressors and late progressors. Moreover, the consensus classifications were independent predictors of time to progression compared to known clinical prognostic factors.
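
    The stabilizing effect of ensembling that the abstract relies on can be shown with a single-tree versus forest comparison, as in the sketch below. The synthetic features and three-class progression label (non-/early/late progressor) are placeholders for the prostatectomy cohorts, so on this random data both models hover near chance; on real data the ensemble is typically the more stable of the two.

    ```python
    # Hypothetical sketch: one tree vs an ensemble of randomized trees.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    X = rng.standard_normal((200, 30))
    y = rng.integers(0, 3, size=200)   # non- / early / late progressor

    tree = DecisionTreeClassifier(random_state=0)
    forest = RandomForestClassifier(n_estimators=500, random_state=0)
    print("single tree:", cross_val_score(tree, X, y, cv=5).mean())
    print("ensemble:   ", cross_val_score(forest, X, y, cv=5).mean())
    ```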

  12. Automated sleep stage detection with a classical and a neural learning algorithm--methodological aspects.

    PubMed

    Schwaibold, M; Schöchlin, J; Bolz, A

    2002-01-01

    For classification tasks in biosignal processing, several strategies and algorithms can be used. Knowledge-based systems allow prior knowledge about the decision process to be integrated, both by the developer and by self-learning capabilities. For the classification stages in a sleep stage detection framework, three inference strategies were compared regarding their specific strengths: a classical signal processing approach, artificial neural networks and neuro-fuzzy systems. Methodological aspects were assessed to attain optimum performance and maximum transparency for the user. Due to their effective and robust learning behavior, artificial neural networks could be recommended for pattern recognition, while neuro-fuzzy systems performed best for the processing of contextual information.

  13. Bayesian classification theory

    NASA Technical Reports Server (NTRS)

    Hanson, Robin; Stutz, John; Cheeseman, Peter

    1991-01-01

    The task of inferring a set of classes and class descriptions most likely to explain a given data set can be placed on a firm theoretical foundation using Bayesian statistics. Within this framework and using various mathematical and algorithmic approximations, the AutoClass system searches for the most probable classifications, automatically choosing the number of classes and complexity of class descriptions. A simpler version of AutoClass has been applied to many large real data sets, has discovered new independently-verified phenomena, and has been released as a robust software package. Recent extensions allow attributes to be selectively correlated within particular classes, and allow classes to inherit or share model parameters through a class hierarchy. We summarize the mathematical foundations of AutoClass.
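
    The core question AutoClass answers, "how many classes does the data support?", can be illustrated with a Gaussian mixture whose size is chosen by a Bayesian-flavoured score (BIC, an approximation to the marginal likelihood). This is only an analogy: AutoClass searches a much richer model space, and the three-cluster data below is synthetic.

    ```python
    # Hypothetical sketch: pick the number of classes by model evidence (BIC).
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(4)
    X = np.vstack([rng.normal(0, 1, (100, 2)),
                   rng.normal(5, 1, (100, 2)),
                   rng.normal((0, 6), 1, (100, 2))])   # three true clusters

    scores = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
              for k in range(1, 7)}
    best_k = min(scores, key=scores.get)               # lower BIC is better
    print("chosen number of classes:", best_k)
    ```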

  14. Classification of large-sized hyperspectral imagery using fast machine learning algorithms

    NASA Astrophysics Data System (ADS)

    Xia, Junshi; Yokoya, Naoto; Iwasaki, Akira

    2017-07-01

    We present a framework of fast machine learning algorithms for the classification of large-sized hyperspectral images, from a theoretical to a practical viewpoint. In particular, we assess the performance of random forest (RF), rotation forest (RoF) and extreme learning machine (ELM), as well as ensembles of RF and ELM. These classifiers are applied to two large-sized hyperspectral images and compared to support vector machines. For the quantitative analysis, we focus on comparing these methods when working with high input dimensions and a limited or sufficient training set. Moreover, other important issues, such as computational cost and robustness against noise, are also discussed.
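
    Of the classifiers named, the ELM is simple enough to sketch in full: a fixed random hidden layer followed by a ridge-regularized least-squares readout, which is exactly what makes it fast since only the readout is trained. Dimensions and data below are toy values, not a real hyperspectral scene.

    ```python
    # Hypothetical extreme learning machine (ELM) sketch in plain NumPy.
    import numpy as np

    rng = np.random.default_rng(5)
    n, d, h, c = 1000, 200, 300, 5            # samples, bands, hidden units, classes
    X = rng.standard_normal((n, d))
    y = rng.integers(0, c, size=n)
    Y = np.eye(c)[y]                           # one-hot targets

    W = rng.standard_normal((d, h)) / np.sqrt(d)   # fixed random input weights
    b = rng.standard_normal(h)
    H = np.tanh(X @ W + b)                         # hidden activations

    # Readout by ridge regression: beta = (H^T H + lam*I)^-1 H^T Y
    lam = 1e-2
    beta = np.linalg.solve(H.T @ H + lam * np.eye(h), H.T @ Y)

    pred = np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
    print("train accuracy:", (pred == y).mean())
    ```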

  15. CNN for breaking text-based CAPTCHA with noise

    NASA Astrophysics Data System (ADS)

    Liu, Kaixuan; Zhang, Rong; Qing, Ke

    2017-07-01

    A CAPTCHA ("Completely Automated Public Turing test to tell Computers and Human Apart") system is a program that most humans can pass but current computer programs could hardly pass. As the most common type of CAPTCHAs , text-based CAPTCHA has been widely used in different websites to defense network bots. In order to breaking textbased CAPTCHA, in this paper, two trained CNN models are connected for the segmentation and classification of CAPTCHA images. Then base on these two models, we apply sliding window segmentation and voting classification methods realize an end-to-end CAPTCHA breaking system with high success rate. The experiment results show that our method is robust and effective in breaking text-based CAPTCHA with noise.

  16. Robust Transmission of H.264/AVC Streams Using Adaptive Group Slicing and Unequal Error Protection

    NASA Astrophysics Data System (ADS)

    Thomos, Nikolaos; Argyropoulos, Savvas; Boulgouris, Nikolaos V.; Strintzis, Michael G.

    2006-12-01

    We present a novel scheme for the transmission of H.264/AVC video streams over lossy packet networks. The proposed scheme exploits the error-resilient features of the H.264/AVC codec and employs Reed-Solomon codes to protect the streams effectively. A novel technique for adaptive classification of macroblocks into three slice groups is also proposed. The optimal classification of macroblocks and the optimal channel rate allocation are achieved by iterating two interdependent steps. Dynamic programming techniques are used for the channel rate allocation process in order to reduce complexity. Simulations clearly demonstrate the superiority of the proposed method over other recent algorithms for transmission of H.264/AVC streams.
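
    The unequal-error-protection principle behind the scheme can be sketched with erasure-channel arithmetic: an RS(n, k) code across n packets recovers a slice group if at most n - k packets are lost, so more parity should go to the more important group. The importance weights, loss rate and exhaustive search below are toy assumptions, not the paper's iterative optimization.

    ```python
    # Hypothetical sketch of Reed-Solomon unequal error protection.
    from itertools import product
    from math import comb

    n, p = 20, 0.1                       # packets per group, packet-loss rate
    weights = [0.6, 0.3, 0.1]            # importance of the three slice groups
    total_parity = 12

    def decode_prob(parity):
        """P(at most `parity` of n packets lost) for an RS(n, n - parity) code."""
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(parity + 1))

    # Exhaustive search over parity allocations that spend the whole budget.
    best = max((a for a in product(range(total_parity + 1), repeat=3)
                if sum(a) == total_parity),
               key=lambda a: sum(w * decode_prob(r) for w, r in zip(weights, a)))
    print("parity per slice group:", best)
    ```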

  17. Massive NGS Data Analysis Reveals Hundreds Of Potential Novel Gene Fusions in Human Cell Lines.

    PubMed

    Gioiosa, Silvia; Bolis, Marco; Flati, Tiziano; Massini, Annalisa; Garattini, Enrico; Chillemi, Giovanni; Fratelli, Maddalena; Castrignanò, Tiziana

    2018-06-01

    Gene fusions derive from chromosomal rearrangements, and the resulting chimeric transcripts are often endowed with oncogenic potential. Furthermore, they serve as diagnostic tools for the clinical classification of cancer subgroups with different prognoses and, in some cases, they can provide specific drug targets. So far, many efforts have been made to study gene fusion events occurring in tumor samples. In recent years, the availability of a comprehensive Next Generation Sequencing dataset for all existing human tumor cell lines has provided the opportunity to further investigate these data in order to identify novel and still uncharacterized gene fusion events. In our work, we extensively reanalyzed 935 paired-end RNA-seq experiments downloaded from "The Cancer Cell Line Encyclopedia" repository, aiming to identify novel putative cell-line-specific gene fusion events in human malignancies. The bioinformatics analysis was performed by executing four different gene fusion detection algorithms, and the results were further prioritized by running a Bayesian classifier that performs an in silico validation. The collection of fusion events supported by all of the prediction tools yields a robust set of ∼1,700 novel in silico predicted candidates suitable for downstream analyses. Given the large amount of data and information produced, the computational results have been systematized in a database named LiGeA. The database can be browsed through a dynamic and interactive web portal, further integrated with validated data from other well-known repositories. Taking advantage of the intuitive query forms, users can easily access, navigate, filter and select the putative gene fusions for further validation and study, and can also find suitable experimental models for a given fusion of interest. We believe that the LiGeA resource represents not only the first compendium of both known and putative novel gene fusion events in the catalogue of all human malignant cell lines but also a handy starting point for wet-lab biologists who wish to investigate novel cancer biomarkers and specific drug targets.

  18. Considerations in Phase Estimation and Event Location Using Small-aperture Regional Seismic Arrays

    NASA Astrophysics Data System (ADS)

    Gibbons, Steven J.; Kværna, Tormod; Ringdal, Frode

    2010-05-01

    The global monitoring of earthquakes and explosions at decreasing magnitudes necessitates the fully automatic detection, location and classification of an ever increasing number of seismic events. Many seismic stations of the International Monitoring System are small-aperture arrays designed to optimize the detection and measurement of regional phases. Collaboration with operators of mines within regional distances of the ARCES array, together with waveform correlation techniques, has provided an unparalleled opportunity to assess the ability of a small-aperture array to provide robust and accurate direction and slowness estimates for phase arrivals resulting from well-constrained events at sites of repeating seismicity. A significant reason for the inaccuracy of current fully-automatic event location estimates is the use of f-k slowness estimates measured in variable frequency bands. The variability of slowness and azimuth measurements for a given phase from a given source region is reduced by the application of almost any constant frequency band; however, the frequency band resulting in the most stable estimates varies greatly from site to site. Situations are observed in which regional P-arrivals from two sites, far closer together than the theoretical resolution of the array, result in highly distinct populations in slowness space. This means that the f-k estimates, even at relatively low frequencies, can be sensitive to source- and path-specific characteristics of the wavefield and should be treated with caution when inferring a geographical backazimuth under the assumption of a planar wavefront arriving along the great-circle path. Moreover, different frequency bands are associated with different biases, meaning that slowness and azimuth station corrections (commonly denoted SASCs) cannot be calibrated, and should not be used, without reference to the frequency band employed. We demonstrate an example where fully-automatic locations based on a source-region-specific fixed-parameter template are more stable than the corresponding analyst-reviewed estimates. The reason is that the analyst selects a frequency band and analysis window which appear optimal for each event; in this case, the frequency band which produces the most consistent direction estimates has neither the best SNR nor the greatest beam gain, and is therefore unlikely to be chosen by an analyst without calibration data.
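
    The f-k slowness estimation at the heart of this abstract amounts to a grid search: for each candidate slowness vector, delay-and-sum the array traces and keep the vector that maximizes beam power; the backazimuth then follows from the best slowness vector. The station geometry and the synthetic plane wave below are toy assumptions standing in for real array data.

    ```python
    # Hypothetical delay-and-sum f-k / beamforming sketch.
    import numpy as np

    rng = np.random.default_rng(7)
    fs, n_samp, n_sta = 100.0, 512, 9
    coords = rng.uniform(-1.5, 1.5, (n_sta, 2))        # station x, y in km

    # Synthetic plane wave with true slowness s_true (s/km) plus noise.
    s_true = np.array([0.1, 0.05])
    t = np.arange(n_samp) / fs
    wavelet = np.exp(-((t - 2.0) ** 2) / 0.01)
    traces = np.array([np.interp(t - coords[i] @ s_true, t, wavelet)
                       for i in range(n_sta)])
    traces += 0.05 * rng.standard_normal((n_sta, n_samp))

    def beam_power(sx, sy):
        delays = coords @ np.array([sx, sy])           # seconds per station
        beam = np.mean([np.interp(t + delays[i], t, traces[i])
                        for i in range(n_sta)], axis=0)
        return np.sum(beam ** 2)

    grid = np.linspace(-0.3, 0.3, 61)
    P = np.array([[beam_power(sx, sy) for sx in grid] for sy in grid])
    iy, ix = np.unravel_index(P.argmax(), P.shape)
    sx, sy = grid[ix], grid[iy]
    print("slowness:", sx, sy, "backazimuth (deg):",
          (np.degrees(np.arctan2(sx, sy)) + 180.0) % 360.0)
    ```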

  19. A simple and robust classification tree for differentiation between benign and malignant lesions in MR-mammography.

    PubMed

    Baltzer, Pascal A T; Dietzel, Matthias; Kaiser, Werner A

    2013-08-01

    In the face of multiple available diagnostic criteria in MR-mammography (MRM), a practical algorithm for lesion classification is needed. Such an algorithm should be as simple as possible and include only important independent lesion features to differentiate benign from malignant lesions. This investigation aimed to develop a simple classification tree for differential diagnosis in MRM. A total of 1,084 lesions in standardised MRM with subsequent histological verification (648 malignant, 436 benign) were investigated. Seventeen lesion criteria were assessed by 2 readers in consensus. Classification analysis was performed using the chi-squared automatic interaction detection (CHAID) method. Results include the probability for malignancy for every descriptor combination in the classification tree. A classification tree incorporating 5 lesion descriptors with a depth of 3 ramifications (1, root sign; 2, delayed enhancement pattern; 3, border, internal enhancement and oedema) was calculated. Of all 1,084 lesions, 262 (40.4 %) and 106 (24.3 %) could be classified as malignant and benign with an accuracy above 95 %, respectively. Overall diagnostic accuracy was 88.4 %. The classification algorithm reduced the number of categorical descriptors from 17 to 5 (29.4 %), resulting in a high classification accuracy. More than one third of all lesions could be classified with accuracy above 95 %. • A practical algorithm has been developed to classify lesions found in MR-mammography. • A simple decision tree consisting of five criteria reaches high accuracy of 88.4 %. • Unique to this approach, each classification is associated with a diagnostic certainty. • Diagnostic certainty of greater than 95 % is achieved in 34 % of all cases.
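
    The descriptor-tree idea can be sketched with a shallow decision tree whose leaf probabilities play the role of the paper's probability of malignancy per descriptor combination. sklearn's CART is used here since CHAID is not available in sklearn, and the binary-coded descriptors and labels are synthetic placeholders.

    ```python
    # Hypothetical shallow classification tree over five lesion descriptors.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(8)
    names = ["root_sign", "delayed_enh", "border", "internal_enh", "oedema"]
    X = rng.integers(0, 2, size=(1084, 5))     # binary-coded descriptors
    y = rng.integers(0, 2, size=1084)          # 0 = benign, 1 = malignant

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=names))
    proba = tree.predict_proba(X)[:, 1]        # per-lesion P(malignant)
    print("lesions classified with >95% certainty:",
          int(((proba > 0.95) | (proba < 0.05)).sum()))
    ```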

  20. When mental fatigue maybe characterized by Event Related Potential (P300) during virtual wheelchair navigation.

    PubMed

    Lamti, Hachem A; Gorce, Philippe; Ben Khelifa, Mohamed Moncef; Alimi, Adel M

    2016-12-01

    The goal of this study is to investigate the influence of mental fatigue on event-related potential (P300) features (maximum peak, minimum amplitude, latency and period) during virtual wheelchair navigation. For this purpose, an experimental environment was set up based on customizable environmental parameters (luminosity, number of obstacles and obstacle velocities). A correlation study between P300 features and fatigue ratings was conducted. Finally, the best-correlated features were supplied to three classification algorithms: a multilayer perceptron (MLP), linear discriminant analysis (LDA) and a support vector machine (SVM). The results showed that the maximum-peak feature over visual and temporal regions, as well as the period feature over frontal, fronto-central and visual regions, correlated with mental fatigue levels; the minimum-amplitude and latency features, on the other hand, did not show any correlation. Among the classification techniques, the MLP showed the best performance, although the differences between techniques were minimal. These findings can inform the design of wheelchair controls that account for mental fatigue.
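
    The three-way classifier comparison reduces to feeding the same feature vectors to each model under cross-validation, as sketched below. The eight-dimensional features (standing in for peak and period values per scalp region) and the fatigue labels are synthetic placeholders.

    ```python
    # Hypothetical MLP vs LDA vs SVM comparison on P300-style features.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.svm import SVC

    rng = np.random.default_rng(9)
    X = rng.standard_normal((150, 8))      # e.g. peak/period features per region
    y = rng.integers(0, 3, size=150)       # low / medium / high fatigue

    models = [("MLP", MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                                    random_state=0)),
              ("LDA", LinearDiscriminantAnalysis()),
              ("SVM", SVC())]
    for name, clf in models:
        print(name, cross_val_score(clf, X, y, cv=5).mean().round(3))
    ```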
