Sample records for interference classification methods

  1. 7 CFR 27.59 - Postponed classification; interference.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... shall not be allowed to interfere with or delay the classification of other samples previously made... 7 Agriculture 2 2011-01-01 2011-01-01 false Postponed classification; interference. 27.59 Section... CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Postponed...

  2. 7 CFR 27.59 - Postponed classification; interference.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... shall not be allowed to interfere with or delay the classification of other samples previously made... 7 Agriculture 2 2010-01-01 2010-01-01 false Postponed classification; interference. 27.59 Section... CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Postponed...

  3. A new interferential multispectral image compression algorithm based on adaptive classification and curve-fitting

    NASA Astrophysics Data System (ADS)

    Wang, Ke-Yan; Li, Yun-Song; Liu, Kai; Wu, Cheng-Ke

    2008-08-01

    A novel compression algorithm for interferential multispectral images based on adaptive classification and curve-fitting is proposed. The image is first partitioned adaptively into a major-interference region and a minor-interference region, and different approximating functions are then constructed for the two kinds of regions. For the major-interference region, some typical interferential curves are selected to predict the other curves, and these typical curves are then processed by a curve-fitting method. For the minor-interference region, the data of each interferential curve are approximated independently. Finally, the approximation errors of the two regions are entropy coded. The experimental results show that, compared with JPEG2000, the proposed algorithm not only decreases the average output bit-rate by about 0.2 bit/pixel for lossless compression, but also improves the reconstructed images and greatly reduces the spectral distortion, especially at high bit-rates for lossy compression.

  4. A Real-Time Interference Monitoring Technique for GNSS Based on a Twin Support Vector Machine Method.

    PubMed

    Li, Wutao; Huang, Zhigang; Lang, Rongling; Qin, Honglei; Zhou, Kai; Cao, Yongbin

    2016-03-04

    Interferences can severely degrade the performance of Global Navigation Satellite System (GNSS) receivers. As the first step of any GNSS anti-interference measure, interference monitoring is essential. Since interference monitoring can be considered a classification problem, a real-time interference monitoring technique based on the Twin Support Vector Machine (TWSVM) is proposed in this paper. A TWSVM model is established and solved with the Least Squares Twin Support Vector Machine (LSTWSVM) algorithm. The interference monitoring indicators are analyzed to extract features from the interfered GNSS signals. The experimental results show that the chosen observations can be used as interference monitoring indicators. The interference monitoring performance of the proposed method is verified on the GPS L1 C/A code signal and compared with that of the standard SVM. The experimental results indicate that TWSVM-based interference monitoring is much faster than the conventional SVM. Furthermore, the training time of TWSVM is on the millisecond (ms) level and the monitoring time is on the microsecond (μs) level, which makes the proposed approach usable in practical interference monitoring applications.

  5. A Real-Time Interference Monitoring Technique for GNSS Based on a Twin Support Vector Machine Method

    PubMed Central

    Li, Wutao; Huang, Zhigang; Lang, Rongling; Qin, Honglei; Zhou, Kai; Cao, Yongbin

    2016-01-01

    Interferences can severely degrade the performance of Global Navigation Satellite System (GNSS) receivers. As the first step of any GNSS anti-interference measure, interference monitoring is essential. Since interference monitoring can be considered a classification problem, a real-time interference monitoring technique based on the Twin Support Vector Machine (TWSVM) is proposed in this paper. A TWSVM model is established and solved with the Least Squares Twin Support Vector Machine (LSTWSVM) algorithm. The interference monitoring indicators are analyzed to extract features from the interfered GNSS signals. The experimental results show that the chosen observations can be used as interference monitoring indicators. The interference monitoring performance of the proposed method is verified on the GPS L1 C/A code signal and compared with that of the standard SVM. The experimental results indicate that TWSVM-based interference monitoring is much faster than the conventional SVM. Furthermore, the training time of TWSVM is on the millisecond (ms) level and the monitoring time is on the microsecond (μs) level, which makes the proposed approach usable in practical interference monitoring applications. PMID:26959020

  6. Processing of Antenna-Array Signals on the Basis of the Interference Model Including a Rank-Deficient Correlation Matrix

    NASA Astrophysics Data System (ADS)

    Rodionov, A. A.; Turchin, V. I.

    2017-06-01

    We propose a new method of signal processing in antenna arrays, called Maximum-Likelihood Signal Classification. The proposed method is based on a model in which the interference includes a component with a rank-deficient correlation matrix. Using numerical simulation, we show that the proposed method yields a variance of the estimated arrival angle of a plane wave that is close to the Cramér-Rao lower bound, and that it is more efficient than the well-known MUSIC method. It is also shown that the proposed technique can be used efficiently for estimating the time dependence of the useful signal.

  7. Adaptive phase k-means algorithm for waveform classification

    NASA Astrophysics Data System (ADS)

    Song, Chengyun; Liu, Zhining; Wang, Yaojun; Xu, Feng; Li, Xingming; Hu, Guangmin

    2018-01-01

    Waveform classification is a powerful technique for seismic facies analysis that describes the heterogeneity and compartments within a reservoir. Horizon interpretation is a critical step in waveform classification. However, the horizon often produces inconsistent waveform phase, and thus results in unsatisfactory classification. To alleviate this problem, an adaptive phase waveform classification method, called adaptive phase k-means, is introduced in this paper. Our method improves the traditional k-means algorithm by using an adaptive phase distance as the waveform similarity measure. The proposed distance is a measure with variable phase as it moves from sample to sample along the traces. Model traces are also updated with the best-fitting phase in the iterative process. Therefore, our method is robust to phase variations caused by the interpreted horizon. We tested the effectiveness of our algorithm by applying it to synthetic and real data. The satisfactory results reveal that the proposed method tolerates a certain amount of waveform phase variation and is a good tool for seismic facies analysis.

  8. Method of Grassland Information Extraction Based on Multi-Level Segmentation and Cart Model

    NASA Astrophysics Data System (ADS)

    Qiao, Y.; Chen, T.; He, J.; Wen, Q.; Liu, F.; Wang, Z.

    2018-04-01

    It is difficult to extract grassland accurately by traditional classification methods, such as supervised methods based on pixels or objects. This paper proposes a new method combining multi-level segmentation with the CART (classification and regression tree) model. The multi-level segmentation, which combines multi-resolution segmentation and spectral difference segmentation, avoids the over- and under-segmentation seen in single-mode segmentation. The CART model was established from the spectral characteristics and texture features mined from training sample data. Xilinhaote City in the Inner Mongolia Autonomous Region was chosen as the typical study area, and the proposed method was verified using visual interpretation results as approximate ground truth. A comparison with the nearest-neighbor supervised classification method was also performed. The experimental results showed that the total classification precision and the Kappa coefficient of the proposed method were 95% and 0.9, respectively, whereas those of the nearest-neighbor supervised classification method were 80% and 0.56. These results suggest that the classification accuracy of the proposed method is higher than that of the nearest-neighbor supervised classification method. The experiment demonstrated that the proposed method is an effective extraction method for grassland information, which can enhance the boundary of grassland classification and avoid the restriction of grassland distribution scale. The method is also applicable to the extraction of grassland information in other regions with complicated spatial features, where it can effectively avoid interference from woodland, arable land and water bodies.

  9. Sparsity-Based Representation for Classification Algorithms and Comparison Results for Transient Acoustic Signals

    DTIC Science & Technology

    2016-05-01

    ... large but correlated noise and signal interference (i.e., low-rank interference). Another contribution is the implementation of deep learning... Keywords: representation, low rank, deep learning. Approved for public release; distribution... Contents: Classification of Acoustic Transients; 3.2 Joint Sparse Representation with Low-Rank Interference; 3.3 Simultaneous Group-and-Joint Sparse Representation

  10. A new ICA-based fingerprint method for the automatic removal of physiological artifacts from EEG recordings

    PubMed Central

    Tamburro, Gabriella; Fiedler, Patrique; Stone, David; Haueisen, Jens

    2018-01-01

    Background: EEG may be affected by artefacts hindering the analysis of brain signals. Data-driven methods like independent component analysis (ICA) are successful approaches to remove artefacts from the EEG. However, the ICA-based methods developed so far are often affected by limitations, such as: the need for visual inspection of the separated independent components (subjectivity problem) and, in some cases, for the independent and simultaneous recording of the inspected artefacts to identify the artefactual independent components; a potentially heavy manipulation of the EEG signals; the use of linear classification methods; the use of simulated artefacts to validate the methods; no testing in dry electrode or high-density EEG datasets; applications limited to specific conditions and electrode layouts. Methods: Our fingerprint method automatically identifies EEG ICs containing eyeblinks, eye movements, myogenic artefacts and cardiac interference by evaluating 14 temporal, spatial, spectral, and statistical features composing the IC fingerprint. Sixty-two real EEG datasets containing cued artefacts are recorded with wet and dry electrodes (128 wet and 97 dry channels). For each artefact, 10 nonlinear SVM classifiers are trained on fingerprints of expert-classified ICs. Training groups include randomly chosen wet and dry datasets decomposed in 80 ICs. The classifiers are tested on the IC-fingerprints of different datasets decomposed into 20, 50, or 80 ICs. The SVM performance is assessed in terms of accuracy, False Omission Rate (FOR), Hit Rate (HR), False Alarm Rate (FAR), and sensitivity (p). For each artefact, the quality of the artefact-free EEG reconstructed using the classification of the best SVM is assessed by visual inspection and SNR. Results: The best SVM classifier for each artefact type achieved average accuracy of 1 (eyeblink), 0.98 (cardiac interference), and 0.97 (eye movement and myogenic artefact). Average classification sensitivity (p) was 1 (eyeblink), 0.997 (myogenic artefact), 0.98 (eye movement), and 0.48 (cardiac interference). Average artefact reduction ranged from a maximum of 82% for eyeblinks to a minimum of 33% for cardiac interference, depending on the effectiveness of the proposed method and the amplitude of the removed artefact. The performance of the SVM classifiers did not depend on the electrode type, whereas it was better for lower decomposition levels (50 and 20 ICs). Discussion: Apart from cardiac interference, SVM performance and average artefact reduction indicate that the fingerprint method has an excellent overall performance in the automatic detection of eyeblinks, eye movements and myogenic artefacts, which is comparable to that of existing methods. Being also independent from simultaneous artefact recording, electrode number, type and layout, and decomposition level, the proposed fingerprint method can have useful applications in clinical and experimental EEG settings. PMID:29492336

  11. A new ICA-based fingerprint method for the automatic removal of physiological artifacts from EEG recordings.

    PubMed

    Tamburro, Gabriella; Fiedler, Patrique; Stone, David; Haueisen, Jens; Comani, Silvia

    2018-01-01

    EEG may be affected by artefacts hindering the analysis of brain signals. Data-driven methods like independent component analysis (ICA) are successful approaches to remove artefacts from the EEG. However, the ICA-based methods developed so far are often affected by limitations, such as: the need for visual inspection of the separated independent components (subjectivity problem) and, in some cases, for the independent and simultaneous recording of the inspected artefacts to identify the artefactual independent components; a potentially heavy manipulation of the EEG signals; the use of linear classification methods; the use of simulated artefacts to validate the methods; no testing in dry electrode or high-density EEG datasets; applications limited to specific conditions and electrode layouts. Our fingerprint method automatically identifies EEG ICs containing eyeblinks, eye movements, myogenic artefacts and cardiac interference by evaluating 14 temporal, spatial, spectral, and statistical features composing the IC fingerprint. Sixty-two real EEG datasets containing cued artefacts are recorded with wet and dry electrodes (128 wet and 97 dry channels). For each artefact, 10 nonlinear SVM classifiers are trained on fingerprints of expert-classified ICs. Training groups include randomly chosen wet and dry datasets decomposed in 80 ICs. The classifiers are tested on the IC-fingerprints of different datasets decomposed into 20, 50, or 80 ICs. The SVM performance is assessed in terms of accuracy, False Omission Rate (FOR), Hit Rate (HR), False Alarm Rate (FAR), and sensitivity (p). For each artefact, the quality of the artefact-free EEG reconstructed using the classification of the best SVM is assessed by visual inspection and SNR. The best SVM classifier for each artefact type achieved average accuracy of 1 (eyeblink), 0.98 (cardiac interference), and 0.97 (eye movement and myogenic artefact). 
Average classification sensitivity (p) was 1 (eyeblink), 0.997 (myogenic artefact), 0.98 (eye movement), and 0.48 (cardiac interference). Average artefact reduction ranged from a maximum of 82% for eyeblinks to a minimum of 33% for cardiac interference, depending on the effectiveness of the proposed method and the amplitude of the removed artefact. The performance of the SVM classifiers did not depend on the electrode type, whereas it was better for lower decomposition levels (50 and 20 ICs). Apart from cardiac interference, SVM performance and average artefact reduction indicate that the fingerprint method has an excellent overall performance in the automatic detection of eyeblinks, eye movements and myogenic artefacts, which is comparable to that of existing methods. Being also independent from simultaneous artefact recording, electrode number, type and layout, and decomposition level, the proposed fingerprint method can have useful applications in clinical and experimental EEG settings.

  12. Virtual Sensor of Surface Electromyography in a New Extensive Fault-Tolerant Classification System.

    PubMed

    de Moura, Karina de O A; Balbinot, Alexandre

    2018-05-01

    A few prosthetic control systems in the scientific literature use pattern recognition algorithms that adapt to the changes that occur in the myoelectric signal over time and, frequently, such systems are not natural and intuitive. These are some of the several challenges for myoelectric prostheses for everyday use. The concept of the virtual sensor, whose fundamental objective is to estimate unavailable measures based on other available measures, is already used in other fields of research. The virtual sensor technique applied to surface electromyography can help to minimize these problems, which are typically related to degradation of the myoelectric signal that usually leads to a decrease in the classification accuracy of the movements characterized by computational intelligent systems. This paper presents a virtual sensor in a new extensive fault-tolerant classification system that maintains the classification accuracy after the occurrence of the following contaminants: ECG interference, electrode displacement, movement artifacts, power line interference, and saturation. The Time-Varying Autoregressive Moving Average (TVARMA) and Time-Varying Kalman filter (TVK) models are compared to define the most robust model for the virtual sensor. Movement classification results are presented comparing the usual classification techniques with the method of degraded-signal replacement and classifier retraining. The experimental results were evaluated for these five noise types in 16 surface electromyography (sEMG) channel degradation case studies. Without classifier retraining, the proposed system recovered between 4% and 38% of mean classification accuracy for electrode displacement, movement artifacts, and saturation noise. The best mean classification across all signal contaminants and channel combinations evaluated was obtained with the retraining method, replacing the degraded channel by the virtual sensor TVARMA model. This method recovered the classification accuracy after the degradations, reaching an average of 5.7% below the classification of the clean signal, that is, the signal without the contaminants. Moreover, the proposed intelligent technique minimizes the impact on motion classification caused by signal contamination related to degrading events over time. The virtual sensor model and the algorithm optimization need further development to broaden the clinical application of myoelectric prostheses, but the approach already presents robust results that enable research with virtual sensors on biological signals with stochastic behavior.

  13. Virtual Sensor of Surface Electromyography in a New Extensive Fault-Tolerant Classification System

    PubMed Central

    Balbinot, Alexandre

    2018-01-01

    A few prosthetic control systems in the scientific literature use pattern recognition algorithms that adapt to the changes that occur in the myoelectric signal over time and, frequently, such systems are not natural and intuitive. These are some of the several challenges for myoelectric prostheses for everyday use. The concept of the virtual sensor, whose fundamental objective is to estimate unavailable measures based on other available measures, is already used in other fields of research. The virtual sensor technique applied to surface electromyography can help to minimize these problems, which are typically related to degradation of the myoelectric signal that usually leads to a decrease in the classification accuracy of the movements characterized by computational intelligent systems. This paper presents a virtual sensor in a new extensive fault-tolerant classification system that maintains the classification accuracy after the occurrence of the following contaminants: ECG interference, electrode displacement, movement artifacts, power line interference, and saturation. The Time-Varying Autoregressive Moving Average (TVARMA) and Time-Varying Kalman filter (TVK) models are compared to define the most robust model for the virtual sensor. Movement classification results are presented comparing the usual classification techniques with the method of degraded-signal replacement and classifier retraining. The experimental results were evaluated for these five noise types in 16 surface electromyography (sEMG) channel degradation case studies. Without classifier retraining, the proposed system recovered between 4% and 38% of mean classification accuracy for electrode displacement, movement artifacts, and saturation noise. The best mean classification across all signal contaminants and channel combinations evaluated was obtained with the retraining method, replacing the degraded channel by the virtual sensor TVARMA model. This method recovered the classification accuracy after the degradations, reaching an average of 5.7% below the classification of the clean signal, that is, the signal without the contaminants. Moreover, the proposed intelligent technique minimizes the impact on motion classification caused by signal contamination related to degrading events over time. The virtual sensor model and the algorithm optimization need further development to broaden the clinical application of myoelectric prostheses, but the approach already presents robust results that enable research with virtual sensors on biological signals with stochastic behavior. PMID:29723994

  14. Bacterial cell identification in differential interference contrast microscopy images.

    PubMed

    Obara, Boguslaw; Roberts, Mark A J; Armitage, Judith P; Grau, Vicente

    2013-04-23

    Microscopy image segmentation lays the foundation for shape analysis, motion tracking, and classification of biological objects. Despite its importance, automated segmentation remains challenging for several widely used non-fluorescence, interference-based microscopy imaging modalities, one example being differential interference contrast (DIC) microscopy, which plays an important role in modern bacterial cell biology. New advances in the field therefore require tools, technologies and workflows that extract and exploit information from interference-based imaging data so as to achieve new fundamental biological insights and understanding. We have developed and evaluated a high-throughput image analysis and processing approach to detect and characterize bacterial cells and chemotaxis proteins. Its performance was evaluated using differential interference contrast and fluorescence microscopy images of Rhodobacter sphaeroides. The results demonstrate that the proposed approach provides a fast and robust method for detecting bacterial cells and analysing the spatial relationship between them and their chemotaxis proteins.

  15. Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System.

    PubMed

    Li, Hongqiang; Yuan, Danyang; Wang, Youxi; Cui, Dianyin; Cao, Lu

    2016-10-20

    Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias.

  16. Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System

    PubMed Central

    Li, Hongqiang; Yuan, Danyang; Wang, Youxi; Cui, Dianyin; Cao, Lu

    2016-01-01

    Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias. PMID:27775596

  17. Combined target factor analysis and Bayesian soft-classification of interference-contaminated samples: forensic fire debris analysis.

    PubMed

    Williams, Mary R; Sigman, Michael E; Lewis, Jennifer; Pitan, Kelly McHugh

    2012-10-10

    A Bayesian soft-classification method combined with target factor analysis (TFA) is described and tested for the analysis of fire debris data. The method relies on analysis of the average mass spectrum across the chromatographic profile (i.e., the total ion spectrum, TIS) from multiple samples taken from a single fire scene. A library of TIS from reference ignitable liquids with assigned ASTM classifications is used as the target factors in TFA. The class-conditional distributions of correlations between the target and predicted factors for each ASTM class are represented by kernel functions and analyzed with Bayesian decision theory. The soft-classification approach assists in assessing the probability that ignitable liquid residue from a specific ASTM E1618 class is present in a set of samples from a single fire scene, even in the presence of unspecified background contributions from pyrolysis products. The method is demonstrated with sample data sets and then tested on laboratory-scale burn data and large-scale field test burns. The overall performance achieved in laboratory and field tests is approximately 80% correct classification of fire debris samples. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  18. Dialect Interference in Lexical Processing: Effects of Familiarity and Social Stereotypes.

    PubMed

    Clopper, Cynthia

    2017-01-01

    The current study explored the roles of dialect familiarity and social stereotypes in dialect interference effects in a speeded lexical classification task. Listeners classified the words bad and bed or had and head produced by local Midland and non-local Northern talkers, and the words sod and side or rod and ride produced by non-local, non-stereotyped Northern and non-local, stereotyped Southern talkers, in single- and mixed-talker blocks. Lexical classification was better for the local dialect than for the non-local dialects, and for the stereotyped non-local dialect than for the non-stereotyped non-local dialect. Dialect interference effects were observed for all three dialects, although the patterns of interference differed: for the local dialect, interference was observed in response times, whereas for the non-local dialects it was observed primarily in accuracy. These findings reveal complex interactions between indexical and lexical information in speech processing. © 2016 S. Karger AG, Basel.

  19. A Robust Real Time Direction-of-Arrival Estimation Method for Sequential Movement Events of Vehicles.

    PubMed

    Liu, Huawei; Li, Baoqing; Yuan, Xiaobing; Zhou, Qianwei; Huang, Jingchang

    2018-03-27

    Parameter estimation for sequential movement events of vehicles faces the challenges of noise interference and the demands of portable implementation. In this paper, we propose a robust direction-of-arrival (DOA) estimation method for sequential movement events of vehicles based on a small Micro-Electro-Mechanical System (MEMS) microphone array. Inspired by the incoherent signal-subspace method (ISM), the proposed method employs multiple sub-bands, selected from the wideband signals for their high magnitude-squared coherence, to track moving vehicles in the presence of wind noise. The field test results demonstrate that the proposed method estimates the DOA of a moving vehicle better, even under severe wind interference, than the narrowband multiple signal classification (MUSIC) method, the sub-band DOA estimation method, and the classical two-sided correlation transformation (TCT) method.

  20. Quasiparticle Interference Studies of Quantum Materials.

    PubMed

    Avraham, Nurit; Reiner, Jonathan; Kumar-Nayak, Abhay; Morali, Noam; Batabyal, Rajib; Yan, Binghai; Beidenkopf, Haim

    2018-06-03

    Exotic electronic states are realized in novel quantum materials. This field is revolutionized by the topological classification of materials. Such compounds necessarily host unique states on their boundaries. Scanning tunneling microscopy studies of these surface states have provided a wealth of spectroscopic characterization, with the successful cooperation of ab initio calculations. The method of quasiparticle interference imaging proves to be particularly useful for probing the dispersion relation of the surface bands. Herein, how a variety of additional fundamental electronic properties can be probed via this method is reviewed. It is demonstrated how quasiparticle interference measurements entail mesoscopic size quantization and the electronic phase coherence in semiconducting nanowires; helical spin protection and energy-momentum fluctuations in a topological insulator; and the structure of the Bloch wave function and the relative insusceptibility of topological electronic states to surface potential in a topological Weyl semimetal. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. [Classification of Children with Attention-Deficit/Hyperactivity Disorder and Typically Developing Children Based on Electroencephalogram Principal Component Analysis and k-Nearest Neighbor].

    PubMed

    Yang, Jiaojiao; Guo, Qian; Li, Wenjie; Wang, Suhong; Zou, Ling

    2016-04-01

This paper aims to assist the individual clinical diagnosis of children with attention-deficit/hyperactivity disorder using an electroencephalogram signal detection method. First, in our experiments, we obtained and studied the electroencephalogram signals from fourteen children with attention-deficit/hyperactivity disorder and sixteen typically developing children during the classic interference control task of the Simon-spatial Stroop, and we completed electroencephalogram data preprocessing, including filtering, segmentation, removal of artifacts, and so on. Second, we selected a subset of electroencephalogram electrodes using the principal component analysis (PCA) method, and we collected the common channels among the optimal electrodes whose occurrence rates were more than 90% for each kind of stimulation. We then extracted the mean amplitude features in the 200-450 ms latency window for the common electrodes. Finally, we used the k-nearest neighbor (KNN) classifier based on Euclidean distance and the support vector machine (SVM) classifier based on a radial basis kernel function for classification. In the experiment, on the same interference control task, the children with attention-deficit/hyperactivity disorder showed lower correct response rates and longer reaction times. The N2 emerged in the prefrontal cortex while the P2 presented in the inferior parietal area for all kinds of stimuli. Meanwhile, the children with attention-deficit/hyperactivity disorder exhibited markedly reduced N2 and P2 amplitudes compared to typically developing children. KNN resulted in better classification accuracy than the SVM classifier, and the best classification rate was 89.29% in the StI task. The results showed that the electroencephalogram signals differed in the prefrontal cortex and inferior parietal cortex between children with attention-deficit/hyperactivity disorder and typically developing children during the interference control task, which provides a scientific basis for the clinical diagnosis of individuals with attention-deficit/hyperactivity disorder.
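The final step of the pipeline above (mean-amplitude features fed to a Euclidean-distance KNN and an RBF SVM) can be sketched as follows. The data here are synthetic stand-ins: real use assumes feature extraction has already produced one row of mean 200-450 ms amplitudes per trial for the selected common electrodes.

```python
# Sketch of the classification stage: KNN (Euclidean) vs. SVM (RBF kernel)
# on per-trial mean-amplitude features. Trial counts and the 8-electrode
# feature dimension are illustrative assumptions, not the paper's values.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_electrodes = 60, 8                   # hypothetical common channels
X = rng.normal(size=(n_trials, n_electrodes))    # mean 200-450 ms amplitudes
y = rng.integers(0, 2, size=n_trials)            # 0 = TD, 1 = ADHD (labels)

knn = KNeighborsClassifier(n_neighbors=3, metric="euclidean")
svm = SVC(kernel="rbf")
print("KNN CV accuracy:", cross_val_score(knn, X, y, cv=5).mean())
print("SVM CV accuracy:", cross_val_score(svm, X, y, cv=5).mean())
```

With random features both classifiers hover near chance; the point is only the evaluation scaffolding, not the accuracy values.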

  2. 76 FR 44489 - Medical Devices; Neurological Devices; Classification of Repetitive Transcranial Magnetic...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-26

...; Hazards caused by electromagnetic interference and electrostatic discharge hazards; and Hearing loss. FDA... Electromagnetic compatibility. Electromagnetic interference and electrostatic discharge hazards. Labeling. Hearing...

  3. Classification of Partial Discharge Signals by Combining Adaptive Local Iterative Filtering and Entropy Features

    PubMed Central

    Morison, Gordon; Boreham, Philip

    2018-01-01

Electromagnetic Interference (EMI) is a technique for capturing Partial Discharge (PD) signals in High-Voltage (HV) power plant apparatus. EMI signals can be non-stationary, which makes their analysis difficult, particularly for pattern recognition applications. This paper elaborates upon a previously developed software condition-monitoring model for improved EMI event classification based on time-frequency signal decomposition and entropy features. The idea of the proposed method is to map multiple discharge source signals captured by EMI and labelled by experts, including PD, from the time domain to a feature space, which aids in the interpretation of subsequent fault information. Here, instead of using only one permutation entropy measure, a more robust measure, called Dispersion Entropy (DE), is added to the feature vector. Multi-Class Support Vector Machine (MCSVM) methods are utilized for classification of the different discharge sources. Results show an improved classification accuracy compared to previously proposed methods. This yields a successful development of an expert-knowledge-based intelligent system. Since this method is demonstrated to be successful with real field data, it brings the benefit of possible real-world application for EMI condition monitoring. PMID:29385030

  4. An improved SRC method based on virtual samples for face recognition

    NASA Astrophysics Data System (ADS)

    Fu, Lijun; Chen, Deyun; Lin, Kezheng; Li, Ao

    2018-07-01

The sparse representation classifier (SRC) performs classification by evaluating which class leads to the minimum representation error. However, in the real world the number of available training samples is limited, and due to noise interference the training samples cannot accurately represent the test sample linearly. Therefore, in this paper, we first produce virtual samples by exploiting the original training samples, with the aim of increasing the number of training samples. Then, we take the intra-class difference as a data representation of partial noise, and utilize the intra-class differences and training samples simultaneously to represent the test sample linearly according to the theory of the SRC algorithm. Using weighted score-level fusion, the respective representation scores of the virtual samples and the original training samples are fused to obtain the final classification results. The experimental results on multiple face databases show that our proposed method has a very satisfactory classification performance.
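The core SRC idea, assigning the class whose training samples represent the test sample with minimum residual, can be sketched as below. Two simplifications are assumed and labeled: per-class least squares stands in for SRC's L1-sparse coding, and "virtual samples" are formed as within-class pairwise means, since the paper's exact virtual-sample construction is not given in the abstract.

```python
# Residual-based classification sketch in the spirit of SRC.
# Assumptions: per-class least squares instead of L1 minimization;
# virtual samples = pairwise means of each class's columns (hypothetical).
import numpy as np

def class_residual(train, test):
    """Residual norm of representing `test` by the columns of `train`."""
    coef, *_ = np.linalg.lstsq(train, test, rcond=None)
    return np.linalg.norm(test - train @ coef)

def classify(train_by_class, test):
    return min(train_by_class, key=lambda c: class_residual(train_by_class[c], test))

rng = np.random.default_rng(1)
dim = 20
classes = {0: rng.normal(0.0, 1.0, (dim, 5)),
           1: rng.normal(5.0, 1.0, (dim, 5))}
# Augment each class with virtual samples (pairwise means of its columns).
for c, A in classes.items():
    virtual = (A[:, :-1] + A[:, 1:]) / 2
    classes[c] = np.hstack([A, virtual])

test = rng.normal(5.0, 1.0, dim)   # drawn near class 1
print(classify(classes, test))
```

The virtual samples enlarge each class's representation subspace, which is the motivation the abstract gives for generating them.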

  5. Effect of the atmosphere on the classification of LANDSAT data. [Identifying sugar canes in Brazil]

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Morimoto, T.; Kumar, R.; Molion, L. C. B.

    1979-01-01

The author has identified the following significant results. In conjunction with Turner's model for the correction of satellite data for atmospheric interference, the LOWTRAN-3 computer program was used to calculate the atmospheric interference. Use of the program improved the contrast between different natural targets in the MSS LANDSAT data of Brasilia, Brazil. The classification accuracy for sugar cane was improved by about 9% in the multispectral data of Ribeirao Preto, Sao Paulo.

  6. Research on Remote Sensing Geological Information Extraction Based on Object Oriented Classification

    NASA Astrophysics Data System (ADS)

    Gao, Hui

    2018-04-01

Northern Tibet belongs to the sub-cold arid climate zone of the plateau. It is rarely visited by people, and geological working conditions are very poor. However, stratum exposures are good and human interference is minimal. Therefore, research on the automatic classification and extraction of remote sensing geological information there has typical significance and good application prospects. Based on object-oriented classification in northern Tibet, using Worldview2 high-resolution remote sensing data combined with tectonic information and image enhancement, the lithological spectral features, shape features, spatial locations, and topological relations of various geological information were mined. By setting thresholds within a hierarchical classification, eight kinds of geological information were classified and extracted. Accuracy analysis against existing geological maps shows that the overall accuracy reached 87.8561%, indicating that the object-oriented classification method is effective and feasible for this study area and provides a new idea for the automatic extraction of remote sensing geological information.

  7. Micro-Raman spectroscopy of natural and synthetic indigo samples.

    PubMed

    Vandenabeele, Peter; Moens, Luc

    2003-02-01

In this work, indigo samples from three different sources are studied using Raman spectroscopy: the synthetic pigment and pigments from woad (Isatis tinctoria) and the indigo plant (Indigofera tinctoria). Twenty-one samples were obtained from 8 suppliers; for each sample, 5 Raman spectra were recorded and used for further chemometric analysis. Principal components analysis (PCA) was performed as a data reduction method before applying hierarchical cluster analysis. Linear discriminant analysis (LDA) was implemented as a non-hierarchical supervised pattern recognition method to build a classification model. To avoid broad-shaped interferences from the fluorescence background, the influence of 1st and 2nd derivatives on the classification was studied using cross-validation. It is shown that, although the pigments are chemically identical, Raman spectroscopy in combination with suitable chemometric methods has the potential to discriminate between synthetic and natural indigo samples.
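The chemometric pipeline described above (derivative preprocessing to suppress the broad fluorescence background, PCA for data reduction, then a supervised discriminant classifier) can be sketched as follows. The spectra are synthetic Gaussian bands on sloping baselines; band positions, noise levels, and the number of PCA components are illustrative assumptions.

```python
# First-derivative spectra -> PCA -> LDA, evaluated by cross-validation.
# Synthetic data: a Raman-like band on a random sloping "fluorescence" baseline.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
axis = np.linspace(0, 1, 200)

def spectrum(center):
    baseline = rng.uniform(0, 5) * axis                      # fluorescence slope
    band = np.exp(-((axis - center) ** 2) / 0.002)           # Raman-like band
    return baseline + band + rng.normal(0, 0.02, 200)

X = np.array([spectrum(0.40) for _ in range(30)] +
             [spectrum(0.45) for _ in range(30)])
y = np.array([0] * 30 + [1] * 30)                            # e.g. natural vs synthetic

pipe = make_pipeline(
    FunctionTransformer(lambda S: np.gradient(S, axis=1)),   # 1st derivative
    PCA(n_components=10),                                    # data reduction
    LinearDiscriminantAnalysis(),                            # supervised classifier
)
print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```

The derivative step removes the (approximately linear) baseline exactly as the abstract motivates, so the classifier sees only band-shape differences.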

  8. Weak scratch detection and defect classification methods for a large-aperture optical element

    NASA Astrophysics Data System (ADS)

    Tao, Xian; Xu, De; Zhang, Zheng-Tao; Zhang, Feng; Liu, Xi-Long; Zhang, Da-Peng

    2017-03-01

Surface defects on optics cause failure of the optic and heavy losses to the optical system. Therefore, surface defects on optics must be carefully inspected. This paper proposes a coarse-to-fine detection strategy for weak scratches in complicated dark-field images. First, all possible scratches are detected based on bionic vision. Then, each possible scratch is precisely positioned and connected into a complete scratch using the LSD and a priori knowledge. Finally, multiple scratches of various types can be detected in dark-field images. To distinguish defects from pollutants, a classification method based on GIST features is proposed. This paper uses many real dark-field images as experimental images. The results show that this method can detect multiple types of weak scratches in complex images and that defects can be correctly distinguished despite interference. This method satisfies the real-time and accuracy requirements for surface defect detection.

  9. A Retrospective Examination of Feline Leukemia Subgroup Characterization: Viral Interference Assays to Deep Sequencing.

    PubMed

    Chiu, Elliott S; Hoover, Edward A; VandeWoude, Sue

    2018-01-10

    Feline leukemia virus (FeLV) was the first feline retrovirus discovered, and is associated with multiple fatal disease syndromes in cats, including lymphoma. The original research conducted on FeLV employed classical virological techniques. As methods have evolved to allow FeLV genetic characterization, investigators have continued to unravel the molecular pathology associated with this fascinating agent. In this review, we discuss how FeLV classification, transmission, and disease-inducing potential have been defined sequentially by viral interference assays, Sanger sequencing, PCR, and next-generation sequencing. In particular, we highlight the influences of endogenous FeLV and host genetics that represent FeLV research opportunities on the near horizon.

  10. Memory Reactivation Predicts Resistance to Retroactive Interference: Evidence from Multivariate Classification and Pattern Similarity Analyses

    PubMed Central

    Rugg, Michael D.

    2016-01-01

Memory reactivation—the reinstatement of processes and representations engaged when an event is initially experienced—is believed to play an important role in strengthening and updating episodic memory. The present study examines how memory reactivation during a potentially interfering event influences memory for a previously experienced event. Participants underwent fMRI during the encoding phase of an AB/AC interference task in which some words were presented twice in association with two different encoding tasks (AB and AC trials) and other words were presented once (DE trials). The later memory test required retrieval of the encoding tasks associated with each of the study words. Retroactive interference was evident for the AB encoding task and was particularly strong when the AC encoding task was remembered rather than forgotten. We used multivariate classification and pattern similarity analysis (PSA) to measure reactivation of the AB encoding task during AC trials. The results demonstrated that reactivation of generic task information measured with multivariate classification predicted subsequent memory for the AB encoding task regardless of whether interference was strong or weak (trials for which the AC encoding task was remembered or forgotten, respectively). In contrast, reactivation of neural patterns idiosyncratic to a given AB trial measured with PSA only predicted memory when the strength of interference was low. These results suggest that reactivation of features of an initial experience shared across numerous events in the same category, but not features idiosyncratic to a particular event, is important in resisting retroactive interference caused by new learning. SIGNIFICANCE STATEMENT Reactivating a previously encoded memory is believed to provide an opportunity to strengthen the memory, but also to return the memory to a labile state, making it susceptible to interference. However, there is debate as to how memory reactivation elicited by a potentially interfering event influences subsequent retrieval of the memory. The findings of the current study indicate that reactivating features idiosyncratic to a particular experience during interference only influences subsequent memory when interference is relatively weak. Critically, reactivation of generic contextual information predicts subsequent source memory when retroactive interference is either strong or weak. The results indicate that reactivation of generic information about a prior episode mitigates forgetting due to retroactive interference. PMID:27076433

  11. Forms of History Textbook Interference

    ERIC Educational Resources Information Center

    Blaauw, Jan

    2017-01-01

    History textbooks in contemporary democracies have often been exposed to censorship and other forms of interference. This article presents the idea of a classification of these forms as a novel way to contemplate the ambivalent relationship between democratic authority and historical instruction. The model primarily distinguishes official forms of…

  12. Differential development of retroactive and proactive interference during post-learning wakefulness.

    PubMed

    Brawn, Timothy P; Nusbaum, Howard C; Margoliash, Daniel

    2018-07-01

    Newly encoded, labile memories are prone to disruption during post-learning wakefulness. Here we examine the contributions of retroactive and proactive interference to daytime forgetting on an auditory classification task in a songbird. While both types of interference impair performance, they do not develop concurrently. The retroactive interference of task-B on task-A developed during the learning of task-B, whereas the proactive interference of task-A on task-B emerged during subsequent waking retention. These different time courses indicate an asymmetry in the emergence of retroactive and proactive interference and suggest a mechanistic framework for how different types of interference between new memories develop. © 2018 Brawn et al.; Published by Cold Spring Harbor Laboratory Press.

  13. Resolving Semantic Interference during Word Production Requires Central Attention

    ERIC Educational Resources Information Center

    Kleinman, Daniel

    2013-01-01

    The semantic picture-word interference task has been used to diagnose how speakers resolve competition while selecting words for production. The attentional demands of this resolution process were assessed in 2 dual-task experiments (tone classification followed by picture naming). In Experiment 1, when pictures and distractor words were presented…

  14. CBM Resources/reserves classification and evaluation based on PRMS rules

    NASA Astrophysics Data System (ADS)

    Fa, Guifang; Yuan, Ruie; Wang, Zuoqian; Lan, Jun; Zhao, Jian; Xia, Mingjun; Cai, Dechao; Yi, Yanjing

    2018-02-01

This paper introduces a set of definitions and classification requirements for coalbed methane (CBM) resources/reserves based on the Petroleum Resources Management System (PRMS). The basic CBM classification criteria for 1P, 2P, 3P, and contingent resources are put forward from the following aspects: ownership, project maturity, drilling requirements, testing requirements, economic requirements, infrastructure and market, timing of production and development, and so on. The volumetric method is used to evaluate the OGIP, focusing on analyses of key parameters and the principles of parameter selection, such as net thickness, ash and water content, coal rank and composition, coal density, cleat volume and saturation, and absorbed gas content. A dynamic method is used to assess the reserves and recovery efficiency. Because differences in rock and fluid properties, displacement mechanism, completion and operating practices, and wellbore type result in different production curve characteristics, the factors affecting production behavior, the dewatering period, pressure build-up, and interference effects were analyzed. The conclusions and results of this paper can be used as important references for the reasonable assessment of CBM resources/reserves.
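The volumetric OGIP evaluation mentioned above follows the standard relation for adsorbed CBM gas, G = A · h · ρ_coal · Gc (drainage area, net coal thickness, in-situ coal density, adsorbed gas content). A minimal sketch, with illustrative input values that are not from the paper:

```python
# Volumetric gas-in-place sketch for CBM: G = A * h * rho_coal * Gc.
# All input values below are illustrative placeholders.
def cbm_gas_in_place(area_m2, net_thickness_m, coal_density_t_per_m3,
                     gas_content_m3_per_t):
    """Original gas in place (m^3) from the standard volumetric relation."""
    return area_m2 * net_thickness_m * coal_density_t_per_m3 * gas_content_m3_per_t

G = cbm_gas_in_place(area_m2=1.0e6,            # 1 km^2 drainage area
                     net_thickness_m=8.0,       # net coal thickness
                     coal_density_t_per_m3=1.45,
                     gas_content_m3_per_t=12.0) # adsorbed gas content
print(f"{G:.3e} m^3")   # 1.392e+08 m^3
```

The abstract's emphasis on ash/water content and coal density corresponds to correcting ρ_coal and Gc before they enter this product.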

  15. Determination of 18 kinds of trace impurities in the vanadium battery grade vanadyl sulfate by ICP-OES

    NASA Astrophysics Data System (ADS)

    Yong, Cheng

    2018-03-01

A method for the direct determination of 18 trace impurities in vanadium-battery-grade vanadyl sulfate by inductively coupled plasma atomic emission spectrometry (ICP-OES) was established; the detection ranges are 0.001% to 0.100% for Fe, Cr, Ni, Cu, Mn, Mo, Pb, As, Co, P, Ti, and Zn, and 0.005% to 0.100% for K, Na, Ca, Mg, Si, and Al. The influence of matrix effects, spectral interferences, and continuum background superposition in a coexistence system of high concentrations of vanadium ions and sulfate was studied, with the following conclusions: sulfate at this concentration had no effect on the determination, but the matrix effects and continuous background superposition generated by the high concentration of vanadium ions produced a negative interference on the determination of potassium and sodium and a positive interference on the determination of iron and the other impurity elements; the impact of the high-vanadium matrix was therefore eliminated by matrix matching combined with synchronous background correction. Through spectral interference tests, the paper classified and summarized the spectral interferences from the vanadium matrix and between the impurity elements, and the analytical lines, background correction regions, and working parameters of the spectrometer were all optimized. The technical performance of the method is as follows: background equivalent concentration from -0.0003% (Na) to 0.0004% (Cu); element detection limits of 0.0001% to 0.0003%; RSD < 10% for element contents from 0.001% to 0.007%, and RSD < 20% even for contents from 0.0001% to 0.001%, which is beyond the detection scope of the method; recoveries were 91.0% to 110.0%.

  16. Efficient source separation algorithms for acoustic fall detection using a microsoft kinect.

    PubMed

    Li, Yun; Ho, K C; Popescu, Mihail

    2014-03-01

Falls have become a common health problem among older adults. In a previous study, we proposed an acoustic fall detection system (acoustic FADE) that employed a microphone array and beamforming to provide automatic fall detection. However, the previous acoustic FADE had difficulties detecting the fall signal in environments where interference comes from the fall direction, where the number of interferences exceeds FADE's ability to handle, or where a fall is occluded. To address these issues, in this paper we propose two blind source separation (BSS) methods for extracting the fall signal out of the interferences to improve the fall classification task. We first propose single-channel BSS using nonnegative matrix factorization (NMF) to automatically decompose the mixture into a linear combination of several basis components. Based on the distinct patterns of the bases of falls, we identify them efficiently and then construct the interference-free fall signal. Next, we extend the single-channel BSS to the multichannel case through a joint NMF over all channels followed by a delay-and-sum beamformer for additional ambient noise reduction. In our experiments, we used the Microsoft Kinect to collect the acoustic data in real-home environments. The results show that in environments with high interference and background noise levels, the fall detection performance is significantly improved using the proposed BSS approaches.
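The single-channel NMF decomposition described above can be sketched as follows: factor a magnitude spectrogram into nonnegative bases and activations, identify the components resembling a fall pattern, and reconstruct only those. The spectrogram, the fall template, and the similarity-based basis selection are all illustrative assumptions; the paper's actual basis-identification rule is not given in the abstract.

```python
# Single-channel source separation sketch via NMF on a magnitude spectrogram.
# V is a random stand-in; a real system would use STFT magnitudes.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(3)
n_freq, n_frames = 64, 100
V = rng.random((n_freq, n_frames))          # stand-in magnitude spectrogram

model = NMF(n_components=4, init="random", random_state=0, max_iter=500)
W = model.fit_transform(V)                  # spectral bases   (n_freq x k)
H = model.components_                       # activations      (k x n_frames)

fall_template = rng.random(n_freq)          # hypothetical fall spectral pattern
# Cosine similarity of each basis to the template.
sims = (W.T @ fall_template) / (np.linalg.norm(W, axis=0)
                                * np.linalg.norm(fall_template))
fall_idx = np.argsort(sims)[-2:]            # keep the two best-matching bases
V_fall = W[:, fall_idx] @ H[fall_idx]       # "interference-free" reconstruction
print(V_fall.shape)                         # (64, 100)
```

The multichannel extension in the abstract would run a joint factorization across channels before the delay-and-sum stage.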

  17. Implicit and Explicit Number-Space Associations Differentially Relate to Interference Control in Young Adults With ADHD

    PubMed Central

    Georges, Carrie; Hoffmann, Danielle; Schiltz, Christine

    2018-01-01

    Behavioral evidence for the link between numerical and spatial representations comes from the spatial-numerical association of response codes (SNARC) effect, consisting in faster reaction times to small/large numbers with the left/right hand respectively. The SNARC effect is, however, characterized by considerable intra- and inter-individual variability. It depends not only on the explicit or implicit nature of the numerical task, but also relates to interference control. To determine whether the prevalence of the latter relation in the elderly could be ascribed to younger individuals’ ceiling performances on executive control tasks, we determined whether the SNARC effect related to Stroop and/or Flanker effects in 26 young adults with ADHD. We observed a divergent pattern of correlation depending on the type of numerical task used to assess the SNARC effect and the type of interference control measure involved in number-space associations. Namely, stronger number-space associations during parity judgments involving implicit magnitude processing related to weaker interference control in the Stroop but not Flanker task. Conversely, stronger number-space associations during explicit magnitude classifications tended to be associated with better interference control in the Flanker but not Stroop paradigm. The association of stronger parity and magnitude SNARC effects with weaker and better interference control respectively indicates that different mechanisms underlie these relations. Activation of the magnitude-associated spatial code is irrelevant and potentially interferes with parity judgments, but in contrast assists explicit magnitude classifications. Altogether, the present study confirms the contribution of interference control to number-space associations also in young adults. 
It suggests that magnitude-associated spatial codes in implicit and explicit tasks are monitored by different interference control mechanisms, thereby explaining task-related intra-individual differences in number-space associations. PMID:29881363

  18. Cognitive-Behavioral Classifications of Chronic Pain in Patients with Multiple Sclerosis

    ERIC Educational Resources Information Center

    Khan, Fary; Pallant, Julie F.; Amatya, Bhasker; Young, Kevin; Gibson, Steven

    2011-01-01

    The aim of this study was to replicate, in patients with multiple sclerosis (MS), the three-cluster cognitive-behavioral classification proposed by Turk and Rudy. Sixty-two patients attending a tertiary MS rehabilitation center completed the Pain Impact Rating questionnaire measuring activity interference, pain intensity, social support, and…

  19. Conditions for quantum interference in cognitive sciences.

    PubMed

    Yukalov, Vyacheslav I; Sornette, Didier

    2014-01-01

    We present a general classification of the conditions under which cognitive science, concerned, e.g. with decision making, requires the use of quantum theoretical notions. The analysis is done in the frame of the mathematical approach based on the theory of quantum measurements. We stress that quantum effects in cognition can arise only when decisions are made under uncertainty. Conditions for the appearance of quantum interference in cognitive sciences and the conditions when interference cannot arise are formulated. Copyright © 2013 Cognitive Science Society, Inc.

  20. LISA Framework for Enhancing Gravitational Wave Signal Extraction Techniques

    NASA Technical Reports Server (NTRS)

    Thompson, David E.; Thirumalainambi, Rajkumar

    2006-01-01

This paper describes the development of a Framework for benchmarking and comparing signal-extraction and noise-interference-removal methods that are applicable to interferometric Gravitational Wave detector systems. The primary use is towards comparing signal and noise extraction techniques at LISA frequencies from multiple (possibly confused) gravitational wave sources. The Framework includes extensive hybrid learning/classification algorithms, as well as post-processing regularization methods, and is based on a unique plug-and-play (component) architecture. Published methods for signal extraction and interference removal at LISA frequencies are being encoded, as well as multiple source noise models, so that the stiffness of GW Sensitivity Space can be explored under each combination of methods. Furthermore, synthetic datasets and source models can be created and imported into the Framework, and specific degraded numerical experiments can be run to test the flexibility of the analysis methods. The Framework also supports use of full current LISA Testbeds, Synthetic data systems, and Simulators already in existence through plug-ins and wrappers, thus preserving those legacy codes and systems intact. Because of the component-based architecture, all selected procedures can be registered or de-registered at run-time, and are completely reusable, reconfigurable, and modular.

  1. An Evaluation of Deficits in Semantic Cuing, Proactive and Retroactive Interference as Early Features of Alzheimer’s disease

    PubMed Central

    Crocco, Elizabeth; Curiel, Rosie E.; Acevedo, Amarilis; Czaja, Sara J.; Loewenstein, David A.

    2015-01-01

    OBJECTIVE To determine the degree to which susceptibility to different types of semantic interference may reflect the earliest manifestations of early Alzheimer disease (AD) beyond the effects of global memory impairment. METHODS Normal elderly (NE) subjects (n= 47), subjects with amnestic mild cognitive impairment (aMCI: n=34) and 40 subjects with probable AD were evaluated using a unique cued recall paradigm that allowed for an evaluation of both proactive and retroactive interference effects while controlling for global memory impairment (LASSI-L procedure). RESULTS Controlling for overall memory impairment, aMCI subjects had much greater proactive and retroactive interference effects than NE subjects. LASSI-L indices of learning using cued recall evidenced high levels of sensitivity and specificity with an overall correct classification rate of 90%. These provided better discrimination than traditional neuropsychological measures of memory function. CONCLUSION The LASSI-L paradigm is unique and unlike other assessments of memory in that items presented for cued recall are explicitly presented, and semantic interference and cuing effects can be assessed while controlling for initial level of memory impairment. This represents a powerful procedure allowing the participant to serve as his or her own control. The high levels of discrimination between subjects with aMCI and normal cognition that exceeded traditional neuropsychological measures makes the LASSI-L worthy of further research in the detection of early AD. PMID:23768680

  2. Detection and severity classification of extracardiac interference in {sup 82}Rb PET myocardial perfusion imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orton, Elizabeth J., E-mail: eorton@physics.carleton.ca; Kemp, Robert A. de; Glenn Wells, R.

    2014-10-15

Purpose: Myocardial perfusion imaging (MPI) is used for diagnosis and prognosis of coronary artery disease. When MPI studies are performed with positron emission tomography (PET) and the radioactive tracer rubidium-82 chloride ({sup 82}Rb), a small but non-negligible fraction of studies (∼10%) suffer from extracardiac interference: high levels of tracer uptake in structures adjacent to the heart which mask the true cardiac tracer uptake. At present, there are no clinically available options for automated detection or correction of this problem. This work presents an algorithm that detects and classifies the severity of extracardiac interference in {sup 82}Rb PET MPI images and reports the accuracy and failure rate of the method. Methods: A set of 200 {sup 82}Rb PET MPI images were reviewed by a trained nuclear cardiologist and interference severity reported on a four-class scale, from absent to severe. An automated algorithm was developed that compares uptake at the external border of the myocardium to three thresholds, separating the four interference severity classes. A minimum area of interference was required, and the search region was limited to that facing the stomach wall and spleen. Maximizing concordance (Cohen’s Kappa) and minimizing failure rate for the set of 200 clinician-read images were used to find the optimal population-based constants defining search limit and minimum area parameters and the thresholds for the algorithm. Tenfold stratified cross-validation was used to find optimal thresholds and report accuracy measures (sensitivity, specificity, and Kappa). Results: The algorithm was capable of detecting interference with a mean [95% confidence interval] sensitivity/specificity/Kappa of 0.97 [0.94, 1.00]/0.82 [0.66, 0.98]/0.79 [0.65, 0.92], and a failure rate of 1.0% ± 0.2%. The four-class overall Kappa was 0.72 [0.64, 0.81].
Separation of mild versus moderate-or-greater interference was performed with good accuracy (sensitivity/specificity/Kappa = 0.92 [0.86, 0.99]/0.86 [0.71, 1.00]/0.78 [0.64, 0.92]), while separation of moderate versus severe interference severity classes showed reduced sensitivity/Kappa but little change in specificity (sensitivity/specificity/Kappa = 0.83 [0.77, 0.88]/0.82 [0.77, 0.88]/0.65 [0.60, 0.70]). Specificity was greater than sensitivity for all interference classes. Algorithm execution time was <1 min. Conclusions: The algorithm produced here has a low failure rate and high accuracy for detection of extracardiac interference in {sup 82}Rb PET MPI scans. It provides a fast, reliable, automated method for assessing severity of extracardiac interference.
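The three-threshold, four-class logic described in the Methods can be written as a toy rule. The threshold values below are placeholders, not the paper's fitted population-based constants, and the minimum-area and search-region constraints are omitted.

```python
# Toy four-class severity rule: compare a border-uptake summary measure
# to three ordered thresholds. Threshold values are illustrative only.
def interference_class(border_uptake, thresholds=(0.2, 0.5, 0.8)):
    labels = ("absent", "mild", "moderate", "severe")
    for label, t in zip(labels, thresholds):
        if border_uptake < t:
            return label
    return labels[-1]

print(interference_class(0.1))    # absent
print(interference_class(0.65))   # moderate
print(interference_class(0.95))   # severe
```

In the paper the three thresholds were tuned by maximizing Cohen's Kappa against the clinician-read labels under cross-validation; here they are fixed by hand.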

  3. Iris double recognition based on modified evolutionary neural network

    NASA Astrophysics Data System (ADS)

    Liu, Shuai; Liu, Yuan-Ning; Zhu, Xiao-Dong; Huo, Guang; Liu, Wen-Tao; Feng, Jia-Kai

    2017-11-01

Aiming at multi-category iris recognition under illumination and noise interference, this paper proposes an iris double-recognition method based on a modified evolutionary neural network. Histogram equalization and a Laplacian-of-Gaussian operator are used to preprocess the iris to suppress illumination and noise interference, and a Haar wavelet converts the iris features into a binary feature encoding. The Hamming distance between the test iris and the template iris is calculated and compared with a classification threshold to determine the iris type. If the iris cannot be identified as a distinct type, a secondary recognition is needed. The connection weights of a back-propagation (BP) neural network are adaptively trained using the modified evolutionary neural network, which combines particle swarm optimization with a mutation operator and the BP neural network. Experimental results on different iris libraries under different circumstances show that, under illumination and noise interference, this algorithm achieves a higher correct recognition rate, a ROC curve closer to the coordinate axes, shorter training and recognition time, and better stability and robustness.
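The first-stage matching described above, comparing binary iris codes by Hamming distance against a threshold, can be sketched as below. The codes are random bit vectors and the threshold value is illustrative; a real system would use the Haar-wavelet encodings.

```python
# First-stage iris matching sketch: normalized Hamming distance between
# binary codes, accepted when below a decision threshold (value illustrative).
import numpy as np

def hamming(a, b):
    """Fraction of differing bits between two equal-length binary codes."""
    return np.count_nonzero(a != b) / a.size

rng = np.random.default_rng(4)
template = rng.integers(0, 2, 1024)
same = template.copy()
same[rng.choice(1024, 100, replace=False)] ^= 1   # ~10% bit noise, same iris
other = rng.integers(0, 2, 1024)                  # unrelated iris code

THRESHOLD = 0.32   # illustrative decision threshold
print(hamming(template, same) < THRESHOLD)    # True -> accepted as same iris
print(hamming(template, other) < THRESHOLD)   # near-chance codes sit near 0.5
```

When the distance falls near the threshold and no type can be decided, the paper's second stage (the evolutionary-trained BP network) takes over.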

  4. Typology of patients with fibromyalgia: cluster analysis of duloxetine study patients.

    PubMed

    Lipkovich, Ilya A; Choy, Ernest H; Van Wambeke, Peter; Deberdt, Walter; Sagman, Doron

    2014-12-23

    To identify distinct groups of patients with fibromyalgia (FM) with respect to multiple outcome measures. Data from 631 duloxetine-treated women in 4 randomized, placebo-controlled trials were included in a cluster analysis based on outcomes after up to 12 weeks of treatment. Corresponding classification rules were constructed using a classification tree method. Probabilities for transitioning from baseline to Week 12 category were estimated for placebo and duloxetine patients (Ntotal = 1188) using logistic regression. Five clusters were identified, from "worst" (high pain levels and severe mental/physical impairment) to "best" (low pain levels and nearly normal mental/physical function). For patients with moderate overall severity, mental and physical symptoms were less correlated, resulting in 2 distinct clusters based on these 2 symptom domains. Three key variables with threshold values were identified for classification of patients: Brief Pain Inventory (BPI) pain interference overall scores of <3.29 and <7.14, respectively, a Fibromyalgia Impact Questionnaire (FIQ) interference with work score of <2, and an FIQ depression score of ≥5. Patient characteristics and frequencies per baseline category were similar between treatments; >80% of patients were in the 3 worst categories. Duloxetine patients were significantly more likely to improve after 12 weeks than placebo patients. A sustained effect was seen with continued duloxetine treatment. FM patients are heterogeneous and can be classified into distinct subgroups by simple descriptive rules derived from only 3 variables, which may guide individual patient management. Duloxetine showed higher improvement rates than placebo and had a sustained effect beyond 12 weeks.
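The three key variables and cut-offs reported above (BPI pain interference < 3.29 and < 7.14, FIQ work interference < 2, FIQ depression ≥ 5) can be expressed as simple descriptive rules. The mapping from rule combinations to the five clusters is not given in the abstract, so this sketch only evaluates the published thresholds rather than assigning cluster labels.

```python
# Descriptive classification rules from the abstract's three key variables.
# Only the threshold comparisons are from the source; the rule names and
# any downstream cluster assignment would be hypothetical.
def fm_rules(bpi_interference, fiq_work, fiq_depression):
    return {
        "bpi_below_3_29": bpi_interference < 3.29,
        "bpi_below_7_14": bpi_interference < 7.14,
        "fiq_work_below_2": fiq_work < 2,
        "fiq_depression_at_least_5": fiq_depression >= 5,
    }

print(fm_rules(bpi_interference=2.5, fiq_work=1.0, fiq_depression=6.0))
```

A classification-tree fit, as used in the paper, would combine such threshold tests hierarchically to reproduce the five clusters.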

  5. Flexible and inflexible task sets: asymmetric interference when switching between emotional expression, sex, and age classification of perceived faces.

    PubMed

    Schuch, Stefanie; Werheid, Katja; Koch, Iring

    2012-01-01

    The present study investigated whether the processing characteristics of categorizing emotional facial expressions are different from those of categorizing facial age and sex information. Given that emotions change rapidly, it was hypothesized that processing facial expressions involves a more flexible task set that causes less between-task interference than the task sets involved in processing age or sex of a face. Participants switched between three tasks: categorizing a face as looking happy or angry (emotion task), young or old (age task), and male or female (sex task). Interference between tasks was measured by global interference and response interference. Both measures revealed patterns of asymmetric interference. Global between-task interference was reduced when a task was mixed with the emotion task. Response interference, as measured by congruency effects, was larger for the emotion task than for the nonemotional tasks. The results support the idea that processing emotional facial expression constitutes a more flexible task set that causes less interference (i.e., task-set "inertia") than processing the age or sex of a face.

  6. Laser agile illumination for object tracking and classification - Feasibility study

    NASA Technical Reports Server (NTRS)

    Scholl, Marija S.; Vanzyl, Jakob J.; Meinel, Aden B.; Meinel, Marjorie P.; Scholl, James W.

    1988-01-01

    The 'agile illumination' concept for discrimination between ICBM warheads and decoys involves a two-aperture illumination with coherent light, diffraction of light by propagation, and a resulting interference pattern on the object surface. A scanning two-beam interference pattern illuminates one object at a time; depending on the shape, momentum, spinning, and tumbling characteristics of the interrogated object, different temporal signals will be obtained for different classes of objects.

  7. Impact of IRT item misfit on score estimates and severity classifications: an examination of PROMIS depression and pain interference item banks.

    PubMed

    Zhao, Yue

    2017-03-01

    In patient-reported outcome research that utilizes item response theory (IRT), evaluations of IRT model-data fit usually focus on statistical significance tests for detecting misfit. However, such evaluations rarely address the impact of using misfitting items on the intended clinical applications. This study was designed to evaluate the impact of IRT item misfit on score estimates and severity classifications and to demonstrate a recommended process of model-fit evaluation. Using secondary data sources collected from the Patient-Reported Outcome Measurement Information System (PROMIS) wave 1 testing phase, analyses were conducted on the PROMIS depression (28 items; 782 cases) and pain interference (41 items; 845 cases) item banks. Misfitting items were identified using Orlando and Thissen's summed-score item-fit statistics and graphical displays. The impact of misfit was evaluated by the agreement of both IRT-derived T-scores and severity classifications between inclusion and exclusion of misfitting items. Item misfit had a negligible impact on T-score estimates and severity classifications in the general population sample for both the PROMIS depression and pain interference item banks. Findings support the T-score estimates in the two item banks as robust against item misfit at both the group and individual levels and add confidence to the use of T-scores for severity diagnosis in the studied sample. Recommendations are given on approaches for identifying item misfit (statistical significance) and assessing its impact (practical significance).

  8. A Fully Automated Trial Selection Method for Optimization of Motor Imagery Based Brain-Computer Interface.

    PubMed

    Zhou, Bangyan; Wu, Xiaopei; Lv, Zhao; Zhang, Lei; Guo, Xiaojin

    2016-01-01

    Independent component analysis (ICA), a promising spatial filtering method, can separate motor-related independent components (MRICs) from multichannel electroencephalogram (EEG) signals. However, unpredictable burst interferences may significantly degrade the performance of an ICA-based brain-computer interface (BCI) system. In this study, we propose a new algorithmic framework to address this issue by combining a single-trial-based ICA filter with a zero-training classifier. We developed a two-round data selection method to automatically identify badly corrupted EEG trials in the training set. The "high quality" training trials were utilized to optimize the ICA filter. In addition, we proposed an accuracy-matrix method to locate artifact data segments within a single trial and investigated which types of artifacts can influence the performance of ICA-based motor imagery BCIs (MIBCIs). Twenty-six EEG datasets of three-class motor imagery were used to validate the proposed methods, and the classification accuracies were compared with those obtained by the widely used common spatial pattern (CSP) spatial filtering algorithm. The experimental results demonstrated that the proposed optimizing strategy can effectively improve the stability, practicality, and classification performance of ICA-based MIBCIs. The study revealed that rational use of the ICA method may be crucial in building a practical ICA-based MIBCI system.
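A minimal, numpy-only sketch of the idea behind two-round trial selection: screen out trials whose amplitude statistics are outliers, then re-screen with the threshold recomputed on the survivors. The variance criterion and robust (median + MAD) threshold used here are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def select_clean_trials(trials, k=5.0, rounds=2):
    """Flag trials whose overall amplitude variance is an outlier.
    Each round recomputes the robust threshold on the trials that
    survived the previous round (a crude two-round screen)."""
    keep = np.arange(len(trials))
    for _ in range(rounds):
        variances = np.array([t.var() for t in trials[keep]])
        med = np.median(variances)
        mad = np.median(np.abs(variances - med)) + 1e-12
        keep = keep[variances <= med + k * mad]
    return keep

rng = np.random.default_rng(0)
trials = rng.standard_normal((20, 64, 256))       # 20 trials, 64 ch, 256 samples
trials[3] += 50 * rng.standard_normal((64, 256))  # one burst-corrupted trial
kept = select_clean_trials(trials)
print(kept)  # trial 3 is excluded
```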

  9. Characterizing the SEMG patterns with myofascial pain using a multi-scale wavelet model through machine learning approaches.

    PubMed

    Lin, Yu-Ching; Yu, Nan-Ying; Jiang, Ching-Fen; Chang, Shao-Hsia

    2018-06-02

    In this paper, we introduce a newly developed multi-scale wavelet model for the interpretation of surface electromyography (SEMG) signals and validate the model's capability to characterize changes in neuromuscular activation in cases of myofascial pain syndrome (MPS) via machine learning methods. SEMG data collected from normal (N = 30; 27 women, 3 men) and MPS subjects (N = 26; 22 women, 4 men) were adopted for this retrospective analysis. SEMG signals were measured from the taut-band loci on both sides of the trapezius muscle on the upper back while each subject performed a cyclic bilateral backward shoulder extension movement within 1 min. Classification accuracy of the SEMG model in differentiating MPS patients from normal subjects was 77% using template matching and 60% using K-means clustering. Classification consistency between the two machine learning methods was 87% in the normal group and 93% in the MPS group. The 2D feature graphs derived from the proposed multi-scale model revealed distinct patterns between normal subjects and MPS patients. The classification consistency using template matching and K-means clustering suggests the potential of the proposed model to characterize interference pattern changes induced by MPS. Copyright © 2018. Published by Elsevier Ltd.

  10. Localizing text in scene images by boundary clustering, stroke segmentation, and string fragment classification.

    PubMed

    Yi, Chucai; Tian, Yingli

    2012-09-01

    In this paper, we propose a novel framework to extract text regions from scene images with complex backgrounds and multiple text appearances. This framework consists of three main steps: boundary clustering (BC), stroke segmentation, and string fragment classification. In BC, we propose a new bigram-color-uniformity-based method to model both text and attachment surface, and cluster edge pixels based on color pairs and spatial positions into boundary layers. Then, stroke segmentation is performed at each boundary layer by color assignment to extract character candidates. We propose two algorithms to combine the structural analysis of text stroke with color assignment and filter out background interferences. Further, we design a robust string fragment classification based on Gabor-based text features. The features are obtained from feature maps of gradient, stroke distribution, and stroke width. The proposed framework of text localization is evaluated on scene images, born-digital images, broadcast video images, and images of handheld objects captured by blind persons. Experimental results on respective datasets demonstrate that the framework outperforms state-of-the-art localization algorithms.

  11. CellCognition: time-resolved phenotype annotation in high-throughput live cell imaging.

    PubMed

    Held, Michael; Schmitz, Michael H A; Fischer, Bernd; Walter, Thomas; Neumann, Beate; Olma, Michael H; Peter, Matthias; Ellenberg, Jan; Gerlich, Daniel W

    2010-09-01

    Fluorescence time-lapse imaging has become a powerful tool to investigate complex dynamic processes such as cell division or intracellular trafficking. Automated microscopes generate time-resolved imaging data at high throughput, yet tools for quantification of large-scale movie data are largely missing. Here we present CellCognition, a computational framework to annotate complex cellular dynamics. We developed a machine-learning method that combines state-of-the-art classification with hidden Markov modeling for annotation of the progression through morphologically distinct biological states. Incorporation of time information into the annotation scheme was essential to suppress classification noise at state transitions and confusion between different functional states with similar morphology. We demonstrate generic applicability in different assays and perturbation conditions, including a candidate-based RNA interference screen for regulators of mitotic exit in human cells. CellCognition is published as open source software, enabling live-cell imaging-based screening with assays that directly score cellular dynamics.
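The time-information step can be pictured as Viterbi decoding over per-frame classifier posteriors: sticky transition probabilities suppress spurious single-frame state flips. This is a generic HMM-smoothing sketch with assumed probabilities, not the published CellCognition implementation.

```python
import numpy as np

def viterbi_smooth(frame_probs, trans, init):
    """Most likely state sequence given per-frame class posteriors
    (T x S), a transition matrix (S x S), and initial state priors."""
    T, S = frame_probs.shape
    logp = np.log(frame_probs + 1e-12)
    logt = np.log(trans + 1e-12)
    delta = np.log(init + 1e-12) + logp[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logt        # (from-state, to-state)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logp[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):             # backtrack
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Two states with sticky transitions: a single noisy frame (t = 2) is
# overridden by the temporal model.
probs = np.array([[.9, .1], [.9, .1], [.4, .6], [.9, .1], [.9, .1]])
trans = np.array([[.95, .05], [.05, .95]])
print(viterbi_smooth(probs, trans, np.array([.5, .5])))  # → [0, 0, 0, 0, 0]
```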

  12. An Adaptive S-Method to Analyze Micro-Doppler Signals for Human Activity Classification

    PubMed Central

    Yang, Chao; Xia, Yuqing; Ma, Xiaolin; Zhang, Tao; Zhou, Zhou

    2017-01-01

    In this paper, we propose the multiwindow Adaptive S-method (AS-method) distribution approach used in the time-frequency analysis for radar signals. Based on the results of orthogonal Hermite functions that have good time-frequency resolution, we vary the length of window to suppress the oscillating component caused by cross-terms. This method can bring a better compromise in the auto-terms concentration and cross-terms suppressing, which contributes to the multi-component signal separation. Finally, the effective micro signal is extracted by threshold segmentation and envelope extraction. To verify the proposed method, six states of motion are separated by a classifier of a support vector machine (SVM) trained to the extracted features. The trained SVM can detect a human subject with an accuracy of 95.4% for two cases without interference. PMID:29186075

  13. An Adaptive S-Method to Analyze Micro-Doppler Signals for Human Activity Classification.

    PubMed

    Li, Fangmin; Yang, Chao; Xia, Yuqing; Ma, Xiaolin; Zhang, Tao; Zhou, Zhou

    2017-11-29

    In this paper, we propose the multiwindow Adaptive S-method (AS-method) distribution approach used in the time-frequency analysis for radar signals. Based on the results of orthogonal Hermite functions that have good time-frequency resolution, we vary the length of window to suppress the oscillating component caused by cross-terms. This method can bring a better compromise in the auto-terms concentration and cross-terms suppressing, which contributes to the multi-component signal separation. Finally, the effective micro signal is extracted by threshold segmentation and envelope extraction. To verify the proposed method, six states of motion are separated by a classifier of a support vector machine (SVM) trained to the extracted features. The trained SVM can detect a human subject with an accuracy of 95.4% for two cases without interference.
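The final classification step described above can be sketched with scikit-learn's SVM on synthetic feature vectors; the feature values and class layout here are toy assumptions standing in for the extracted micro-Doppler features.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Toy 3-dimensional feature vectors for three motion classes (the real
# features would come from threshold segmentation / envelope extraction).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=m, scale=0.3, size=(40, 3))
               for m in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], 40)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())  # well-separated synthetic classes classify near 1.0
```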

  14. Olfactory identification and Stroop interference converge in schizophrenia.

    PubMed Central

    Purdon, S E

    1998-01-01

    OBJECTIVE: To test the discriminant validity of a model predicting a dissociation between measures of right and left frontal lobe function in people with schizophrenia. PARTICIPANTS: Twenty-one clinically stable outpatients with schizophrenia. INTERVENTIONS: Patients were administered the University of Pennsylvania Smell Identification Test (UPSIT), the Stroop Color-Word Test (Stroop), and the Positive and Negative Syndrome Scale (PANSS). OUTCOME MEASURES: Scores on these tests and relations among scores. RESULTS: There was a convergence of UPSIT and Stroop interference scores consistent with a common cerebral basis for limitations in olfactory identification and inhibition of distraction. There was also a divergence of UPSIT and Stroop reading scores, suggesting that the olfactory identification limitation is distinct from a general limitation of attention or a dysfunction of the left dorsolateral prefrontal cortex. Most notable was the 81% classification convergence between the UPSIT and Stroop incongruous colour naming scores, compared with the near-random 57% classification convergence of the UPSIT and Stroop reading scores. CONCLUSIONS: These data are consistent with a right orbitofrontal dysfunction in a subgroup of patients with schizophrenia, although the involvement of mesial temporal structures in both tasks must be ruled out with further study. A multifactorial model depicting contributions from diverse cerebral structures is required to describe the pathophysiology of schizophrenia. Valid behavioural methods for classifying suspected subgroups of patients with particular cerebral dysfunction would be of value in the construction of this model. PMID:9595890

  15. Graphene Nanoplatelet-Polymer Chemiresistive Sensor Arrays for the Detection and Discrimination of Chemical Warfare Agent Simulants.

    PubMed

    Wiederoder, Michael S; Nallon, Eric C; Weiss, Matt; McGraw, Shannon K; Schnee, Vincent P; Bright, Collin J; Polcha, Michael P; Paffenroth, Randy; Uzarski, Joshua R

    2017-11-22

    A cross-reactive array of semiselective chemiresistive sensors made of polymer-graphene nanoplatelet (GNP) composite coated electrodes was examined for detection and discrimination of chemical warfare agents (CWA). The arrays employ a set of chemically diverse polymers to generate a unique response signature for multiple CWA simulants and background interferents. The developed sensors' signals remain consistent after repeated exposures to multiple analytes for up to 5 days, with similar signal magnitude across replicate sensors with the same polymer-GNP coating. An array of 12 sensors, each coated with a different polymer-GNP mixture, was exposed 100 times to a cycle of single-analyte vapors consisting of 5 chemically similar CWA simulants and 8 common background interferents. The collected data were vector normalized to reduce concentration dependency, z-scored to account for baseline drift and signal-to-noise ratio, and Kalman filtered to reduce noise. The processed data were dimensionally reduced with principal component analysis and analyzed with four different machine learning algorithms to evaluate discrimination capabilities. For the 5 similarly structured CWA simulants alone, 100% classification accuracy was achieved; for all analytes tested, 99% classification accuracy was achieved, demonstrating the CWA discrimination capabilities of the developed system. The novel sensor fabrication methods and data processing techniques are attractive for the development of sensor platforms for discrimination of CWA and other classes of chemical vapors.
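The described preprocessing chain (vector normalization, z-scoring, PCA) can be sketched in a few lines of numpy; the Kalman filtering of the raw time series is omitted here, and the response matrix is a synthetic stand-in for measured sensor data.

```python
import numpy as np

def preprocess(responses, n_components=2):
    """Sketch of the pipeline on a (samples x sensors) response matrix:
    vector-normalize each sample to reduce concentration dependence,
    z-score each sensor channel, then project onto the top principal
    components via SVD."""
    X = responses / np.linalg.norm(responses, axis=1, keepdims=True)
    X = (X - X.mean(axis=0)) / X.std(axis=0)   # z-score (also centers X)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T

rng = np.random.default_rng(0)
responses = np.abs(rng.standard_normal((100, 12)))  # toy 12-sensor array
features = preprocess(responses)
print(features.shape)  # → (100, 2)
```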

  16. Modulation Classification of Satellite Communication Signals Using Cumulants and Neural Networks

    NASA Technical Reports Server (NTRS)

    Smith, Aaron; Evans, Michael; Downey, Joseph

    2017-01-01

    The National Aeronautics and Space Administration (NASA) is evaluating cognitive technologies and increased system intelligence for its future communication architecture. These technologies are expected to reduce the operational complexity of the network, increase science data return, and reduce interference to self and others. In order to increase situational awareness, signal classification algorithms could be applied to identify users and distinguish sources of interference. A significant amount of previous work has been done in the area of automatic signal classification for military and commercial applications. As a preliminary step, we seek to develop a system with the ability to discern signals typically encountered in satellite communication. We propose an automatic modulation classifier which utilizes higher order statistics (cumulants) and an estimate of the signal-to-noise ratio. These features are extracted from baseband symbols and then processed by a neural network for classification. The modulation types considered are phase-shift keying (PSK), amplitude and phase-shift keying (APSK), and quadrature amplitude modulation (QAM). Physical layer properties specific to the Digital Video Broadcasting - Satellite - Second Generation (DVB-S2) standard, such as pilots and variable ring ratios, are also considered. This paper provides simulation results for a candidate modulation classifier, with performance evaluated over a range of signal-to-noise ratios, frequency offsets, and nonlinear amplifier distortions.
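Cumulant features of this kind can be computed directly from baseband symbols. The sketch below uses the standard fourth-order cumulant definitions (C40, C42) applied to QPSK symbols, for which theory gives |C40| = |C42| = 1 at unit power; the exact feature set fed to the paper's neural network may differ.

```python
import numpy as np

def cumulant_features(x):
    """|C40| and |C42| of zero-mean complex baseband symbols x,
    power-normalized; a standard feature pair for modulation
    classification."""
    p = np.mean(np.abs(x) ** 2)        # C21, the signal power
    x = x / np.sqrt(p)                 # normalize to unit power
    m20 = np.mean(x ** 2)
    m21 = np.mean(np.abs(x) ** 2)      # = 1 after normalization
    m40 = np.mean(x ** 4)
    m42 = np.mean(np.abs(x) ** 4)
    c40 = m40 - 3 * m20 ** 2
    c42 = m42 - np.abs(m20) ** 2 - 2 * m21 ** 2
    return np.abs(c40), np.abs(c42)

rng = np.random.default_rng(0)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 100000)))
print(cumulant_features(qpsk))  # both values ≈ 1 for QPSK
```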

  17. Pavement type and wear condition classification from tire cavity acoustic measurements with artificial neural networks.

    PubMed

    Masino, Johannes; Foitzik, Michael-Jan; Frey, Michael; Gauterin, Frank

    2017-06-01

    Tire road noise is the major contributor to traffic noise, which leads to general annoyance, speech interference, and sleep disturbances. Standardized methods to measure tire road noise are expensive, sophisticated to use, and cannot be applied comprehensively. This paper presents a method to automatically classify different types of pavement and their wear condition in order to identify noisy road surfaces. The method is based on spectra of time series data of the tire cavity sound, acquired under normal vehicle operation. The classifier, an artificial neural network, correctly predicts three pavement types; there are only a few bidirectional misclassifications between two pavements that have similar physical characteristics. The performance measures of the classifier in predicting a new or worn-out condition are over 94.6%. A digital map could be created from the output of the presented method; on the basis of such maps, road segments with a strong impact on tire road noise could be automatically identified. Furthermore, the method can estimate the road macro-texture, which affects tire road friction, especially in wet conditions. Overall, such a digital map would greatly benefit civil engineering departments, road infrastructure operators, and advanced driver assistance systems.

  18. Discriminability effect on Garner interference: evidence from recognition of facial identity and expression

    PubMed Central

    Wang, Yamin; Fu, Xiaolan; Johnston, Robert A.; Yan, Zheng

    2013-01-01

    Using Garner's speeded classification task, existing studies have demonstrated an asymmetric interference in the recognition of facial identity and facial expression: expression appears unable to interfere with identity recognition. However, discriminability of identity and expression, a potential confounding variable, had not been carefully examined in those studies. In the current work, we manipulated the discriminability of identity and expression by matching facial shape (long or round) for identity and matching mouth (opened or closed) for facial expression. Garner interference was found either from identity to expression (Experiment 1) or from expression to identity (Experiment 2). Interference was also found in both directions (Experiment 3) or in neither direction (Experiment 4). The results indicate that Garner interference tends to occur under low discriminability of the relevant dimension, regardless of facial property. Our findings suggest that Garner interference is not necessarily related to interdependent processing in the recognition of facial identity and expression, and that discriminability, as a mediating factor, should be carefully controlled in future research. PMID:24391609

  19. Classification and Prediction of RF Coupling inside A-320 and A-319 Airplanes using Feed Forward Neural Networks

    NASA Technical Reports Server (NTRS)

    Jafri, Madiha; Ely, Jay; Vahala, Linda

    2006-01-01

    Neural Network Modeling is introduced in this paper to classify and predict Interference Path Loss measurements on Airbus 319 and 320 airplanes. Interference patterns inside the aircraft are classified and predicted based on the locations of the doors, windows, aircraft structures and the communication/navigation system-of-concern. Modeled results are compared with measured data and a plan is proposed to enhance the modeling for better prediction of electromagnetic coupling problems inside aircraft.

  20. Resolving task rule incongruence during task switching by competitor rule suppression.

    PubMed

    Meiran, Nachshon; Hsieh, Shulan; Dimov, Eduard

    2010-07-01

    Task switching requires maintaining readiness to execute any task of a given set of tasks. However, when tasks switch, the readiness to execute the now-irrelevant task generates interference, as seen in the task rule incongruence effect. Overcoming such interference requires fine-tuned inhibition that impairs task readiness only minimally. In an experiment involving 2 object classification tasks and 2 location classification tasks, the authors show that irrelevant task rules that generate response conflicts are inhibited. This competitor rule suppression (CRS) is seen in response slowing in subsequent trials, when the competing rules become relevant. CRS is shown to operate on specific rules without affecting similar rules. CRS and backward inhibition, which is another inhibitory phenomenon, produced additive effects on reaction time, suggesting their mutual independence. Implications for current formal theories of task switching as well as for conflict monitoring theories are discussed. (c) 2010 APA, all rights reserved

  1. The highs and lows of object impossibility: effects of spatial frequency on holistic processing of impossible objects.

    PubMed

    Freud, Erez; Avidan, Galia; Ganel, Tzvi

    2015-02-01

    Holistic processing, the decoding of a stimulus as a unified whole, is a basic characteristic of object perception. Recent research using Garner's speeded classification task has shown that this processing style is utilized even for impossible objects that contain an inherent spatial ambiguity. In particular, similar Garner interference effects were found for possible and impossible objects, indicating similar holistic processing styles for the two object categories. In the present study, we further investigated the perceptual mechanisms that mediate such holistic representation of impossible objects. We relied on the notion that, whereas information embedded in the high-spatial-frequency (HSF) content supports fine-detailed processing of object features, the information conveyed by low spatial frequencies (LSF) is more crucial for the emergence of a holistic shape representation. To test the effects of image frequency on the holistic processing of impossible objects, participants performed the Garner speeded classification task on images of possible and impossible cubes filtered for their LSF and HSF information. For images containing only LSF, similar interference effects were observed for possible and impossible objects, indicating that the two object categories were processed in a holistic manner. In contrast, for the HSF images, Garner interference was obtained only for possible, but not for impossible objects. Importantly, we provided evidence to show that this effect could not be attributed to a lack of sensitivity to object possibility in the LSF images. Particularly, even for full-spectrum images, Garner interference was still observed for both possible and impossible objects. Additionally, performance in an object classification task revealed high sensitivity to object possibility, even for LSF images. Taken together, these findings suggest that the visual system can tolerate the spatial ambiguity typical to impossible objects by relying on information embedded in LSF, whereas HSF information may underlie the visual system's susceptibility to distortions in objects' spatial layouts.
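The LSF/HSF manipulation can be sketched as a frequency-domain split of an image. The ideal circular filter and cutoff below are illustrative assumptions (such studies typically use calibrated cycles-per-face cutoffs rather than an ideal mask).

```python
import numpy as np

def split_spatial_frequencies(img, cutoff=0.1):
    """Split an image into low- and high-spatial-frequency content with
    an ideal frequency-domain low-pass filter; `cutoff` is a fraction of
    the Nyquist frequency. By construction lsf + hsf reconstructs img."""
    F = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    low_mask = np.sqrt(fx ** 2 + fy ** 2) <= cutoff * 0.5
    lsf = np.fft.ifft2(F * low_mask).real
    hsf = img - lsf
    return lsf, hsf

img = np.add.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))  # toy image
lsf, hsf = split_spatial_frequencies(img)
print(np.allclose(lsf + hsf, img))  # → True
```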

  2. Object-oriented and pixel-based classification approach for land cover using airborne long-wave infrared hyperspectral data

    NASA Astrophysics Data System (ADS)

    Marwaha, Richa; Kumar, Anil; Kumar, Arumugam Senthil

    2015-01-01

    Our primary objective was to explore a classification algorithm for thermal hyperspectral data. Minimum noise fraction is applied to the thermal hyperspectral data, and eight pixel-based classifiers are tested: constrained energy minimization, matched filter, spectral angle mapper (SAM), adaptive coherence estimator, orthogonal subspace projection, mixture-tuned matched filter, target-constrained interference-minimized filter, and mixture-tuned target-constrained interference-minimized filter. The long-wave infrared (LWIR) has not yet been exploited for classification purposes; LWIR data contain emissivity and temperature information about an object. The highest overall accuracy, 90.99%, was obtained using the SAM algorithm for the combination of thermal data with a colored digital photograph. Similarly, an object-oriented approach is applied to the thermal data: the image is segmented into meaningful objects based on properties such as geometry and length, with pixels grouped into objects using a watershed algorithm, and a supervised classification algorithm, a support vector machine (SVM), is then applied. The best algorithm in the pixel-based category is the SAM technique. SVM is useful for thermal data, providing an accuracy of 80.00% at a scale value of 83 and a merge value of 90, whereas for the combination of thermal data with a colored digital photograph, SVM gives the highest accuracy of 85.71% at a scale value of 82 and a merge value of 90.
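The SAM classifier behind the best pixel-based result reduces to a one-line angle computation between a pixel spectrum and a reference spectrum; a pixel is assigned to the class whose reference gives the smallest angle. A minimal sketch:

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral angle mapper: the angle (radians) between a pixel
    spectrum and a reference spectrum; smaller means a better match.
    The angle is invariant to overall illumination scaling."""
    cos = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

ref = np.array([0.2, 0.5, 0.9, 0.4])        # toy 4-band reference spectrum
print(spectral_angle(2.5 * ref, ref))       # scaling-invariant → 0.0
print(spectral_angle(ref[::-1], ref))       # dissimilar spectrum → angle > 0
```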

  3. Real-time Java simulations of multiple interference dielectric filters

    NASA Astrophysics Data System (ADS)

    Kireev, Alexandre N.; Martin, Olivier J. F.

    2008-12-01

    An interactive Java applet for real-time simulation and visualization of the transmittance properties of multiple interference dielectric filters is presented. The most commonly used interference filters as well as state-of-the-art ones are embedded in this platform-independent applet, which can serve research and education purposes. The Transmittance applet can be freely downloaded from the site http://cpc.cs.qub.ac.uk.
    Program summary
    Program title: Transmittance
    Catalogue identifier: AEBQ_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBQ_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 5778
    No. of bytes in distributed program, including test data, etc.: 90 474
    Distribution format: tar.gz
    Programming language: Java
    Computer: Developed on PC-Pentium platform
    Operating system: Any Java-enabled OS. Applet was tested on Windows ME, XP, Sun Solaris, Mac OS
    RAM: Variable
    Classification: 18
    Nature of problem: Sophisticated wavelength-selective multiple interference filters can include some tens or even hundreds of dielectric layers. The spectral response of such a stack is not obvious. On the other hand, there is a strong demand from application designers and students to get a quick insight into the properties of a given filter.
    Solution method: A Java applet was developed for the computation and the visualization of the transmittance of multilayer interference filters. It is simple to use, and the embedded filter library can serve educational purposes. Also, its ability to handle complex structures will be appreciated as a useful research and development tool.
    Running time: Real-time simulations
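The physics such an applet implements is the standard characteristic-matrix (transfer-matrix) method for a dielectric stack. Below is a generic normal-incidence sketch in Python (not the applet's own Java code), checked against the classic quarter-wave MgF2 antireflection coating on glass.

```python
import numpy as np

def transmittance(n_layers, d_layers, wavelength, n_in=1.0, n_out=1.52):
    """Normal-incidence transmittance of a dielectric stack via the
    characteristic-matrix method. n_layers/d_layers: refractive indices
    and thicknesses (same length units as wavelength)."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        phi = 2 * np.pi * n * d / wavelength       # layer phase thickness
        M = M @ np.array([[np.cos(phi), 1j * np.sin(phi) / n],
                          [1j * n * np.sin(phi), np.cos(phi)]])
    t = 2 * n_in / (n_in * M[0, 0] + n_in * n_out * M[0, 1]
                    + M[1, 0] + n_out * M[1, 1])
    return (n_out / n_in) * abs(t) ** 2

# Quarter-wave MgF2 layer (n = 1.38) on glass: a classic antireflection
# coating, so transmittance at the design wavelength exceeds bare glass.
bare = transmittance([], [], 550.0)
coated = transmittance([1.38], [550.0 / (4 * 1.38)], 550.0)
print(bare, coated)  # ≈ 0.957 (bare glass) vs ≈ 0.988 (AR-coated)
```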

  4. Performance Analysis of Classification Methods for Indoor Localization in Vlc Networks

    NASA Astrophysics Data System (ADS)

    Sánchez-Rodríguez, D.; Alonso-González, I.; Sánchez-Medina, J.; Ley-Bosch, C.; Díaz-Vilariño, L.

    2017-09-01

    Indoor localization has gained considerable attention over the past decade because of the emergence of numerous location-aware services. Research works have proposed solving this problem using wireless networks. Nevertheless, there is still much room for improvement in the quality of the proposed classification models. In recent years, the emergence of Visible Light Communication (VLC) has brought a brand new approach to high-quality indoor positioning. Among its advantages, this new technology is immune to electromagnetic interference and has a smaller variance of received signal power compared to RF-based technologies. In this paper, a performance analysis of seventeen machine learning classifiers for indoor localization in VLC networks is carried out. The analysis is accomplished in terms of accuracy, average distance error, computational cost, training size, precision, and recall. Results show that most of the classifiers achieve an accuracy above 90%. The best tested classifier yielded 99.0% accuracy, with an average error distance of 0.3 centimetres.
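The classifier comparison can be sketched with scikit-learn on a toy received-signal-strength (RSS) fingerprint dataset, where each reference position is one class. The luminaire layout, path-loss model, and small classifier subset below are illustrative assumptions, not the paper's measured data or full set of seventeen classifiers.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

# 3 x 3 grid of reference positions in a 5 m x 5 m room with four
# hypothetical ceiling luminaires; RSS follows a toy path-loss model.
rng = np.random.default_rng(0)
positions = np.array([[x, y] for x in (1.0, 2.5, 4.0) for y in (1.0, 2.5, 4.0)])
lamps = np.array([[1, 1], [1, 4], [4, 1], [4, 4]], dtype=float)
X, y = [], []
for label, p in enumerate(positions):
    d = np.linalg.norm(lamps - p, axis=1)
    rss = 1.0 / (1.0 + d ** 2)                      # toy path-loss model
    X.append(rss + rng.normal(0, 0.005, (30, 4)))   # 30 noisy samples/point
    y.append(np.full(30, label))
X, y = np.vstack(X), np.concatenate(y)

results = {}
for clf in (KNeighborsClassifier(), DecisionTreeClassifier(), GaussianNB()):
    results[type(clf).__name__] = cross_val_score(clf, X, y, cv=5).mean()
print(results)  # well-separated fingerprints: all accuracies near 1.0
```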

  5. Localizing semantic interference from distractor sounds in picture naming: A dual-task study.

    PubMed

    Mädebach, Andreas; Kieseler, Marie-Luise; Jescheniak, Jörg D

    2017-10-13

    In this study we explored the locus of semantic interference in a novel picture-sound interference task in which participants name pictures while ignoring environmental distractor sounds. In a previous study using this task (Mädebach, Wöhner, Kieseler, & Jescheniak, in Journal of Experimental Psychology: Human Perception and Performance, 43, 1629-1646, 2017), we showed that semantically related distractor sounds (e.g., a dog barking) interfere with a picture-naming response (e.g., "horse") more strongly than unrelated distractor sounds do (e.g., a drum beating). In the experiment reported here, we employed the psychological refractory period (PRP) approach to explore the locus of this effect. We combined a geometric form classification task (square vs. circle; Task 1) with the picture-sound interference task (Task 2). The stimulus onset asynchrony (SOA) between the tasks was systematically varied (0 vs. 500 ms). There were three central findings. First, the semantic interference effect from distractor sounds was replicated. Second, picture naming (in Task 2) was slower with the short than with the long task SOA. Third, the two effects were additive; that is, the semantic interference effect was of similar magnitude at both task SOAs. This suggests that the interference arises during response selection or later stages, not during early perceptual processing. This finding corroborates the theory that semantic interference from distractor sounds reflects a competitive selection mechanism in word production.

  6. tf_unet: Generic convolutional neural network U-Net implementation in Tensorflow

    NASA Astrophysics Data System (ADS)

    Akeret, Joel; Chang, Chihway; Lucchi, Aurelien; Refregier, Alexandre

    2016-11-01

    tf_unet mitigates radio frequency interference (RFI) signals in radio data using a special type of convolutional neural network, the U-Net, which enables the classification of clean signal and RFI signatures in 2D time-ordered data acquired from a radio telescope. The code is not tied to a specific segmentation task and can be used, for example, to detect RFI in radio astronomy or galaxies and stars in wide-field imaging data. This U-Net implementation can outperform classical RFI mitigation algorithms.

  7. Active Learning Strategies for Phenotypic Profiling of High-Content Screens.

    PubMed

    Smith, Kevin; Horvath, Peter

    2014-06-01

    High-content screening is a powerful method to discover new drugs and carry out basic biological research. Increasingly, high-content screens have come to rely on supervised machine learning (SML) to perform automatic phenotypic classification as an essential step of the analysis. However, this comes at a cost, namely, the labeled examples required to train the predictive model. Classification performance increases with the number of labeled examples, and because labeling examples demands time from an expert, the training process represents a significant time investment. Active learning strategies attempt to overcome this bottleneck by presenting the most relevant examples to the annotator, thereby achieving high accuracy while minimizing the cost of obtaining labeled data. In this article, we investigate the impact of active learning on single-cell-based phenotype recognition, using data from three large-scale RNA interference high-content screens representing diverse phenotypic profiling problems. We consider several combinations of active learning strategies and popular SML methods. Our results show that active learning significantly reduces the time cost and can be used to reveal the same phenotypic targets identified using SML. We also identify combinations of active learning strategies and SML methods which perform better than others on the phenotypic profiling problems we studied. © 2014 Society for Laboratory Automation and Screening.
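The pool-based uncertainty-sampling loop described above can be sketched with a toy nearest-centroid learner. The data, the margin criterion, and the learner are illustrative stand-ins; the screens in the paper used richer SML models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two well-separated 2D "phenotype" classes; a small labeled seed + a pool
X0 = rng.normal([-2, 0], 0.5, size=(50, 2))
X1 = rng.normal([+2, 0], 0.5, size=(50, 2))
pool_X = np.vstack([X0, X1])
pool_y = np.array([0] * 50 + [1] * 50)

labeled = [0, 50]                       # indices of the initial labeled seed
unlabeled = [i for i in range(100) if i not in labeled]

def centroids(idx):
    L = np.array(idx)
    return np.array([pool_X[L[pool_y[L] == c]].mean(axis=0) for c in (0, 1)])

# Uncertainty sampling: query the pool point whose distances to the two
# class centroids are most similar (smallest margin), then "annotate" it.
for _ in range(5):
    c = centroids(labeled)
    d = np.linalg.norm(pool_X[unlabeled, None, :] - c[None, :, :], axis=2)
    margin = np.abs(d[:, 0] - d[:, 1])
    q = unlabeled[int(np.argmin(margin))]   # most ambiguous point
    labeled.append(q)                       # annotator supplies pool_y[q]
    unlabeled.remove(q)
```

Each iteration spends the annotator's time on the most informative example, which is the bottleneck-reduction idea the article evaluates at scale.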

  8. Effects of divided attention and speeded responding on implicit and explicit retrieval of artificial grammar knowledge.

    PubMed

    Helman, Shaun; Berry, Dianne C

    2003-07-01

    The artificial grammar (AG) learning literature (see, e.g., Mathews et al., 1989; Reber, 1967) has relied heavily on a single measure of implicitly acquired knowledge. Recent work comparing this measure (string classification) with a more indirect measure in which participants make liking ratings of novel stimuli (e.g., Manza & Bornstein, 1995; Newell & Bright, 2001) has shown that string classification (which we argue can be thought of as an explicit, rather than an implicit, measure of memory) gives rise to more explicit knowledge of the grammatical structure in learning strings and is more resilient to changes in surface features and processing between encoding and retrieval. We report data from two experiments that extend these findings. In Experiment 1, we showed that a divided attention manipulation (at retrieval) interfered with explicit retrieval of AG knowledge but did not interfere with implicit retrieval. In Experiment 2, we showed that forcing participants to respond within a very tight deadline resulted in the same asymmetric interference pattern between the tasks. In both experiments, we also showed that the type of information being retrieved influenced whether interference was observed. The results are discussed in terms of the relatively automatic nature of implicit retrieval and also with respect to the differences between analytic and nonanalytic processing (Whittlesea & Price, 2001).

  9. Investigation of Latent Traces Using Infrared Reflectance Hyperspectral Imaging

    NASA Astrophysics Data System (ADS)

    Schubert, Till; Wenzel, Susanne; Roscher, Ribana; Stachniss, Cyrill

    2016-06-01

    The detection of traces is a main task of forensics. Hyperspectral imaging is a promising method, from which we expect to capture more fluorescence effects than with common forensic light sources. This paper shows that hyperspectral imaging is suited for the analysis of latent traces and extends the classical concept to the conservation of the crime scene for retrospective laboratory analysis. We examine specimens of blood, semen and saliva traces in several dilution steps, prepared on cardboard substrate. As our key result, we successfully make latent traces visible up to a dilution factor of 1:8000. We can attribute most of the detectability to interference of electromagnetic light with the water content of the traces in the shortwave infrared region of the spectrum. In a classification task we use several dimensionality reduction methods (PCA and LDA) in combination with a maximum likelihood classifier, assuming normally distributed data. Further, we use Random Forest as a competitive approach. The classifiers retrieve the exact positions of labelled trace preparations up to the highest dilution and determine posterior probabilities. By modelling the classification task with a Markov Random Field we are able to integrate prior information about the spatial relation of neighbouring pixel labels.
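The PCA-plus-Gaussian-maximum-likelihood pipeline described above can be sketched on synthetic two-class spectra. Band count, class statistics, and the number of retained components are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "hyperspectral" pixels: 20 bands, two classes (substrate vs. trace)
A = rng.normal(0.0, 1.0, size=(200, 20))
B = rng.normal(0.5, 1.0, size=(200, 20))
X = np.vstack([A, B])
y = np.array([0] * 200 + [1] * 200)

# PCA: project onto the top principal components of the centered data
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T                      # keep 3 components

# Maximum-likelihood classifier under a Gaussian model per class
def fit(Z, y):
    params = []
    for c in (0, 1):
        Zc = Z[y == c]
        params.append((Zc.mean(axis=0), np.cov(Zc.T)))
    return params

def predict(Z, params):
    scores = []
    for mu, S in params:
        Si = np.linalg.inv(S)
        d = Z - mu
        # Gaussian log-likelihood up to an additive constant
        scores.append(-0.5 * np.einsum('ij,jk,ik->i', d, Si, d)
                      - 0.5 * np.log(np.linalg.det(S)))
    return np.argmax(np.column_stack(scores), axis=1)

pred = predict(Z, fit(Z, y))
```

The paper additionally regularizes these per-pixel posteriors with a Markov Random Field over neighbouring labels, which this sketch omits.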

  10. Mapping Winter Wheat with Multi-Temporal SAR and Optical Images in an Urban Agricultural Region

    PubMed Central

    Zhou, Tao; Pan, Jianjun; Zhang, Peiyu; Wei, Shanbao; Han, Tao

    2017-01-01

    Winter wheat is the second largest food crop in China. It is important to obtain reliable winter wheat acreage to guarantee the food security for the most populous country in the world. This paper focuses on assessing the feasibility of in-season winter wheat mapping and investigating potential classification improvement by using SAR (Synthetic Aperture Radar) images, optical images, and the integration of both types of data in urban agricultural regions with complex planting structures in Southern China. Both SAR (Sentinel-1A) and optical (Landsat-8) data were acquired, and classification using different combinations of Sentinel-1A-derived information and optical images was performed using a support vector machine (SVM) and a random forest (RF) method. The interferometric coherence and texture images were obtained and used to assess the effect of adding them to the backscatter intensity images on the classification accuracy. The results showed that the use of four Sentinel-1A images acquired before the jointing period of winter wheat can provide satisfactory winter wheat classification accuracy, with an F1 measure of 87.89%. The combination of SAR and optical images for winter wheat mapping achieved the best F1 measure, up to 98.06%. The SVM was superior to RF in terms of the overall accuracy and the kappa coefficient, and was faster than RF, while the RF classifier was slightly better than SVM in terms of the F1 measure. In addition, the classification accuracy can be effectively improved by adding the texture and coherence images to the backscatter intensity data. PMID:28587066
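The F1 measure used to score the maps combines precision and recall; a minimal sketch (the counts below are illustrative, not the study's confusion matrix):

```python
def f1_measure(tp, fp, fn):
    """F1 = harmonic mean of precision and recall,
    computed from true-positive, false-positive, false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# 90 wheat pixels found correctly, 10 false alarms, 10 missed:
score = f1_measure(90, 10, 10)   # precision 0.9, recall 0.9 -> F1 0.9
```

Because F1 ignores true negatives, it can rank classifiers differently from overall accuracy and kappa, which is consistent with the SVM/RF split reported above.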

  11. Neural correlates of the number–size interference task in children

    PubMed Central

    Kaufmann, Liane; Koppelstaetter, Florian; Siedentopf, Christian; Haala, Ilka; Haberlandt, Edda; Zimmerhackl, Lothar-Bernd; Felber, Stefan; Ischebeck, Anja

    2010-01-01

    In this functional magnetic resonance imaging study, 17 children were asked to make numerical and physical magnitude classifications while ignoring the other stimulus dimension (number–size interference task). Digit pairs were either incongruent or neutral with respect to the irrelevant dimension. Generally, numerical magnitude interferes with font size (congruity effect). Moreover, numerically far digit pairs yield quicker responses than adjacent ones (distance effect). Behaviourally, robust distance and congruity effects were observed in both tasks. Imaging baseline contrasts revealed activations in frontal, parietal, occipital and cerebellar areas bilaterally. Different from results usually reported for adults, smaller distances activated frontal, but not (intra-)parietal, areas in children. Congruity effects became significant only in physical comparisons. Thus, even with comparable behavioural performance, cerebral activation patterns may differ substantially between children and adults. PMID:16603917

  12. [Advance in interferogram data processing technique].

    PubMed

    Jing, Juan-Juan; Xiangli, Bin; Lü, Qun-Bo; Huang, Min; Zhou, Jin-Song

    2011-04-01

    Fourier transform spectrometry is a novel information-acquisition technology that integrates imaging and spectroscopy, but the data the instrument acquires are the interference data of the target, an intermediate product that cannot be used directly, so data processing must be applied for the successful application of the interferometric data. In the present paper, data processing techniques are divided into two classes: general-purpose and special-type. First, advances in universal interferometric data processing techniques are introduced; then the special-type interferometric data extraction methods and processing techniques are illustrated according to the classification of Fourier transform spectroscopy. Finally, trends in interferogram data processing techniques are discussed.
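The core general-purpose step, recovering a spectrum from an interferogram by Fourier transform, can be sketched as follows. The toy source is monochromatic, and practical steps (apodization, phase correction) are omitted.

```python
import numpy as np

# Simulated interferogram of a monochromatic source, I(x) = 1 + cos(2*pi*nu*x/N),
# sampled at N optical-path-difference steps.
N = 256
nu = 10                          # wavenumber, in index units (cycles per record)
x = np.arange(N)
interferogram = 1.0 + np.cos(2 * np.pi * nu * x / N)

# Spectrum recovery: remove the DC term, then take the Fourier transform.
spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
peak = int(np.argmax(spectrum))   # recovered spectral line position
```

The recovered spectrum has a single line at bin `nu`; real interferometric data additionally need the special-type corrections the review surveys.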

  13. Comparison between FCSRT and LASSI-L to Detect Early Stage Alzheimer's Disease.

    PubMed

    Matias-Guiu, Jordi A; Cabrera-Martín, María Nieves; Curiel, Rosie E; Valles-Salgado, María; Rognoni, Teresa; Moreno-Ramos, Teresa; Carreras, José Luis; Loewenstein, David A; Matías-Guiu, Jorge

    2018-01-01

    The Free and Cued Selective Reminding Test (FCSRT) is the most accurate test for the diagnosis of prodromal Alzheimer's disease (AD). Recently, a novel cognitive test, the Loewenstein-Acevedo Scale for Semantic Interference and Learning (LASSI-L), has been developed in order to provide an early diagnosis. To compare the diagnostic accuracy of the FCSRT and the LASSI-L for the diagnosis of AD in its preclinical and prodromal stages using 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) as a reference. Fifty patients consulting for subjective memory complaints without functional impairment and at risk for AD were enrolled and evaluated using FCSRT, LASSI-L, and FDG-PET. Participants were evaluated using a comprehensive neurological and neuropsychological protocol and were assessed with the FCSRT and LASSI-L. FDG-PET was acquired concomitantly and used for classification of patients as AD or non-AD according to brain metabolism using both visual and semi-quantitative methods. LASSI-L scores allowed a better classification of patients as AD/non-AD in comparison to FCSRT. Logistic regression analysis showed delayed recall and failure to recovery from proactive semantic interference from LASSI-L as independent statistically significant predictors, obtaining an area under the curve of 0.894. This area under the curve provided a better discrimination than the best FCSRT score (total delayed recall, area under the curve 0.708, p = 0.029). The LASSI-L, a cognitive stress test, was superior to FCSRT in the prediction of AD features on FDG-PET. This emphasizes the possibility to advance toward an earlier diagnosis of AD from a clinical perspective.
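The areas under the curve compared above (0.894 vs. 0.708) can be computed from test scores as a rank statistic; a minimal sketch (the scores below are illustrative, not study data):

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve as a rank statistic: the probability that a
    randomly chosen positive case scores higher than a randomly chosen
    negative case (ties count one half)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Perfectly separating scores give AUC 1.0; uninformative scores give 0.5
perfect = auc([3, 4, 5], [0, 1, 2])
chance = auc([1, 2], [1, 2])
```

Comparing two correlated AUCs on the same patients, as the study does, further requires a paired test (e.g., DeLong's method), which this sketch does not cover.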

  14. [Research on Spectral Polarization Imaging System Based on Static Modulation].

    PubMed

    Zhao, Hai-bo; Li, Huan; Lin, Xu-ling; Wang, Zheng

    2015-04-01

    The main disadvantages of traditional spectral polarization imaging systems are a complex structure, moving parts, and low throughput. A novel spectral polarization imaging method is discussed, based on static polarization intensity modulation combined with Savart polariscope interference imaging. The imaging system can obtain real-time spectral information and all four Stokes polarization parameters. Compared with conventional methods, the advantages of the imaging system are compactness, low mass, no moving parts, no electrical control, no slit, and high throughput. The system structure and the basic theory are introduced. The experimental system was established in the laboratory; it consists of reimaging optics, a polarization intensity module, an interference imaging module, and a CCD data collection and processing module. The spectral range is visible and near-infrared (480-950 nm). A white board and a plane toy were imaged using the experimental system, verifying its ability to obtain spectral polarization imaging information. The calibration system for the static polarization modulation was set up; the statistical error of polarization degree detection is less than 5%. The validity and feasibility of the basic principle are proved by the experimental results. The spectral polarization data captured by the system can be applied to object identification, object classification and remote sensing detection.
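The four Stokes parameters the system recovers can be expressed through intensity measurements behind ideal analyzers; a minimal sketch using the textbook six-measurement scheme (this is not the system's static modulation scheme, only the definition of the quantities it estimates):

```python
import numpy as np

# Stokes parameters from six intensities behind ideal analyzers:
# horizontal/vertical linear, +45/-45 linear, right/left circular.
def stokes(I_h, I_v, I_p45, I_m45, I_rc, I_lc):
    S0 = I_h + I_v         # total intensity
    S1 = I_h - I_v         # horizontal vs. vertical preference
    S2 = I_p45 - I_m45     # +45 vs. -45 preference
    S3 = I_rc - I_lc       # right vs. left circular preference
    return np.array([S0, S1, S2, S3])

def degree_of_polarization(S):
    return np.sqrt(S[1] ** 2 + S[2] ** 2 + S[3] ** 2) / S[0]

# Horizontally polarized light of unit intensity:
S = stokes(1.0, 0.0, 0.5, 0.5, 0.5, 0.5)
```

The quoted sub-5% statistical error on polarization degree refers to the quantity computed by `degree_of_polarization` here.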

  15. Psycho-Social Aspects of Educating Epileptic Children: Roles for School Psychologists.

    ERIC Educational Resources Information Center

    Frank, Brenda B.

    1985-01-01

    Epileptic children may have physical and emotional needs which can interfere with learning and socialization. Current prevalence estimates, definitions, and classifications of epilepsy are surveyed. Factors affecting the epileptic child's school performance and specific learning problems are addressed. Specific roles are presented for school…

  16. Study on Interference Suppression Algorithms for Electronic Noses: A Review

    PubMed Central

    Liang, Zhifang; Zhang, Ci; Sun, Hao; Liu, Tao

    2018-01-01

    Electronic noses (e-noses) are composed of an appropriate pattern recognition system and a gas sensor array with a certain degree of specificity and broad-spectrum characteristics. The gas sensors, however, have the shortcoming of being highly sensitive to interferences, which has an impact on the detection of target gases. When interferences are present, the performance of the e-nose deteriorates. Therefore, it is urgent to study interference suppression techniques for e-noses. This paper summarizes the sources of interference and reviews the advances made in recent years in interference suppression for e-noses. According to the factors which cause it, interference can be classified into two types: interference caused by changes of operating conditions and interference caused by hardware failures. The existing suppression methods are summarized and analyzed from these two aspects. Since the interferences affecting e-noses are uncertain and unstable, it can be found that some nonlinear methods have good effects for interference suppression, such as methods based on transfer learning, adaptive methods, etc. PMID:29649152

  17. Study on the Classification of GAOFEN-3 Polarimetric SAR Images Using Deep Neural Network

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Zhang, J.; Zhao, Z.

    2018-04-01

    Polarimetric Synthetic Aperture Radar (POLSAR) imaging is inherently affected by speckle noise, which reduces the recognition accuracy of traditional image classification methods. Deep convolutional neural networks have transformed traditional image processing and brought the field of computer vision to a new stage, owing to their strong ability to learn deep features and to fit large datasets. Based on the basic characteristics of polarimetric SAR images, this paper studies surface-cover classification using deep learning. We fused fully polarimetric SAR features at different scales into RGB images, iteratively trained a GoogLeNet model based on a convolutional neural network, and then used the trained model for classification on a validation dataset. First, referring to optical imagery, we labeled the surface-coverage types in the 8 m resolution GF-3 POLSAR image and collected samples by category. To meet the GoogLeNet requirement of 256 × 256 pixel image inputs, and taking into account the limited SAR resolution, the original image was resampled during preprocessing. POLSAR image slice samples at sampling intervals of 2 m and 1 m were trained separately and validated on the verification dataset. The training accuracy of the GoogLeNet model was 94.89% with the 2 m resampled polarimetric SAR imagery, and 92.65% with the 1 m resampled imagery.
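The 256 × 256 patch preparation described above can be sketched as a simple tiling step; the image size, stride, and function name are hypothetical.

```python
import numpy as np

# Slice a large POLSAR RGB composite into 256x256 training patches,
# the input size the GoogLeNet model expects.
def slice_patches(image, size=256, stride=256):
    H, W, C = image.shape
    patches = []
    for r in range(0, H - size + 1, stride):
        for c in range(0, W - size + 1, stride):
            patches.append(image[r:r + size, c:c + size, :])
    return np.stack(patches)

# A hypothetical 1024x768 RGB composite yields a 4x3 grid of patches
image = np.zeros((1024, 768, 3), dtype=np.float32)
patches = slice_patches(image)
```

In the paper this step follows resampling to 2 m or 1 m intervals; overlapping strides (stride < size) are a common way to enlarge the training set.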

  18. Differentiation and classification of bacteria using vancomycin functionalized silver nanorods array based surface-enhanced raman spectroscopy an chemometric analysis

    USDA-ARS?s Scientific Manuscript database

    The intrinsic surface-enhanced Raman scattering (SERS) was used for differentiating and classifying bacterial species with chemometric data analysis. Such differentiation has often been conducted with an insufficient sample population and strong interference from the food matrices. To address these ...

  19. Singularities of interference of three waves with different polarization states.

    PubMed

    Kurzynowski, Piotr; Woźniak, Władysław A; Zdunek, Marzena; Borwińska, Monika

    2012-11-19

    We present an interference setup that produces interesting two-dimensional patterns in the polarization state of the resulting light wave emerging from the setup. The main element of our setup is the Wollaston prism, which gives two plane, linearly polarized waves (eigenwaves of both Wollaston wedges) with a linearly varying phase difference between them (along the x-axis). The third wave, coming from the second arm of the proposed polarization interferometer, is linearly or circularly polarized, with a linearly varying phase difference along the y-axis. The interference of three plane waves with different polarization states (LLL: linear-linear-linear, or LLC: linear-linear-circular) and variable phase difference produces two-dimensional light polarization and phase distributions with characteristic points and lines that can be claimed to constitute singularities of different types. The aim of this article is to find all kinds of these phase and polarization singularities and to classify them. We postulated in our theoretical simulations, and verified in our experiments, different kinds of polarization singularities, depending on which polarization parameter was considered (the azimuth and ellipticity angles or the diagonal and phase angles). We also observed phase singularities as well as isolated zero-intensity points which resulted from the polarization singularities when a proper analyzer was used at the end of the setup. The classification of all these singularities and their relationships are analyzed and described.

  20. Facial expression recognition based on weber local descriptor and sparse representation

    NASA Astrophysics Data System (ADS)

    Ouyang, Yan

    2018-03-01

    Automatic facial expression recognition has been one of the research hotspots in computer vision for nearly ten years. During that decade, many state-of-the-art methods have been proposed that achieve very high accuracy on face images without any interference. Nowadays, many researchers have begun to tackle the task of classifying facial expression images with corruptions and occlusions, and the sparse representation based classification (SRC) framework has been widely used because it is robust to corruptions and occlusions. This paper therefore proposes a novel facial expression recognition method based on the Weber local descriptor (WLD) and sparse representation. The method includes three parts: first, the face images are divided into many local patches; then the WLD histogram of each patch is extracted; finally, all the WLD histogram features are concatenated into a vector and combined with SRC to classify the facial expressions. The experimental results on the Cohn-Kanade database show that the proposed method is robust to occlusions and corruptions.
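The WLD feature extraction step can be sketched as the standard differential-excitation computation over a 3 × 3 neighborhood, followed by a histogram per patch. The parameter `alpha`, the bin count, and the tiny image are illustrative assumptions; the paper's full descriptor also includes an orientation component omitted here.

```python
import numpy as np

# Weber local descriptor (WLD) differential excitation for each interior
# pixel: xi = arctan(alpha * sum(neighbors - center) / center).
def wld_excitation(img, alpha=3.0, eps=1e-6):
    img = img.astype(float)
    c = img[1:-1, 1:-1]                      # interior (center) pixels
    s = np.zeros_like(c)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            s += img[1 + dr:img.shape[0] - 1 + dr,
                     1 + dc:img.shape[1] - 1 + dc] - c
    return np.arctan(alpha * s / (c + eps))

# Histogram of excitations serves as the patch feature vector
img = np.arange(25, dtype=float).reshape(5, 5)
xi = wld_excitation(img)
hist, _ = np.histogram(xi, bins=8, range=(-np.pi / 2, np.pi / 2))
```

Concatenating such histograms across patches yields the feature vector that SRC then codes against the training dictionary.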

  1. A Dimensionally Aligned Signal Projection for Classification of Unintended Radiated Emissions

    DOE PAGES

    Vann, Jason Michael; Karnowski, Thomas P.; Kerekes, Ryan; ...

    2017-04-24

    Characterization of unintended radiated emissions (URE) from electronic devices plays an important role in many research areas, from electromagnetic interference to nonintrusive load monitoring to information system security. URE can provide insights for applications ranging from load disaggregation and energy efficiency to condition-based maintenance of equipment based upon detected fault conditions. URE characterization often requires subject matter expertise to tailor transforms and feature extractors for the specific electrical devices of interest. We present a novel approach, named dimensionally aligned signal projection (DASP), for projecting aligned signal characteristics that are inherent to the physical implementation of many commercial electronic devices. These projections minimize the need for an intimate understanding of the underlying physical circuitry and significantly reduce the number of features required for signal classification. We present three possible DASP algorithms that leverage frequency harmonics, modulation alignments, and frequency peak spacings, along with a two-dimensional image manipulation method for statistical feature extraction. To demonstrate the ability of DASP to generate relevant features from URE, we measured the conducted URE from 14 residential electronic devices using a 2 MS/s collection system. A linear discriminant analysis classifier was trained using DASP-generated features and blind tested, resulting in greater than 90% classification accuracy for each of the DASP algorithms and an accuracy of 99.1% when DASP features are used in combination. Furthermore, we show that a rank-reduced feature set of the combined DASP algorithms provides a 98.9% classification accuracy with only three features and outperforms a set of spectral features in terms of general classification as well as applicability across a broad number of devices.
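The harmonic-alignment idea behind one of the DASP algorithms can be illustrated by folding a magnitude spectrum at an assumed fundamental so that harmonics stack in the same column. This is a toy sketch of the alignment concept, not the published algorithm; the fundamental is assumed known here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy spectrum: low-level noise plus harmonics of a 60-bin fundamental
spec = rng.random(600) * 0.1
for h in range(1, 10):
    spec[60 * h] = 1.0

# Harmonic alignment: reshape the spectrum into rows of one fundamental
# period so harmonics stack in one column, then summarize each column
# (here, by its mean) to get a compact projection.
def fold(spec, fundamental):
    n = (len(spec) // fundamental) * fundamental
    rows = spec[:n].reshape(-1, fundamental)
    return rows.mean(axis=0)

profile = fold(spec, 60)
```

In DASP the folded rows form a 2D image from which statistical features are extracted; the point of the alignment is that harmonic energy concentrates in a few columns regardless of the device's exact spectral detail.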

  3. Classification of normal and malignant human gastric mucosa tissue with confocal Raman microspectroscopy and wavelet analysis

    NASA Astrophysics Data System (ADS)

    Hu, Yaogai; Shen, Aiguo; Jiang, Tao; Ai, Yong; Hu, Jiming

    2008-02-01

    Thirty-two samples of human gastric mucosa tissue, including 13 normal and 19 malignant tissue samples, were measured by confocal Raman microspectroscopy. Low signal-to-background ratio spectra from human gastric mucosa tissues were obtained by this technique without any sample preparation. Raman spectral interferences include a broad, featureless sloping background due to fluorescence, and noise. They mask most Raman spectral features and lead to problems with precision and quantitation of the original spectral information. A preprocessing algorithm based on wavelet analysis was used to reduce noise and eliminate the background/baseline of the Raman spectra. Comparing preprocessed spectra of malignant gastric mucosa tissues with those of normal counterparts, there were obvious spectral changes, including an intensity increase at ~1156 cm-1 and an intensity decrease at ~1587 cm-1. A quantitative criterion based upon the intensity ratio of the ~1156 and ~1587 cm-1 bands was extracted for classification of the normal and malignant gastric mucosa tissue samples. This could result in a new diagnostic method, which would assist the early diagnosis of gastric cancer.
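The reported intensity-ratio criterion can be sketched as a simple band-ratio classifier. The synthetic spectra, band half-width, and decision threshold below are illustrative, not the paper's calibrated values.

```python
import numpy as np

# Classification criterion: the ratio of Raman intensities near the
# 1156 and 1587 cm-1 bands; malignant tissue shows a higher ratio.
def band_intensity(wn, spectrum, center, half_width=10):
    sel = np.abs(wn - center) <= half_width
    return spectrum[sel].max()

def classify(wn, spectrum, threshold=1.0):
    ratio = (band_intensity(wn, spectrum, 1156)
             / band_intensity(wn, spectrum, 1587))
    return 'malignant' if ratio > threshold else 'normal'

wn = np.arange(800, 1800, dtype=float)
# Synthetic spectra: normal tissue weaker at 1156 and stronger at 1587;
# malignant tissue shows the reverse pattern, as reported.
normal = np.exp(-((wn - 1156) / 8) ** 2) + 2 * np.exp(-((wn - 1587) / 8) ** 2)
malignant = 2 * np.exp(-((wn - 1156) / 8) ** 2) + np.exp(-((wn - 1587) / 8) ** 2)
```

In practice this ratio would be computed only after the wavelet denoising and baseline removal the paper describes, since the fluorescence background otherwise biases both band intensities.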

  4. REM Sleep Enhancement of Probabilistic Classification Learning is Sensitive to Subsequent Interference

    PubMed Central

    Barsky, Murray M.; Tucker, Matthew A.; Stickgold, Robert

    2015-01-01

    During wakefulness the brain creates meaningful relationships between disparate stimuli in ways that escape conscious awareness. Processes active during sleep can strengthen these relationships, leading to more adaptive use of those stimuli when encountered during subsequent wake. Performance on the weather prediction task (WPT), a well-studied measure of implicit probabilistic learning, has been shown to improve significantly following a night of sleep, with stronger initial learning predicting more nocturnal REM sleep. We investigated this relationship further, studying the effect on WPT performance of a daytime nap containing REM sleep. We also added an interference condition after the nap/wake period as an additional probe of memory strength. Our results show that a nap significantly boosts WPT performance, and that this improvement is correlated with the amount of REM sleep obtained during the nap. When interference training is introduced following the nap, however, this REM-sleep benefit vanishes. In contrast, following an equal period of wake, performance is both unchanged from training and unaffected by interference training. Thus, while the true probabilistic relationships between WPT stimuli are strengthened by sleep, these changes are selectively susceptible to the destructive effects of retroactive interference, at least in the short term. PMID:25769506

  5. [Assessment of lipid layer thickness of tear film in the diagnosis of dry-eye syndrome in children after the hematopoietic stem cell transplantation].

    PubMed

    Kurpińska, Małgorzata; Gorczyńska, Ewa; Owoc-Lempach, Joanna; Bernacka, Aleksandra; Misiuk-Hojło, Marta; Chybicka, Alicja

    2011-01-01

    Dry eye syndrome (DES), also known as keratoconjunctivitis sicca (KCS), is recognized as the most frequent ocular complication after allogeneic stem cell transplantation (allo-SCT). KCS can appear either due to insufficient tear production or excessive tear evaporation, both resulting in tear hyperosmolarity that leads to ocular damage. The evaporation rate and tear film stability are determined primarily by the status of the lipid layer. Aims: observation and classification of tear film lipid layer interference patterns in normal and dry eyes in patients after allogeneic stem cell transplantation, with a follow-up time of 6 months to 5 years (median 26.54 months); investigation of the relation between the lipid layer interference patterns and the results of other dry eye examinations and complaints; and assessment of the relation between DES and conditioning regimens (including total body irradiation and high-dose chemotherapy), immunosuppressive drugs, the time after allogeneic stem cell transplantation, and chronic graft-versus-host disease. Precorneal tear lipid layer interference patterns were examined in 114 eyes in the treatment group with the Tearscope-plus. Patients with dry eye were identified on the basis of Schirmer test scores and/or tear breakup time, and positive lissamine and/or fluorescein staining. 42 of 114 eyes (36.8%) developed DES after allo-SCT. A significant correlation between thickness of the lipid layer and BUT, Schirmer test, lissamine green and fluorescein staining was found in the treatment group. A significant association was found between present chronic GVHD and DES in children. DES was not associated with TBI, corticosteroids, immunosuppressive drugs, or time after transplantation in the present study. Tear lipid layer interference patterns are highly correlated with the diagnosis of DES and, as a noninvasive method, can be used to diagnose early DES in children after allo-SCT. Chronic GVHD plays a major role in the development of DES. Keywords: dry eye syndrome, graft-versus-host disease, stem cell transplantation.

  6. Space and Missile Systems Center Standard: Systems Engineering Requirements and Products

    DTIC Science & Technology

    2013-07-01

    Fragmentary front-matter and table-of-contents text. Recoverable items include: unique hazard classification and explosive ordnance disposal requirements; operational and maintenance facilities and equipment requirements; the statement that this standard defines the Government's requirements; and Section 4.3.14, Electromagnetic Interference.

  7. Quantitative phase imaging of arthropods

    PubMed Central

    Sridharan, Shamira; Katz, Aron; Soto-Adames, Felipe; Popescu, Gabriel

    2015-01-01

    Abstract. Classification of arthropods is performed by characterization of fine features such as setae and cuticles. An unstained whole arthropod specimen mounted on a slide can be preserved for many decades, but is difficult to study since current methods require sample manipulation or tedious image processing. Spatial light interference microscopy (SLIM) is a quantitative phase imaging (QPI) technique that is an add-on module to a commercial phase contrast microscope. We use SLIM to image a whole organism springtail Ceratophysella denticulata mounted on a slide. This is the first time, to our knowledge, that an entire organism has been imaged using QPI. We also demonstrate the ability of SLIM to image fine structures in addition to providing quantitative data that cannot be obtained by traditional bright field microscopy. PMID:26334858

  8. FastICA peel-off for ECG interference removal from surface EMG.

    PubMed

    Chen, Maoqi; Zhang, Xu; Chen, Xiang; Zhu, Mingxing; Li, Guanglin; Zhou, Ping

    2016-06-13

    Multi-channel recording of surface electromyographic (EMG) signals is very likely to be contaminated by electrocardiographic (ECG) interference, specifically when the surface electrode is placed on muscles close to the heart. A novel fast independent component analysis (FastICA) based peel-off method is presented to remove ECG interference contaminating multi-channel surface EMG signals. Although demonstrating spatial variability in waveform shape, the ECG interference in different channels shares the same firing instants. Utilizing the firing information estimated by FastICA, ECG interference can be separated from surface EMG by a "peel-off" processing step. The performance of the method was quantified with synthetic signals created by combining a series of experimentally recorded "clean" surface EMG and "pure" ECG interference. It was demonstrated that the new method can remove ECG interference efficiently with little distortion to surface EMG amplitude and frequency. The proposed method was also validated using experimental surface EMG signals contaminated by ECG interference. The proposed FastICA peel-off method can be used as a new and practical solution for eliminating ECG interference from multichannel EMG recordings.
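A simplified template-subtraction stand-in for the peel-off step can be sketched as follows, assuming the beat instants are already known (in the method itself they are estimated with FastICA). Signal shapes, sample rate, and window sizes are illustrative.

```python
import numpy as np

fs = 1000                                    # assumed sample rate (Hz)
t = np.arange(0, 5, 1 / fs)

rng = np.random.default_rng(4)
emg = rng.normal(0, 0.1, t.size)             # stand-in for clean surface EMG

# Synthetic ECG interference: an identical spike shape at known beat instants
beat = np.exp(-0.5 * (np.arange(-100, 100) / 15.0) ** 2)
ecg = np.zeros_like(t)
peaks = np.arange(500, t.size - 200, 1000)   # known firing instants
for p in peaks:
    ecg[p - 100:p + 100] += beat

x = emg + ecg                                # contaminated recording

# Average a template around each beat and "peel it off" from the recording
seg = np.array([x[p - 100:p + 100] for p in peaks])
template = seg.mean(axis=0)
clean = x.copy()
for p in peaks:
    clean[p - 100:p + 100] -= template
```

Because the ECG waveform repeats at the firing instants while the EMG does not, the averaged template converges to the ECG shape and the subtraction leaves the EMG largely intact, which is the intuition the peel-off method formalizes.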

  9. Utility of correlation techniques in gravity and magnetic interpretation

    NASA Technical Reports Server (NTRS)

    Chandler, V. W.; Koski, J. S.; Braice, L. W.; Hinze, W. J.

    1977-01-01

    Internal correspondence uses Poisson's theorem in a moving-window linear regression analysis between the anomalous first vertical derivative of gravity and the total magnetic field reduced to the pole. The regression parameters provide critical information on source characteristics: the correlation coefficient indicates the strength of the relation between magnetics and gravity, the slope gives Δj/Δσ (magnetization-to-density contrast) estimates for the anomalous source, and the intercept furnishes information on anomaly interference. Cluster analysis classifies subsets of data into groups of similar anomalies based on correlations among selected anomaly characteristics. Model studies are used to illustrate implementation and interpretation procedures for these methods, particularly internal correspondence. Analysis of the results of applying these methods to data from the midcontinent and a transcontinental profile shows that they can be useful in identifying crustal provinces, providing information on horizontal and vertical variations of physical properties over province-size zones, validating long-wavelength anomalies, and isolating geomagnetic field removal problems.
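    A minimal sketch of the moving-window regression underlying internal correspondence; the synthetic profiles below stand in for real gravity-derivative and reduced-to-pole magnetic data:

```python
import numpy as np

def internal_correspondence(grav_dz, mag_rtp, win):
    """Moving-window linear regression of the RTP magnetic field on the
    first vertical derivative of gravity. Returns, for each window, the
    correlation coefficient, slope, and intercept."""
    out = []
    for i in range(len(grav_dz) - win + 1):
        x = grav_dz[i:i + win]
        y = mag_rtp[i:i + win]
        slope, intercept = np.polyfit(x, y, 1)
        r = np.corrcoef(x, y)[0, 1]
        out.append((r, slope, intercept))
    return out

x = np.linspace(0, 10, 200)
grav = np.sin(x)                 # synthetic gravity-derivative profile
mag = 2.5 * grav + 0.3           # perfectly correlated synthetic magnetics
results = internal_correspondence(grav, mag, 50)
```

    A high correlation coefficient with a stable slope suggests a common source; intercept drift flags anomaly interference.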

  10. Holistic processing of impossible objects: evidence from Garner's speeded-classification task.

    PubMed

    Freud, Erez; Avidan, Galia; Ganel, Tzvi

    2013-12-18

    Holistic processing, the decoding of the global structure of a stimulus while the local parts are not explicitly represented, is a basic characteristic of object perception. The current study aimed to test whether such a representation can be created even for objects that violate fundamental principles of spatial organization, namely impossible objects. Previous studies argued that these objects cannot be represented holistically in long-term memory because they lack a coherent 3D structure. Here, we utilized Garner's speeded classification task to test whether the perception of possible and impossible objects is mediated by similar holistic processing mechanisms. To this end, participants were asked to make speeded classifications of one object dimension while an irrelevant dimension was kept constant (baseline condition) or varied (filtering condition). It is well accepted that ignoring the irrelevant dimension is impossible when holistic perception is mandatory; thus the extent of Garner interference in performance between the baseline and filtering conditions serves as an index of holistic processing. Critically, in Experiment 1, similar levels of Garner interference were found for possible and impossible objects, implying holistic perception of both object types. Experiment 2 extended these results and demonstrated that even when depth information was explicitly processed, participants were still unable to process one dimension (width/depth) while ignoring the irrelevant dimension (depth/width, respectively). The results of Experiment 3 replicated the basic pattern found in Experiments 1 and 2 using a novel set of object exemplars. In Experiment 4, we used possible and impossible versions of the Penrose triangles in which information about impossibility is embedded in the internal elements of the objects, which participants were explicitly asked to judge.
As in Experiments 1-3, similar Garner interference was found for possible and impossible objects. Taken together, these findings emphasize the centrality of holistic processing style in object perception and suggest that it applies even for atypical stimuli such as impossible objects. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. Event Recognition for Contactless Activity Monitoring Using Phase-Modulated Continuous Wave Radar.

    PubMed

    Forouzanfar, Mohamad; Mabrouk, Mohamed; Rajan, Sreeraman; Bolic, Miodrag; Dajani, Hilmi R; Groza, Voicu Z

    2017-02-01

    The use of remote sensing technologies such as radar is gaining popularity as a technique for contactless detection of physiological signals and analysis of human motion. This paper presents a methodology for classifying different events in a collection of phase-modulated continuous wave radar returns. The primary application of interest is monitoring inmates, where the presence of human vital signs amidst different interferences needs to be identified. A comprehensive set of features is derived through time and frequency domain analyses of the radar returns. The Bhattacharyya distance is used to preselect the features with highest class separability as candidate features for use in the classification process. Uncorrelated linear discriminant analysis is performed to decorrelate, denoise, and reduce the dimension of the candidate feature set. Linear and quadratic Bayesian classifiers are designed to distinguish breathing, different human motions, and nonhuman motions. The performance of these classifiers is evaluated on a pilot dataset of radar returns containing different events, including breathing, stopped breathing, simple human motions, and movement of a fan and of water. Our proposed pattern classification system achieved accuracies of up to 93% in stationary subject detection, 90% in stopped-breathing detection, and 86% in interference detection, and was able to accurately distinguish the predefined events amidst interferences. Besides inmate monitoring and suicide attempt detection, this work can be extended to other radar applications such as home-based monitoring of elderly people, apnea detection, and home occupancy detection.
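    Bhattacharyya-distance preselection can be sketched with the closed-form distance between two univariate Gaussians; the feature names and per-class statistics below are hypothetical, not those of the paper:

```python
import math

def bhattacharyya_gauss(m1, v1, m2, v2):
    """Bhattacharyya distance between N(m1, v1) and N(m2, v2);
    larger values indicate better class separability."""
    return (0.25 * (m1 - m2) ** 2 / (v1 + v2)
            + 0.5 * math.log((v1 + v2) / (2.0 * math.sqrt(v1 * v2))))

# Hypothetical (mean, variance) per class for each candidate feature:
features = {
    "breathing_band_power": ((5.0, 1.0), (1.0, 1.0)),  # well separated
    "spectral_flatness":    ((0.5, 0.2), (0.4, 0.2)),  # poorly separated
}
# Rank features by separability; keep the top ones as candidates.
ranked = sorted(features,
                key=lambda f: bhattacharyya_gauss(*features[f][0],
                                                  *features[f][1]),
                reverse=True)
```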

  12. Optimization of a chemical identification algorithm

    NASA Astrophysics Data System (ADS)

    Chyba, Thomas H.; Fisk, Brian; Gunning, Christin; Farley, Kevin; Polizzi, Amber; Baughman, David; Simpson, Steven; Slamani, Mohamed-Adel; Almassy, Robert; Da Re, Ryan; Li, Eunice; MacDonald, Steve; Slamani, Ahmed; Mitchell, Scott A.; Pendell-Jones, Jay; Reed, Timothy L.; Emge, Darren

    2010-04-01

    A procedure to evaluate and optimize the performance of a chemical identification algorithm is presented. The Joint Contaminated Surface Detector (JCSD) employs Raman spectroscopy to detect and identify surface chemical contamination. JCSD measurements of chemical warfare agents, simulants, toxic industrial chemicals, interferents and bare surface backgrounds were made in the laboratory and under realistic field conditions. A test data suite, developed from these measurements, is used to benchmark algorithm performance throughout the improvement process. In any one measurement, one of many possible targets can be present along with interferents and surfaces. The detection results are expressed as a 2-category classification problem so that Receiver Operating Characteristic (ROC) techniques can be applied. The limitations of applying this framework to chemical detection problems are discussed along with means to mitigate them. Algorithmic performance is optimized globally using robust Design of Experiments and Taguchi techniques. These methods require figures of merit to trade off between false alarms and detection probability. Several figures of merit, including the Matthews Correlation Coefficient and the Taguchi Signal-to-Noise Ratio are compared. Following the optimization of global parameters which govern the algorithm behavior across all target chemicals, ROC techniques are employed to optimize chemical-specific parameters to further improve performance.
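    Of the candidate figures of merit, the Matthews Correlation Coefficient has a simple closed form over the 2-category confusion matrix; the counts below are hypothetical, not the JCSD results:

```python
import math

def mcc(tp, fp, fn, tn):
    """Matthews Correlation Coefficient from a 2x2 confusion matrix;
    ranges from -1 (total disagreement) to +1 (perfect detection)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

# A detector with 90 hits, 5 false alarms, 10 misses, 95 correct rejections:
score = mcc(tp=90, fp=5, fn=10, tn=95)
```

    Unlike raw accuracy, the MCC stays informative when targets are rare relative to interferent and background measurements, which is why it is attractive as an optimization figure of merit here.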

  13. Incongruent Abstract Stimulus-Response Bindings Result in Response Interference: fMRI and EEG Evidence from Visual Object Classification Priming

    ERIC Educational Resources Information Center

    Horner, Aidan J.; Henson, Richard N.

    2012-01-01

    Stimulus repetition often leads to facilitated processing, resulting in neural decreases (repetition suppression) and faster RTs (repetition priming). Such repetition-related effects have been attributed to the facilitation of repeated cognitive processes and/or the retrieval of previously encoded stimulus-response (S-R) bindings. Although…

  14. Immune Interference After Sequential Alphavirus Vaccine Vaccinations

    DTIC Science & Technology

    2009-01-01

    ...western equine encephalitis (EEE and WEE) vaccines before live attenuated Venezuelan (VEE) vaccine had significantly lower rates of antibody response than... Keywords: Venezuelan equine encephalitis virus, VEE, vaccines, alphavirus, antibody responses, human studies.

  15. An optimized ERP brain-computer interface based on facial expression changes.

    PubMed

    Jin, Jing; Daly, Ian; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej

    2014-06-01

    Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, lead to false positives. This phenomenon is commonly seen in visual attention-based brain-computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of stimuli (such as flashes or image presentations) adjacent to the target. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern which could evoke larger ERPs than the face pattern, but to design one which could reduce adjacent interference, annoyance and fatigue while evoking ERPs as good as those observed with the face pattern. Positive facial expressions could be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is large enough to evoke strong ERPs. In this paper, a facial expression change pattern alternating between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions: a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no-face pattern. Comparisons were made in terms of classification accuracy and information transfer rate as well as user-supplied subjective measures.
The results showed that interferences from adjacent stimuli, annoyance and the fatigue experienced by the subjects could be reduced significantly (p < 0.05) by using the facial expression change patterns in comparison with the face pattern. The offline results show that the classification accuracy of the facial expression change pattern was significantly better than that of the shuffled pattern (p < 0.05) and the face pattern (p < 0.05). The facial expression change pattern presented in this paper reduced interference from adjacent stimuli and decreased the fatigue and annoyance experienced by BCI users significantly (p < 0.05) compared to the face pattern.

  16. An optimized ERP brain-computer interface based on facial expression changes

    NASA Astrophysics Data System (ADS)

    Jin, Jing; Daly, Ian; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej

    2014-06-01

    Objective. Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, lead to false positives. This phenomenon is commonly seen in visual attention-based brain-computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of stimuli (such as flashes or image presentations) adjacent to the target. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern which could evoke larger ERPs than the face pattern, but to design one which could reduce adjacent interference, annoyance and fatigue while evoking ERPs as good as those observed with the face pattern. Approach. Positive facial expressions could be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is large enough to evoke strong ERPs. In this paper, a facial expression change pattern alternating between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions: a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no-face pattern. Comparisons were made in terms of classification accuracy and information transfer rate as well as user-supplied subjective measures. Main results. 
The results showed that interferences from adjacent stimuli, annoyance and the fatigue experienced by the subjects could be reduced significantly (p < 0.05) by using the facial expression change patterns in comparison with the face pattern. The offline results show that the classification accuracy of the facial expression change pattern was significantly better than that of the shuffled pattern (p < 0.05) and the face pattern (p < 0.05). Significance. The facial expression change pattern presented in this paper reduced interference from adjacent stimuli and decreased the fatigue and annoyance experienced by BCI users significantly (p < 0.05) compared to the face pattern.

  17. Artifacts on electroencephalograms may influence the amplitude-integrated EEG classification: a qualitative analysis in neonatal encephalopathy.

    PubMed

    Hagmann, Cornelia Franziska; Robertson, Nicola Jayne; Azzopardi, Denis

    2006-12-01

    This case report and descriptive study demonstrate that artifacts are common during long-term recording of amplitude-integrated electroencephalograms and may lead to erroneous classification of the amplitude-integrated electroencephalogram trace. Artifacts occurred in 12% of 200 hours of recording time sampled from a representative sample of 20 infants with neonatal encephalopathy. Artifacts derived from electrical or movement interference occurred with similar frequency; both types influenced the voltage and width of the amplitude-integrated electroencephalogram band. This is especially important if the amplitude-integrated electroencephalogram is used as a selection tool for neuroprotection intervention studies.

  18. Sequence-sensitive exemplar and decision-bound accounts of speeded-classification performance in a modified Garner-tasks paradigm.

    PubMed

    Little, Daniel R; Wang, Tony; Nosofsky, Robert M

    2016-09-01

    Among the most fundamental results in the area of perceptual classification are the "correlated facilitation" and "filtering interference" effects observed in Garner's (1974) speeded categorization tasks: In the case of integral-dimension stimuli, relative to a control task, single-dimension classification is faster when there is correlated variation along a second dimension, but slower when there is orthogonal variation that cannot be filtered out (e.g., by attention). These fundamental effects may result from participants' use of a trial-by-trial bypass strategy in the control and correlated tasks: The observer changes the previous category response whenever the stimulus changes, and maintains responses if the stimulus repeats. Here we conduct modified versions of the Garner tasks that eliminate the availability of a pure bypass strategy. The fundamental facilitation and interference effects remain, but are still largely explainable in terms of pronounced sequential effects in all tasks. We develop sequence-sensitive versions of exemplar-retrieval and decision-bound models aimed at capturing the detailed, trial-by-trial response-time distribution data. The models combine assumptions involving: (i) strengthened perceptual/memory representations of stimuli that repeat across consecutive trials, and (ii) a bias to change category responses on trials in which the stimulus changes. These models can predict our observed effects and provide a more complete account of the underlying bases of performance in our modified Garner tasks. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Tactile and bone-conduction auditory brain computer interface for vision and hearing impaired users.

    PubMed

    Rutkowski, Tomasz M; Mori, Hiromu

    2015-04-15

    The paper presents a report on a recently developed BCI alternative for users suffering from impaired vision (lack of focus or eye-movements) or from the so-called "ear-blocking syndrome" (limited hearing). We report on our recent studies of the extent to which vibrotactile stimuli delivered to the head of a user can serve as a platform for a brain-computer interface (BCI) paradigm. In the proposed paradigm, multiple novel head positions are used to evoke combined somatosensory and auditory (via the bone-conduction effect) P300 brain responses, in order to define a multimodal tactile and bone-conduction auditory brain-computer interface (tbcaBCI). To further remove EEG interferences and to improve P300 response classification, the synchrosqueezing transform (SST) is applied. SST outperforms the classical time-frequency analysis methods for non-linear and non-stationary signals such as EEG, and is also computationally more effective than empirical mode decomposition. The SST filtering allows for online EEG preprocessing, which is essential in the case of BCI. Experimental results with healthy BCI-naive users performing online tbcaBCI validate the paradigm, while the feasibility of the concept is illuminated through information-transfer-rate case studies. We present a comparison of the proposed SST-based preprocessing method, combined with a logistic regression (LR) classifier, against classical preprocessing and LDA-based classification BCI techniques. The proposed tbcaBCI paradigm, together with the data-driven preprocessing methods, is a step forward in robust BCI applications research. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Description and evaluation of an interference assessment for a slotted-wall wind tunnel

    NASA Technical Reports Server (NTRS)

    Kemp, William B., Jr.

    1991-01-01

    A wind-tunnel interference assessment method applicable to test sections with discrete finite-length wall slots is described. The method is based on high order panel method technology and uses mixed boundary conditions to satisfy both the tunnel geometry and wall pressure distributions measured in the slotted-wall region. Both the test model and its sting support system are represented by distributed singularities. The method yields interference corrections to the model test data as well as surveys through the interference field at arbitrary locations. These results include the equivalent of tunnel Mach calibration, longitudinal pressure gradient, tunnel flow angularity, wall interference, and an inviscid form of sting interference. Alternative results which omit the direct contribution of the sting are also produced. The method was applied to the National Transonic Facility at NASA Langley Research Center for both tunnel calibration tests and tests of two models of subsonic transport configurations.

  1. Detection and Monitoring of Oil Spills Using Moderate/High-Resolution Remote Sensing Images.

    PubMed

    Li, Ying; Cui, Can; Liu, Zexi; Liu, Bingxin; Xu, Jin; Zhu, Xueyuan; Hou, Yongchao

    2017-07-01

    Current marine oil spill detection and monitoring methods using high-resolution remote sensing imagery are quite limited. This study presents a new bottom-up and top-down visual saliency model, using Landsat 8, GF-1, MAMS, and HJ-1 oil spill imagery as the dataset. A simplified, graph-based visual saliency model was used to extract bottom-up saliency, identifying regions containing highly salient objects in the ocean. A spectral similarity match model was used to obtain top-down saliency, distinguishing oil regions and excluding other salient interference by their spectra. The regions of interest containing oil spills were integrated using these complementary saliency detection steps, and a genetic neural network was then used to complete the image classification. These steps increased the speed of analysis: for the test dataset, the average running time of the entire process to detect regions of interest was 204.56 s. During image segmentation, the oil spill was extracted using the genetic neural network. The classification results showed that the method had a low false-alarm rate (high accuracy of 91.42%) and was able to increase the speed of the detection process (fast runtime of 19.88 s). The test image dataset was composed of different types of features over large areas under complicated imaging conditions, and the proposed model proved robust in complex sea conditions.

  2. Natural image statistics and low-complexity feature selection.

    PubMed

    Vasconcelos, Manuela; Vasconcelos, Nuno

    2009-02-01

    Low-complexity feature selection is analyzed in the context of visual recognition. It is hypothesized that high-order dependences of bandpass features contain little information for discrimination of natural images. This hypothesis is characterized formally by the introduction of the concepts of conjunctive interference and decomposability order of a feature set. Necessary and sufficient conditions for the feasibility of low-complexity feature selection are then derived in terms of these concepts. It is shown that the intrinsic complexity of feature selection is determined by the decomposability order of the feature set and not its dimension. Feature selection algorithms are then derived for all levels of complexity and are shown to be approximated by existing information-theoretic methods, which they consistently outperform. The new algorithms are also used to objectively test the hypothesis of low decomposability order through comparison of classification performance. It is shown that, for image classification, the gain of modeling feature dependencies has strongly diminishing returns: best results are obtained under the assumption of decomposability order 1. This suggests a generic law for bandpass features extracted from natural images: that the effect, on the dependence of any two features, of observing any other feature is constant across image classes.

  3. Affective Aspects of Perceived Loss of Control and Potential Implications for Brain-Computer Interfaces.

    PubMed

    Grissmann, Sebastian; Zander, Thorsten O; Faller, Josef; Brönstrup, Jonas; Kelava, Augustin; Gramann, Klaus; Gerjets, Peter

    2017-01-01

    Most brain-computer interfaces (BCIs) focus on detecting single aspects of user states (e.g., motor imagery) in the electroencephalogram (EEG) in order to use these aspects as control input for external systems. This communication can be effective, but unaccounted-for mental processes can interfere with the signals used for classification and thereby introduce changes in signal properties which could impede BCI classification performance. To improve BCI performance, we propose an approach that could potentially describe different mental states that influence BCI performance. To test this approach, we analyzed neural signatures of potential affective states in data collected in a paradigm where the complex user state of perceived loss of control (LOC) was induced. In this article, source localization methods were used to identify brain dynamics with sources located outside of, but affecting, the signal of interest originating from the primary motor areas, pointing to interfering processes in the brain during natural human-machine interaction. In particular, we found affective correlates which were related to perceived LOC. We conclude that additional context information about the ongoing user state might help to improve the applicability of BCIs to real-world scenarios.

  4. Affective Aspects of Perceived Loss of Control and Potential Implications for Brain-Computer Interfaces

    PubMed Central

    Grissmann, Sebastian; Zander, Thorsten O.; Faller, Josef; Brönstrup, Jonas; Kelava, Augustin; Gramann, Klaus; Gerjets, Peter

    2017-01-01

    Most brain-computer interfaces (BCIs) focus on detecting single aspects of user states (e.g., motor imagery) in the electroencephalogram (EEG) in order to use these aspects as control input for external systems. This communication can be effective, but unaccounted-for mental processes can interfere with the signals used for classification and thereby introduce changes in signal properties which could impede BCI classification performance. To improve BCI performance, we propose an approach that could potentially describe different mental states that influence BCI performance. To test this approach, we analyzed neural signatures of potential affective states in data collected in a paradigm where the complex user state of perceived loss of control (LOC) was induced. In this article, source localization methods were used to identify brain dynamics with sources located outside of, but affecting, the signal of interest originating from the primary motor areas, pointing to interfering processes in the brain during natural human-machine interaction. In particular, we found affective correlates which were related to perceived LOC. We conclude that additional context information about the ongoing user state might help to improve the applicability of BCIs to real-world scenarios. PMID:28769776

  5. Post-standardization of routine creatinine assays: are they suitable for clinical applications?

    PubMed

    Jassam, Nuthar; Weykamp, Cas; Thomas, Annette; Secchiero, Sandra; Sciacovelli, Laura; Plebani, Mario; Thelen, Marc; Cobbaert, Christa; Perich, Carmen; Ricós, Carmen; Paula, Faria A; Barth, Julian H

    2017-05-01

    Introduction: Reliable serum creatinine measurements are of vital importance for the correct classification of chronic kidney disease and early identification of kidney injury. The National Kidney Disease Education Programme working group and other groups have defined clinically acceptable analytical limits for creatinine methods. The aim of this study was to re-evaluate the performance of routine creatinine methods in the light of these defined limits so as to assess their suitability for clinical practice. Method: In collaboration with the Dutch External Quality Assurance scheme, six frozen commutable samples, with creatinine concentrations ranging from 80 to 239 μmol/L and traceable to isotope dilution mass spectrometry, were circulated to 91 laboratories in four European countries for creatinine measurement and estimated glomerular filtration rate calculation. Two of the six samples were spiked with glucose to give high and low final glucose concentrations. Results: Results from 89 laboratories were analysed for bias and imprecision (%CV) for each creatinine assay, and for total error of the estimated glomerular filtration rate. The participating laboratories used analytical instruments from four manufacturers: Abbott, Beckman, Roche and Siemens. All enzymatic methods in this study complied with the National Kidney Disease Education Programme working group recommended bias limit of 5% above a creatinine concentration of 100 μmol/L. They also did not show any evidence of interference from glucose, and showed compliance with the clinically recommended %CV of ≤4% across the analytical range. In contrast, the Jaffe methods showed variable performance with regard to glucose interference and unsatisfactory bias and precision. Conclusion: Jaffe-based creatinine methods still exhibit considerable analytical variability in terms of bias, imprecision and lack of specificity, and this variability brings into question their clinical utility. 
We believe that clinical laboratories and manufacturers should work together to phase out the use of relatively non-specific Jaffe methods and replace them with more specific methods that are enzyme based.
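    The bias and imprecision criteria in such an evaluation reduce to simple statistics. A minimal sketch, with hypothetical replicate values checked against the NKDEP-style limits quoted above (bias within 5%, %CV ≤ 4%):

```python
def assay_performance(measured, target):
    """Percent bias and %CV of replicate measurements against an
    IDMS-traceable target concentration."""
    n = len(measured)
    mean = sum(measured) / n
    bias_pct = 100.0 * (mean - target) / target
    var = sum((x - mean) ** 2 for x in measured) / (n - 1)  # sample variance
    cv_pct = 100.0 * (var ** 0.5) / mean
    return bias_pct, cv_pct

# Hypothetical replicates of a 100 umol/L IDMS-traceable sample:
bias, cv = assay_performance([101.0, 99.5, 100.8, 100.2, 99.9], 100.0)
meets_limits = abs(bias) <= 5.0 and cv <= 4.0
```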

  6. Tensorial dynamic time warping with articulation index representation for efficient audio-template learning.

    PubMed

    Le, Long N; Jones, Douglas L

    2018-03-01

    Audio classification techniques often depend on the availability of a large labeled training dataset for successful performance. However, in many application domains of audio classification (e.g., wildlife monitoring), obtaining labeled data is still a costly and laborious process. Motivated by this observation, a technique is proposed to efficiently learn a clean template from a few labeled, but likely corrupted (by noise and interferences), data samples. This learning can be done efficiently via tensorial dynamic time warping on the articulation index-based time-frequency representations of audio data. The learned template can then be used in audio classification following the standard template-based approach. Experimental results show that the proposed approach outperforms both (1) the recurrent neural network approach and (2) the state-of-the-art in the template-based approach on a wildlife detection application with few training samples.
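    The template-matching core is the dynamic-time-warping distance. A minimal sketch of the classic 1-D recurrence follows; the paper's tensorial variant, which operates on articulation-index time-frequency representations, is not attempted here:

```python
def dtw(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences,
    filled in by dynamic programming over all monotone alignments."""
    n, m = len(a), len(b)
    inf = float("inf")
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

    A noisy recording is then assigned the label of the learned template with the smallest warped distance.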

  7. Interference Mitigation Effects on Synthetic Aperture Radar Coherent Data Products

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Musgrove, Cameron

    2014-05-01

    For synthetic aperture radar image products, interference can degrade the quality of the images, while techniques to mitigate the interference also reduce image quality. Usually the radar system designer will try to balance the amount of mitigation against the amount of interference to optimize image quality. This may work well in many situations, but coherent data products derived from the image products are more sensitive than the human eye to distortions caused by interference and its mitigation. This dissertation examines the effect that interference and interference mitigation have upon coherent data products. An improvement to the standard notch mitigation is introduced, called the equalization notch. Other methods are suggested to mitigate interference while improving the quality of coherent data products over existing methods.

  8. Pedigree data analysis with crossover interference.

    PubMed Central

    Browning, Sharon

    2003-01-01

    We propose a new method for calculating probabilities for pedigree genetic data that incorporates crossover interference using the chi-square models. Applications include relationship inference, genetic map construction, and linkage analysis. The method is based on importance sampling of unobserved inheritance patterns conditional on the observed genotype data and takes advantage of fast algorithms for no-interference models while using reweighting to allow for interference. We show that the method is effective for arbitrarily many markers with small pedigrees. PMID:12930760
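    The reweighting idea can be sketched generically: sample under a cheap proposal model and weight each draw by the ratio of target to proposal probability. The Poisson models below are illustrative stand-ins, not the chi-square interference model or inheritance-pattern sampling of the paper:

```python
import math
import random

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def sample_poisson(lam, rng):
    # Knuth's method: multiply uniforms until the product drops below exp(-lam).
    limit, k, prod = math.exp(-lam), 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

rng = random.Random(42)
lam_proposal, lam_target = 2.0, 1.5   # fast "no-interference" vs target model
samples = [sample_poisson(lam_proposal, rng) for _ in range(100000)]
weights = [poisson_pmf(x, lam_target) / poisson_pmf(x, lam_proposal)
           for x in samples]
# Self-normalized importance-sampling estimate of E[X] under the target model:
estimate = sum(w * x for w, x in zip(weights, samples)) / sum(weights)
```

    The same pattern lets a fast no-interference pedigree algorithm do the sampling while the weights account for interference.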

  9. Detection of Coastline Deformation Using Remote Sensing and Geodetic Surveys

    NASA Astrophysics Data System (ADS)

    Sabuncu, A.; Dogru, A.; Ozener, H.; Turgut, B.

    2016-06-01

    Coastal areas are being destroyed by uses that disturb the natural balance, chief among them indiscriminate sand mining from the sea for nearshore nourishment and construction. Physical interference from sand mining poses an ecological threat to the coastal environment. However, the use of marine sand is often inevitable for economic reasons or where land-based sand resources are unobtainable. The most practical solution to this protection-versus-usage dilemma is to reduce the negative impacts of marine sand production, which depends on accurate determination of criteria on the place, style, and amount of production. With this motivation, nearshore geodetic surveys were performed at the Kilyos Campus of Bogazici University, located on the Black Sea coast north of Istanbul, Turkey, between 2001 and 2002. The study area extends 1 km alongshore. A geodetic survey was carried out in the summer of 2001 to establish the initial condition of the shoreline, and long-term seasonal changes in shoreline position were then determined biannually. The coast was measured with post-processed kinematic GPS. In addition, shoreline change was studied using Landsat imagery between 1986 and 2015: Landsat 5 images dated 05.08.1986 and 31.08.2007, and Landsat 7 images dated 21.07.2001 and 28.07.2015. Land cover types in the study area were analyzed using pixel-based classification. First, unsupervised classification based on ISODATA (Iterative Self-Organizing Data Analysis Technique) was applied, and the resulting spectral clusters provided prior knowledge about the study area. In the second step, supervised classification was carried out using three different approaches: minimum distance, parallelepiped, and maximum likelihood. All pixel-based classification processing was performed with the ENVI 4.8 image processing software. Results of the geodetic studies and the classification outputs are presented in this paper.
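    Of the three supervised approaches mentioned, the minimum-distance classifier is the simplest to sketch: each pixel is assigned to the class whose mean spectral vector is nearest in Euclidean distance. The class names and band values below are illustrative, not from the study.

```python
def minimum_distance_classify(pixel, class_means):
    """Assign a pixel (spectral vector) to the class with the nearest mean."""
    def dist2(a, b):  # squared Euclidean distance; sqrt is monotone, so skip it
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(class_means, key=lambda c: dist2(pixel, class_means[c]))
```

    Training-sample means per class are estimated first (that is the "supervised" part); classification is then a nearest-mean lookup per pixel.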

  10. Comparison of different shielding methods in acquisition of physiological signals.

    PubMed

    Yanbing Jiang; Ning Ji; Hui Wang; Xueyu Liu; Yanjuan Geng; Peng Li; Shixiong Chen; Guanglin Li

    2017-07-01

    Power line interference in the surrounding environment usually introduces many difficulties when collecting and analyzing physiological signals. Since power line interference is usually several orders of magnitude larger than the physiological electrical signals, methods of suppressing it should be considered during signal acquisition. Many studies used a hardware or software band-stop filter to suppress power line interference, but such filters can easily cause attenuation and distortion of the signal of interest. In this study, two methods that use different signals to drive the shields of the electrodes were proposed to reduce the impact of power line interference. Three channels of two physiological signals (ECG and EMG) were simultaneously collected when the electrodes were not shielded (No-Shield), shielded by ground signals (GND-Shield), and shielded by buffered signals of the corresponding electrodes (Active-Shield), respectively, on a custom hardware platform based on the TI ADS1299. The results showed that power line interference was significantly suppressed by the shielding approaches, and that the Active-Shield method achieved the best performance, with a power line interference reduction of up to 36 dB. The study suggests that the Active-Shield method at the analog front end is a strong candidate for reducing power line interference in routine acquisitions of physiological signals.
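    The reported reduction of up to 36 dB is a ratio of residual interference amplitudes; as a sketch (function names are illustrative), it can be computed from the RMS of the 50/60 Hz residue measured with and without shielding:

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a sampled signal."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def suppression_db(unshielded, shielded):
    """Interference suppression in dB between two recordings of the residue."""
    return 20.0 * math.log10(rms(unshielded) / rms(shielded))
```

    Halving the residue amplitude gives about 6 dB; the 36 dB figure corresponds to an amplitude ratio of roughly 63.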

  11. Importance of resonance interference effects in multigroup self-shielding calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stachowski, R.E.; Protsik, R.

    1995-12-31

    The impact of the resonance interference factor (RIF) method on multigroup neutron cross sections is significant for the major isotopes in the fuel, indicating the importance of resonance interference in the computation of gadolinia burnout and plutonium buildup. The self-shielding factor method combined with the RIF method effectively eliminates shortcomings in multigroup resonance calculations.

  12. Intelligibility of Target Signals in Sequential and Simultaneous Segregation Tasks

    DTIC Science & Technology

    2009-03-01

    SUBJECT TERMS: informational masking; energetic masking; multimasker penalty; speech perception. ...alternation rates were high enough to directly interfere with the perception of the F0 values of the speech signals and that they thus disrupted the... segregation effects seen in this experiment and those in which stream segregation with tones was examined. Experiments examining the perception of...

  13. Characterization of an Outdoor Ambient Radio Frequency Environment

    DTIC Science & Technology

    2016-02-16

    The ambient radio frequency environment (sometimes referred to as "radio frequency noise") is characterized prior to testing of a specific system under test (SUT). With this characterization, locations can be selected to avoid RF... SUBJECT TERMS: spectrum analyzer; ambient RF noise floor; RF interference.

  14. Identifying Breast Tumor Suppressors Using in Vitro and in Vivo RNAi Screens

    DTIC Science & Technology

    2011-10-01

    SUBJECT TERMS: in vivo RNA interference screen; breast cancer; tumor suppressor; leukemia inhibitory factor receptor (LIFR). The identification of these genes will improve the understanding of the causes of breast cancer, which may lead to therapeutic advancements for breast cancer prevention and treatment. Objective 1: Identification of breast tumor suppressors using in vitro and in vivo RNAi screens.

  15. Determination of toxigenic fungi and aflatoxins in nuts and dried fruits using imaging and spectroscopic techniques.

    PubMed

    Wu, Qifang; Xie, Lijuan; Xu, Huirong

    2018-06-30

    Nuts and dried fruits contain rich nutrients and are thus highly vulnerable to contamination with toxigenic fungi and aflatoxins because of poor weather, processing and storage conditions. Imaging and spectroscopic techniques have proven to be potential alternative tools to wet chemistry methods for efficient and non-destructive determination of contamination with fungi and toxins. Thus, this review provides an overview of the current developments and applications in frequently used food safety testing techniques, including near infrared spectroscopy (NIRS), mid-infrared spectroscopy (MIRS), conventional imaging techniques (colour imaging (CI) and hyperspectral imaging (HSI)), and fluorescence spectroscopy and imaging (FS/FI). Interesting classification and determination results can be found in both static and on/in-line real-time detection for contaminated nuts and dried fruits. Although these techniques offer many benefits over conventional methods, challenges remain in terms of heterogeneous distribution of toxins, background constituent interference, model robustness, detection limits, sorting efficiency, as well as instrument development. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Enhanced conditioned eyeblink response acquisition and proactive interference in anxiety vulnerable individuals

    PubMed Central

    Holloway, Jacqueline L.; Trivedi, Payal; Myers, Catherine E.; Servatius, Richard J.

    2012-01-01

    In classical conditioning, proactive interference may arise from experience with the conditioned stimulus (CS), the unconditional stimulus (US), or both, prior to their paired presentations. Interest in the application of proactive interference has extended to clinical populations as either a risk factor for disorders or as a secondary sign. Although the current literature is dense with comparisons of stimulus pre-exposure effects in animals, such comparisons are lacking in human subjects. As such, interpretation of proactive interference over studies as well as its generalization and utility in clinical research is limited. The present study was designed to assess eyeblink response acquisition after equal numbers of CS, US, and explicitly unpaired CS and US pre-exposures, as well as to evaluate how anxiety vulnerability might modulate proactive interference. In the current study, anxiety vulnerability was assessed using the State/Trait Anxiety Inventories as well as the adult and retrospective measures of behavioral inhibition (AMBI and RMBI, respectively). Participants were exposed to 1 of 4 possible pre-exposure contingencies: 30 CS, 30 US, 30 CS, and 30 US explicitly unpaired pre-exposures, or Context pre-exposure, immediately prior to standard delay training. Robust proactive interference was evident in all pre-exposure groups relative to Context pre-exposure, independent of anxiety classification, with CR acquisition attenuated at similar rates. In addition, trait anxious individuals were found to have enhanced overall acquisition as well as greater proactive interference relative to non-vulnerable individuals. The findings suggest that anxiety vulnerable individuals learn implicit associations faster, an effect which persists after the introduction of new stimulus contingencies. This effect is not due to enhanced sensitivity to the US. Such differences would have implications for the development of anxiety psychopathology within a learning framework. 
PMID:23162449

  17. Enhanced conditioned eyeblink response acquisition and proactive interference in anxiety vulnerable individuals.

    PubMed

    Holloway, Jacqueline L; Trivedi, Payal; Myers, Catherine E; Servatius, Richard J

    2012-01-01

    In classical conditioning, proactive interference may arise from experience with the conditioned stimulus (CS), the unconditional stimulus (US), or both, prior to their paired presentations. Interest in the application of proactive interference has extended to clinical populations as either a risk factor for disorders or as a secondary sign. Although the current literature is dense with comparisons of stimulus pre-exposure effects in animals, such comparisons are lacking in human subjects. As such, interpretation of proactive interference over studies as well as its generalization and utility in clinical research is limited. The present study was designed to assess eyeblink response acquisition after equal numbers of CS, US, and explicitly unpaired CS and US pre-exposures, as well as to evaluate how anxiety vulnerability might modulate proactive interference. In the current study, anxiety vulnerability was assessed using the State/Trait Anxiety Inventories as well as the adult and retrospective measures of behavioral inhibition (AMBI and RMBI, respectively). Participants were exposed to 1 of 4 possible pre-exposure contingencies: 30 CS, 30 US, 30 CS, and 30 US explicitly unpaired pre-exposures, or Context pre-exposure, immediately prior to standard delay training. Robust proactive interference was evident in all pre-exposure groups relative to Context pre-exposure, independent of anxiety classification, with CR acquisition attenuated at similar rates. In addition, trait anxious individuals were found to have enhanced overall acquisition as well as greater proactive interference relative to non-vulnerable individuals. The findings suggest that anxiety vulnerable individuals learn implicit associations faster, an effect which persists after the introduction of new stimulus contingencies. This effect is not due to enhanced sensitivity to the US. Such differences would have implications for the development of anxiety psychopathology within a learning framework.

  18. Heart rate calculation from ensemble brain wave using wavelet and Teager-Kaiser energy operator.

    PubMed

    Srinivasan, Jayaraman; Adithya, V

    2015-01-01

    Electroencephalogram (EEG) signals are contaminated by artifacts from various sources, such as the electro-oculogram (EOG), electromyogram (EMG), electrocardiogram (ECG), movement, and line interference. The relatively high electrical energy of cardiac activity introduces ECG artifacts into the EEG. The general approach in EEG signal processing is to remove the ECG signal. In this paper, we introduce an automated method to extract the ECG signal from the EEG using a wavelet transform and the Teager-Kaiser energy operator for R-peak enhancement and detection. From the detected R-peaks, the heart rate (HR) is calculated for clinical diagnosis. To check the efficiency of our method, we compare the HR calculated from an ECG signal recorded synchronously with the EEG. The proposed method yields a mean error of 1.4% for the heart rate and 1.7% for the mean R-R interval. The results illustrate that the proposed method can extract the ECG from single-channel EEG for clinical use, for example in stress analysis, fatigue estimation, and sleep-stage classification studies within a multi-modal system. In addition, the method eliminates the need for an additional synchronous ECG recording when extracting the ECG from the EEG signal.
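    The discrete Teager-Kaiser energy operator used here for R-peak enhancement has a simple closed form, psi[n] = x[n]^2 - x[n-1]*x[n+1]. A minimal sketch (the wavelet stage and the peak picking are omitted):

```python
def teager_kaiser(x):
    """Discrete Teager-Kaiser energy operator: psi[n] = x[n]**2 - x[n-1]*x[n+1].
    Output is two samples shorter than the input (the edges are undefined)."""
    return [x[n] ** 2 - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]
```

    For a sampled sinusoid A*sin(w*n) the output is approximately A^2*sin^2(w), i.e. it grows with both amplitude and frequency, which is why sharp, energetic R-peaks stand out against slower EEG background activity.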

  19. A new method of evaluating the side wall interference effect on airfoil angle of attack by suction from the side walls

    NASA Technical Reports Server (NTRS)

    Sawada, H.; Sakakibara, S.; Sato, M.; Kanda, H.; Karasawa, T.

    1984-01-01

    A quantitative method for evaluating the effect of suction through a suction plate on the side walls is explained. Wind tunnel tests show that the total wall interference is basically described by the sum of the wall interference for the two-dimensional-flow case and the interference of the side walls.

  20. Determination of hydroxide and carbonate contents of alkaline electrolytes containing zinc

    NASA Technical Reports Server (NTRS)

    Otterson, D. A.

    1975-01-01

    A method to prevent zinc interference with the titration of OH(-) and CO3(2-) ions in alkaline electrolytes with standard acid is presented. The Ba-EDTA complex was tested and shown to prevent zinc interference with acid-base titrations without introducing other types of interference. Theoretical considerations indicate that this method can be used to prevent interference by other metals.

  1. Two techniques for eliminating luminol interference material and flow system configurations for luminol and firefly luciferase systems

    NASA Technical Reports Server (NTRS)

    Thomas, R. R.

    1976-01-01

    Two methods for eliminating luminol interference materials are described. One method eliminates interference from organic material by pre-reacting a sample with dilute hydrogen peroxide. The reaction rate resolution method for eliminating inorganic forms of interference is also described. The combination of the two methods makes the luminol system more specific for bacteria. Flow system designs for both the firefly luciferase and luminol bacteria detection systems are described. The firefly luciferase flow system, incorporating nitric acid extraction and optimal dilutions, has a functional sensitivity of 3 x 10^5 E. coli/ml. The luminol flow system incorporates the hydrogen peroxide pretreatment and the reaction rate resolution techniques for eliminating interference; its functional sensitivity is 1 x 10^4 E. coli/ml.

  2. A New Reassigned Spectrogram Method in Interference Detection for GNSS Receivers.

    PubMed

    Sun, Kewen; Jin, Tian; Yang, Dongkai

    2015-09-02

    Interference detection is very important for Global Navigation Satellite System (GNSS) receivers. Current work on interference detection in GNSS receivers has mainly focused on time-frequency (TF) analysis techniques, such as the spectrogram and the Wigner-Ville distribution (WVD), where the spectrogram approach presents the TF resolution trade-off problem, since an analysis window is used, and the WVD method suffers from a very serious cross-term problem, due to its quadratic TF distribution nature. In order to solve the cross-term problem and to preserve good TF resolution in the TF plane at the same time, in this paper, a new TF distribution using a reassigned spectrogram is proposed for interference detection in GNSS receivers. The proposed reassigned spectrogram method efficiently combines the elimination of cross-terms provided by the spectrogram itself, according to its inherent nature, with the improvement of the TF aggregation property achieved by the reassignment method. Moreover, a notch filter is adopted for interference mitigation in GNSS receivers, where receiver operating characteristics (ROCs) are used as metrics to characterize interference mitigation performance. The proposed interference detection method using a reassigned spectrogram is evaluated by experiments on GPS L1 signals in disturbing scenarios, in comparison to state-of-the-art TF analysis approaches. The analysis results show that the proposed interference detection technique effectively overcomes the cross-term problem and also keeps good TF localization properties, which has been proven valid and effective in enhancing interference detection performance; in addition, the adoption of the notch filter for interference mitigation shows a significant acquisition performance improvement in terms of ROC curves for GNSS receivers in jamming environments.
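    Reassignment itself is beyond a short sketch, but the underlying spectrogram-based detection idea can be illustrated: compute short-time spectral power and flag time-frequency cells whose power stands far above the average. The naive DFT, window/hop sizes, and threshold factor below are illustrative assumptions, not the paper's implementation.

```python
import cmath

def stft_power(x, win=8, hop=4):
    """Naive short-time Fourier transform power (a spectrogram), rectangular window."""
    frames = []
    for start in range(0, len(x) - win + 1, hop):
        seg = x[start:start + win]
        row = []
        for k in range(win // 2 + 1):  # non-negative frequency bins
            s = sum(seg[n] * cmath.exp(-2j * cmath.pi * k * n / win) for n in range(win))
            row.append(abs(s) ** 2)
        frames.append(row)
    return frames  # frames[t][k]: power in frame t, frequency bin k

def detect_interference(x, win=8, hop=4, factor=10.0):
    """Flag time-frequency cells whose power exceeds factor * mean cell power."""
    S = stft_power(x, win, hop)
    cells = [p for row in S for p in row]
    mean = sum(cells) / len(cells)
    return [[p > factor * mean for p in row] for row in S]
```

    A strong jammer then shows up as a cluster of flagged cells confined to its frequency bins and its time frames; the windowed analysis is also what causes the TF resolution trade-off that reassignment mitigates.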

  3. A New Reassigned Spectrogram Method in Interference Detection for GNSS Receivers

    PubMed Central

    Sun, Kewen; Jin, Tian; Yang, Dongkai

    2015-01-01

    Interference detection is very important for Global Navigation Satellite System (GNSS) receivers. Current work on interference detection in GNSS receivers has mainly focused on time-frequency (TF) analysis techniques, such as spectrogram and Wigner–Ville distribution (WVD), where the spectrogram approach presents the TF resolution trade-off problem, since the analysis window is used, and the WVD method suffers from the very serious cross-term problem, due to its quadratic TF distribution nature. In order to solve the cross-term problem and to preserve good TF resolution in the TF plane at the same time, in this paper, a new TF distribution by using a reassigned spectrogram has been proposed in interference detection for GNSS receivers. This proposed reassigned spectrogram method efficiently combines the elimination of the cross-term provided by the spectrogram itself according to its inherent nature and the improvement of the TF aggregation property achieved by the reassignment method. Moreover, a notch filter has been adopted in interference mitigation for GNSS receivers, where receiver operating characteristics (ROCs) are used as metrics for the characterization of interference mitigation performance. The proposed interference detection method by using a reassigned spectrogram is evaluated by experiments on GPS L1 signals in the disturbing scenarios in comparison to the state-of-the-art TF analysis approaches. The analysis results show that the proposed interference detection technique effectively overcomes the cross-term problem and also keeps good TF localization properties, which has been proven to be valid and effective to enhance the interference detection performance; in addition, the adoption of the notch filter in interference mitigation has shown a significant acquisition performance improvement in terms of ROC curves for GNSS receivers in jamming environments. PMID:26364637

  4. Numerical techniques for high-throughput reflectance interference biosensing

    NASA Astrophysics Data System (ADS)

    Sevenler, Derin; Ünlü, M. Selim

    2016-06-01

    We have developed a robust and rapid computational method for processing the raw spectral data collected from thin-film optical interference biosensors. We have applied this method to Interference Reflectance Imaging Sensor (IRIS) measurements and observed a 10,000-fold improvement in processing time, unlocking a variety of clinical and scientific applications. Interference biosensors have advantages over similar technologies in certain applications, for example highly multiplexed measurements of molecular kinetics. However, processing raw IRIS data into useful measurements has been prohibitively time-consuming for high-throughput studies. Here we describe the implementation of a lookup table (LUT) technique that provides accurate results in far less time than naive methods. An additional benefit is that the LUT method can be used with a wider range of interference layer thicknesses and experimental configurations that are incompatible with methods requiring a fit of the spectral response.
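    The IRIS processing details are not given here, but the lookup-table idea can be sketched with a toy two-beam interference model: precompute the model spectrum for a grid of film thicknesses once, then match each measured spectrum to its nearest table entry instead of running an iterative fit per spot. The reflectance model, refractive index, and grids below are illustrative assumptions.

```python
import math

def model_reflectance(thickness_nm, wavelengths_nm, n_film=1.46):
    """Toy two-beam thin-film model: reflectance follows the round-trip phase."""
    return [0.5 + 0.5 * math.cos(4.0 * math.pi * n_film * thickness_nm / lam)
            for lam in wavelengths_nm]

def build_lut(thicknesses_nm, wavelengths_nm):
    """Precompute one model spectrum per candidate thickness (done once)."""
    return {t: model_reflectance(t, wavelengths_nm) for t in thicknesses_nm}

def lookup_thickness(measured, lut):
    """Nearest table entry by sum of squared spectral residuals."""
    def sse(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(lut, key=lambda t: sse(measured, lut[t]))
```

    The speedup comes from amortization: the table is built once, and each of the millions of pixels then costs only a nearest-neighbour search rather than a nonlinear fit.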

  5. An evaluation of deficits in semantic cueing and proactive and retroactive interference as early features of Alzheimer's disease.

    PubMed

    Crocco, Elizabeth; Curiel, Rosie E; Acevedo, Amarilis; Czaja, Sara J; Loewenstein, David A

    2014-09-01

    To determine the degree to which susceptibility to different types of semantic interference may reflect the initial manifestations of early Alzheimer's disease (AD) beyond the effects of global memory impairment. Normal elderly (NE) subjects (n = 47), subjects with amnestic mild cognitive impairment (aMCI; n = 34), and subjects with probable AD (n = 40) were evaluated by using a unique cued recall paradigm that allowed for evaluation of both proactive and retroactive interference effects while controlling for global memory impairment (i.e., Loewenstein-Acevedo Scales of Semantic Interference and Learning [LASSI-L] procedure). Controlling for overall memory impairment, aMCI subjects had much greater proactive and retroactive interference effects than NE subjects. LASSI-L indices of learning by using cued recall revealed high levels of sensitivity and specificity, with an overall correct classification rate of 90%. These measures provided better discrimination than traditional neuropsychological measures of memory function. The LASSI-L paradigm is unique and unlike other assessments of memory in that items posed for cued recall are explicitly presented, and semantic interference and cueing effects can be assessed while controlling for initial level of memory impairment. This is a powerful procedure that allows the participant to serve as his or her own control. The high levels of discrimination between subjects with aMCI and normal cognition that exceeded traditional neuropsychological measures makes the LASSI-L worthy of further research in the detection of early AD. Copyright © 2014 American Association for Geriatric Psychiatry. Published by Elsevier Inc. All rights reserved.

  6. Method of managing interference during delay recovery on a train system

    DOEpatents

    Gordon, Susanna P.; Evans, John A.

    2005-12-27

    The present invention provides methods for preventing low train voltages and managing interference, thereby improving the efficiency, reliability, and passenger comfort associated with commuter trains. An algorithm implementing neural network technology is used to predict low voltages before they occur. Once low voltages are predicted, multiple trains can be controlled to prevent low-voltage events. Further, algorithms for managing interference are presented. Different types of interference problems are addressed, such as "Interference During Acceleration", "Interference Near Station Stops", and "Interference During Delay Recovery". Managing such interference avoids unnecessary brake/acceleration cycles during acceleration, immediately before station stops, and after substantial delays. Algorithms are demonstrated that avoid oscillatory brake/acceleration cycles due to interference and that smooth the trajectories of closely following trains. This is achieved by maintaining sufficient following distances to avoid unnecessary braking and accelerating. These methods generate smooth train trajectories, making for a more comfortable ride, and improve train motor reliability by avoiding unnecessary mode changes between propulsion and braking. They can also have a favorable impact on traction power system requirements and energy consumption.

  7. An Improved Time-Frequency Analysis Method in Interference Detection for GNSS Receivers

    PubMed Central

    Sun, Kewen; Jin, Tian; Yang, Dongkai

    2015-01-01

    In this paper, an improved joint time-frequency (TF) analysis method based on a reassigned smoothed pseudo Wigner–Ville distribution (RSPWVD) has been proposed in interference detection for Global Navigation Satellite System (GNSS) receivers. In the RSPWVD, the two-dimensional low-pass filtering smoothing function is introduced to eliminate the cross-terms present in the quadratic TF distribution, and at the same time, the reassignment method is adopted to improve the TF concentration properties of the auto-terms of the signal components. This proposed interference detection method is evaluated by experiments on GPS L1 signals in the disturbing scenarios compared to the state-of-the-art interference detection approaches. The analysis results show that the proposed interference detection technique effectively overcomes the cross-terms problem and also preserves good TF localization properties, which has been proven to be effective and valid to enhance the interference detection performance of the GNSS receivers, particularly in the jamming environments. PMID:25905704

  8. Talker Localization Based on Interference between Transmitted and Reflected Audible Sound

    NASA Astrophysics Data System (ADS)

    Nakayama, Masato; Nakasako, Noboru; Shinohara, Toshihiro; Uebo, Tetsuji

    In many engineering fields, the distance to a target is very important. General distance measurement methods use the time delay between transmitted and reflected waves, but short distances are difficult to estimate this way. On the other hand, a method using phase interference to measure short distances is known in the field of microwave radar. We have therefore proposed a distance estimation method based on interference between transmitted and reflected audible sound, which can measure the distance between a microphone and a target with one microphone and one loudspeaker. In this paper, we propose a talker localization method based on this distance estimation using phase interference. We extend the distance estimation method to two microphones (a microphone array) in order to estimate the talker position from the measured distance and direction between the target and the array. In addition, the talker's own speech acts as noise in the proposed method; we therefore also propose combining it with the cross-power spectrum phase analysis (CSP) method, one of the direction-of-arrival (DOA) estimation methods. We evaluated talker localization performance in real environments, and the experimental results show the effectiveness of the proposed method.
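    The phase-interference principle can be sketched under a toy single-reflection model: the direct and reflected sounds interfere with a spectral period of c/(2d), so the spacing of the interference minima over frequency reveals the distance. The speed of sound, reflection coefficient, and function names below are illustrative assumptions, not the paper's formulation.

```python
import cmath
import math

C = 343.0  # nominal speed of sound in air (m/s), an assumed constant

def interference_magnitude(freqs_hz, distance_m, reflect=0.8):
    """|direct + reflected| at the microphone under a toy single-echo model."""
    return [abs(1.0 + reflect * cmath.exp(-4j * math.pi * f * distance_m / C))
            for f in freqs_hz]

def estimate_distance(freqs_hz, magnitude):
    """d = c / (2 * delta_f), with delta_f the mean spacing of spectral minima."""
    minima = [freqs_hz[i] for i in range(1, len(magnitude) - 1)
              if magnitude[i] < magnitude[i - 1] and magnitude[i] < magnitude[i + 1]]
    gaps = [b - a for a, b in zip(minima, minima[1:])]
    return C / (2.0 * (sum(gaps) / len(gaps)))
```

    Sweeping a wide audible band and reading off the minima spacing is what lets one microphone and one loudspeaker resolve distances that a time-delay measurement would miss.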

  9. Beam Design and User Scheduling for Nonorthogonal Multiple Access With Multiple Antennas Based on Pareto Optimality

    NASA Astrophysics Data System (ADS)

    Seo, Junyeong; Sung, Youngchul

    2018-06-01

    In this paper, an efficient transmit beam design and user scheduling method is proposed for multi-user (MU) multiple-input single-output (MISO) non-orthogonal multiple access (NOMA) downlink, based on Pareto optimality. The proposed method groups simultaneously served users into multiple clusters, with a practical two users per cluster, then applies spatial zero-forcing (ZF) across clusters to control inter-cluster interference (ICI) and Pareto-optimal beam design with successive interference cancellation (SIC) to the two users in each cluster, removing interference to strong users and leveraging the signal-to-interference-plus-noise ratios (SINRs) of the interference-experiencing weak users. The proposed method has the flexibility to control the rates of strong and weak users, and numerical results show that it yields good performance.
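    The inter-cluster zero-forcing step can be sketched for two clusters: with one channel row per cluster, the ZF beams are the columns of H^H (H H^H)^{-1}, so each beam is orthogonal to the other cluster's channel. The Pareto-optimal intra-cluster design and SIC are omitted, and the channel values in the test are illustrative.

```python
def zero_forcing_beams(h1, h2):
    """ZF beams for two cluster channel rows h1, h2 (lists of complex gains).
    Returns (w1, w2) with sum(h_i[k]*w_j[k]) == 0 for i != j and == 1 for i == j."""
    def inner(a, b):  # <a, b> = sum_k a[k] * conj(b[k])
        return sum(x * y.conjugate() for x, y in zip(a, b))
    # Gram matrix G = H H^H for H with rows h1, h2 (Hermitian, 2x2)
    g11, g12, g22 = inner(h1, h1), inner(h1, h2), inner(h2, h2)
    g21 = g12.conjugate()
    det = g11 * g22 - g12 * g21
    i11, i12, i21, i22 = g22 / det, -g12 / det, -g21 / det, g11 / det
    # W = H^H G^{-1}; its columns are the beams, so H W = I by construction
    w1 = [h1[k].conjugate() * i11 + h2[k].conjugate() * i21 for k in range(len(h1))]
    w2 = [h1[k].conjugate() * i12 + h2[k].conjugate() * i22 for k in range(len(h1))]
    return w1, w2
```

    Because each cluster's beam lies in the null space of the other cluster's channel, the remaining interference to manage is purely intra-cluster, which is what the SIC stage handles.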

  10. Improved signal processing approaches in an offline simulation of a hybrid brain–computer interface

    PubMed Central

    Brunner, Clemens; Allison, Brendan Z.; Krusienski, Dean J.; Kaiser, Vera; Müller-Putz, Gernot R.; Pfurtscheller, Gert; Neuper, Christa

    2012-01-01

    In a conventional brain–computer interface (BCI) system, users perform mental tasks that yield specific patterns of brain activity. A pattern recognition system determines which brain activity pattern a user is producing and thereby infers the user’s mental task, allowing users to send messages or commands through brain activity alone. Unfortunately, despite extensive research to improve classification accuracy, BCIs almost always exhibit errors, which are sometimes so severe that effective communication is impossible. We recently introduced a new idea to improve accuracy, especially for users with poor performance. In an offline simulation of a “hybrid” BCI, subjects performed two mental tasks independently and then simultaneously. This hybrid BCI could use two different types of brain signals common in BCIs – event-related desynchronization (ERD) and steady-state evoked potentials (SSEPs). This study suggested that such a hybrid BCI is feasible. Here, we re-analyzed the data from our initial study. We explored eight different signal processing methods that aimed to improve classification and further assess both the causes and the extent of the benefits of the hybrid condition. Most analyses showed that the improved methods described here yielded a statistically significant improvement over our initial study. Some of these improvements could be relevant to conventional BCIs as well. Moreover, the number of illiterates could be reduced with the hybrid condition. Results are also discussed in terms of dual task interference and relevance to protocol design in hybrid BCIs. PMID:20153371

  11. Evaluating the independence of sex and expression in judgments of faces.

    PubMed

    Le Gal, Patricia M; Bruce, Vicki

    2002-02-01

    Face recognition models suggest independent processing for functionally different types of information, such as identity, expression, sex, and facial speech. Interference between sex and expression information was tested using both a rating study and Garner's selective attention paradigm using speeded sex and expression decisions. When participants were asked to assess the masculinity of male and female angry and surprised faces, they found surprised faces to be more feminine than angry ones (Experiment 1). However, in a speeded-classification situation in the laboratory in which the sex decision was either "easy" relative to the expression decision (Experiment 2) or of more equivalent difficulty (Experiment 3), it was possible for participants to attend selectively to either dimension without interference from the other. Qualified support is offered for independent processing routes.

  12. Radio frequency interference mitigation using deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Akeret, J.; Chang, C.; Lucchi, A.; Refregier, A.

    2017-01-01

    We propose a novel approach for mitigating radio frequency interference (RFI) signals in radio data using the latest advances in deep learning. We employ a special type of convolutional neural network, the U-Net, that enables the classification of clean signal and RFI signatures in 2D time-ordered data acquired from a radio telescope. We train and assess the performance of this network using the HIDE & SEEK radio data simulation and processing packages, as well as early Science Verification data acquired with the 7 m single-dish telescope at the Bleien Observatory. We find that our U-Net implementation achieves accuracy competitive with classical RFI mitigation algorithms such as SEEK's SUMTHRESHOLD implementation. We publish our U-Net software package on GitHub under the GPLv3 license.
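    The SUMTHRESHOLD baseline mentioned here flags progressively longer runs of samples against progressively lower thresholds, so that long, faint RFI bursts are caught as well as short, bright ones. Below is a single-pass simplification (the real algorithm iterates and substitutes already-flagged values); chi1 and rho are illustrative defaults.

```python
import math

def sum_threshold(values, window_sizes=(1, 2, 4), chi1=5.0, rho=1.5):
    """Single-pass simplification of SumThreshold: a run of M consecutive
    samples is flagged when its mean exceeds chi1 / rho**log2(M)."""
    flags = [False] * len(values)
    for m in window_sizes:
        thresh = chi1 / (rho ** math.log2(m))  # lower threshold for longer runs
        for start in range(len(values) - m + 1):
            if sum(values[start:start + m]) / m > thresh:
                for i in range(start, start + m):
                    flags[i] = True
    return flags
```

    Applied per frequency channel and per time step of the 2D data, such masks form the same kind of flag map that the U-Net is trained to predict directly.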

  13. Evaluation of bilirubin concentration in hemolysed samples, is it really impossible? The altitude-curve cartography approach to interfered assays.

    PubMed

    Brunori, Paola; Masi, Piergiorgio; Faggiani, Luigi; Villani, Luciano; Tronchin, Michele; Galli, Claudio; Laube, Clarissa; Leoni, Antonella; Demi, Maila; La Gioia, Antonio

    2011-04-11

    Neonatal jaundice can lead to severe clinical consequences. Measurement of bilirubin is subject to interference from hemolysis. Above a method-dependent cut-off value of measured hemolysis, the bilirubin value is rejected and a new sample is required, although re-sampling is not always possible, especially with newborns and cachectic oncological patients. When switching to a different, less interference-prone method is not feasible, an alternative method for recovering the analytical significance of rejected data may help clinicians take appropriate decisions. We studied the effects of hemolysis on total bilirubin measurement, comparing the hemolysis-interfered bilirubin measurement with the non-interfered value. Interference curves were extrapolated over a wide range of bilirubin (0-30 mg/mL) and hemolysis (H index 0-1100). Interference "altitude" curves were calculated and plotted, and a bimodal acceptance table was derived. The non-interfered bilirubin of a given sample was calculated by linear interpolation between the nearest lower and upper interference curves. Rejection of interference-sensitive data from hemolysed samples should be based not on the interferent concentration alone but on a more complex algorithm reflecting the concentration-dependent, bimodal interaction between the interfered analyte and the measured interferent. The altitude-curve cartography approach to interfered assays may help laboratories build their own method-dependent algorithm and improve the trueness of their data by choosing a cut-off value different from the one (-10% interference) proposed by manufacturers. When re-sampling or an alternative method is not available, it may also serve to recover the analytical significance of otherwise rejected data. Copyright © 2011 Elsevier B.V. All rights reserved.
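    The interpolation step of the altitude-curve idea can be sketched as follows. The curve family below is entirely hypothetical (arbitrary units, invented values); the point is only the mechanics of reading the non-interfered value between the nearest lower and upper interference curves at the sample's H index.

```python
import numpy as np

# Hypothetical "altitude" curves: for each true (non-interfered) bilirubin
# level, the measured bilirubin as a function of the hemolysis (H) index.
H_GRID = np.array([0.0, 200.0, 400.0, 600.0, 800.0, 1100.0])
CURVES = {  # true bilirubin level (arbitrary units) -> measured values on H_GRID
    5.0:  np.array([5.0, 4.6, 4.1, 3.7, 3.2, 2.6]),
    10.0: np.array([10.0, 9.3, 8.5, 7.6, 6.8, 5.5]),
    20.0: np.array([20.0, 18.8, 17.4, 15.9, 14.3, 11.9]),
}

def non_interfered_bilirubin(measured, h_index):
    """Linearly interpolate the true bilirubin between the nearest lower
    and upper interference curves at the sample's H index."""
    levels = sorted(CURVES)
    # the measured value each curve predicts at this H index
    at_h = {lvl: np.interp(h_index, H_GRID, CURVES[lvl]) for lvl in levels}
    for lo, hi in zip(levels, levels[1:]):
        if at_h[lo] <= measured <= at_h[hi]:
            frac = (measured - at_h[lo]) / (at_h[hi] - at_h[lo])
            return lo + frac * (hi - lo)
    raise ValueError("measured value outside the mapped curve family")
```

A laboratory would replace `CURVES` with its own method-dependent interference curves, measured as described in the abstract.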

  14. Interference coupling analysis based on a hybrid method: application to a radio telescope system

    NASA Astrophysics Data System (ADS)

    Xu, Qing-Lin; Qiu, Yang; Tian, Jin; Liu, Qi

    2018-02-01

    Because it passively receives electromagnetic radiation from celestial bodies, a radio telescope is easily disturbed by external radio frequency interference as well as by electromagnetic interference generated by electric and electronic components operating at the telescope site. These interferences must be analyzed quantitatively and carefully before further electromagnetic protection of the radio telescope. In this paper, based on electromagnetic topology theory, a hybrid method that combines the Baum-Liu-Tesche (BLT) equation and the transfer function is proposed. In this method, the coupling paths of the radio telescope are divided into strong-coupling and weak-coupling sub-paths, and a coupling intensity criterion is derived by analyzing the conditions under which the BLT equation simplifies to a transfer function. According to this criterion, a topological model of a typical radio telescope system is established. The proposed method solves for the interference response of the radio telescope system by analyzing subsystems with different coupling modes separately and then integrating the subsystem responses into the response of the entire system. The validity of the proposed method is verified numerically. The results indicate that, compared with the direct solving method, the proposed method reduces the difficulty and improves the efficiency of interference prediction.

  15. [Research on the method of interference correction for nondispersive infrared multi-component gas analysis].

    PubMed

    Sun, You-Wen; Liu, Wen-Qing; Wang, Shi-Mei; Huang, Shu-Hua; Yu, Xiao-Man

    2011-10-01

    A method of interference correction for nondispersive infrared (NDIR) multi-component gas analysis is described. Following successive-integral gas absorption models and methods, the influence of temperature and pressure on the integral line strengths and line shapes was considered, and, based on Lorentz line shapes, the absorption cross sections and response coefficients of H2O, CO2, CO, and NO on each filter channel were obtained. Four-dimensional linear regression equations for interference correction were established from the response coefficients; the cross-absorption interference was corrected by solving these equations, after which the pure absorbance signal on each filter channel was controlled only by the concentration of the corresponding target gas. When the sample cell was filled with a gas mixture of CO, NO, and CO2 in fixed concentration proportions, the interference-corrected pure absorbance was used for concentration inversion; the inversion concentration errors were 2.0% for CO2, 1.6% for CO, and 1.7% for NO. Both theory and experiment show that the proposed interference correction method for NDIR multi-component gas analysis is feasible.
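    The correction step described above amounts to solving a small linear system: with the response coefficients of each gas on each filter channel arranged in a matrix, the interference-free absorbances follow directly. The coefficient values below are hypothetical placeholders, not the paper's measured coefficients.

```python
import numpy as np

# Hypothetical response-coefficient matrix K: K[i, j] is the absorbance
# response of gas j (H2O, CO2, CO, NO) on filter channel i.
K = np.array([
    [1.00, 0.05, 0.02, 0.01],   # H2O channel
    [0.03, 1.00, 0.08, 0.02],   # CO2 channel
    [0.02, 0.06, 1.00, 0.04],   # CO channel
    [0.01, 0.03, 0.05, 1.00],   # NO channel
])

def correct_cross_interference(measured):
    """Solve K @ x = measured for the interference-free absorbances x."""
    return np.linalg.solve(K, np.asarray(measured, dtype=float))
```

After this solve, each channel's recovered absorbance depends only on its own target gas, which is exactly the condition the abstract states for concentration inversion.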

  16. Neural systems and time course of proactive interference in working memory.

    PubMed

    Du, Yingchun; Zhang, John X; Xiao, Zhuangwei; Wu, Renhua

    2007-01-01

    The storage of information in working memory suffers as a function of proactive interference. Many neuroimaging studies have examined the brain mechanisms of interference resolution; however, less is known about the time course of this process. The event-related potential (ERP) method and standardized low-resolution brain electromagnetic tomography (sLORETA) were used in this study to uncover the time course of interference resolution in working memory. The anterior P2 was thought to reflect interference resolution; if so, this process occurs earlier in working memory than in long-term memory.

  17. Herschel's Interference Demonstration.

    ERIC Educational Resources Information Center

    Perkalskis, Benjamin S.; Freeman, J. Reuben

    2000-01-01

    Describes Herschel's demonstration of interference arising from many coherent rays. Presents a method for students to reproduce this demonstration and obtain beautiful multiple-beam interference patterns. (CCM)

  18. Classical reconstruction of interference patterns of position-wave-vector-entangled photon pairs by the time-reversal method

    NASA Astrophysics Data System (ADS)

    Ogawa, Kazuhisa; Kobayashi, Hirokazu; Tomita, Akihisa

    2018-02-01

    The quantum interference of entangled photons forms a key phenomenon underlying various quantum-optical technologies. It is known that the quantum interference patterns of entangled photon pairs can be reconstructed classically by the time-reversal method; however, the time-reversal method has been applied only to time-frequency-entangled two-photon systems in previous experiments. Here, we apply the time-reversal method to the position-wave-vector-entangled two-photon systems: the two-photon Young interferometer and the two-photon beam focusing system. We experimentally demonstrate that the time-reversed systems classically reconstruct the same interference patterns as the position-wave-vector-entangled two-photon systems.

  19. Approach for Text Classification Based on the Similarity Measurement between Normal Cloud Models

    PubMed Central

    Dai, Jin; Liu, Xin

    2014-01-01

    The similarity between objects is a core research area of data mining. To reduce interference from the uncertainty of natural language, a similarity measurement between normal cloud models is applied to text classification. On this basis, a novel text classifier based on cloud concept jumping up (CCJU-TC) is proposed, which efficiently converts between qualitative concepts and quantitative data. Through the conversion from a text set to a text information table based on the VSM model, the qualitative text concepts extracted from the same category are jumped up into a whole-category concept. According to the cloud similarity between a test text and each category concept, the test text is assigned to the most similar category. Comparisons among different text classifiers over different feature selection sets show that CCJU-TC not only adapts well to different text features but also outperforms traditional classifiers. PMID:24711737

  20. Spatiotemporal signal space separation method for rejecting nearby interference in MEG measurements

    NASA Astrophysics Data System (ADS)

    Taulu, S.; Simola, J.

    2006-04-01

    Limitations of traditional magnetoencephalography (MEG) exclude some important patient groups from MEG examinations, such as epilepsy patients with a vagus nerve stimulator, patients with magnetic particles on the head, or patients with magnetic dental materials that cause severe movement-related artefact signals. Conventional interference rejection methods are not able to remove artefacts originating this close to the MEG sensor array. For example, the reference array method is unable to suppress interference generated by sources closer to the sensors than the reference array, about 20-40 cm. The spatiotemporal signal space separation method proposed in this paper recognizes and removes both external interference and the artefacts produced by these nearby sources, even on the scalp. First, the basic separation into brain-related and external interference signals is accomplished with signal space separation, based only on sensor geometry and Maxwell's equations. After this, the artefacts from nearby sources are extracted by a simple statistical analysis in the time domain and projected out. Practical examples with artificial current dipoles and interference sources, as well as data from real patients, demonstrate that the method removes the artefacts without altering the field patterns of the brain signals.
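    The final "projected out" step can be sketched in isolation: once artefact time courses have been identified (by whatever statistical analysis), they are removed from every channel with an orthogonal projector. This is a generic projection sketch, not the SSS basis decomposition itself.

```python
import numpy as np

def project_out(data, artifacts):
    """Project identified artefact time courses out of multichannel data.

    data:      (n_channels, n_times) sensor signals
    artifacts: (n_artifacts, n_times) artefact waveforms to remove
    """
    A = np.atleast_2d(np.asarray(artifacts, dtype=float))
    # orthogonal projector onto the complement of the artefact subspace
    P = np.eye(A.shape[1]) - A.T @ np.linalg.pinv(A.T)
    return data @ P
```

After projection, every channel is exactly orthogonal to the removed artefact waveforms, while components outside the artefact subspace pass through unchanged.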

  1. Interference correction by extracting the information of interference dominant regions: Application to near-infrared spectra

    NASA Astrophysics Data System (ADS)

    Bi, Yiming; Tang, Liang; Shan, Peng; Xie, Qiong; Hu, Yong; Peng, Silong; Tan, Jie; Li, Changwen

    2014-08-01

    Interference such as baseline drift and light scattering can degrade model predictability in multivariate analysis of near-infrared (NIR) spectra. Such interference can usually be represented by an additive and a multiplicative factor, and correction parameters must be estimated from the spectra to eliminate it. However, the spectra mix physical light-scattering effects with chemical light-absorbance effects, making parameter estimation difficult. Herein, a novel algorithm is proposed to automatically find a spectral region in which the chemical absorbance of interest and the noise are both low, that is, an interference dominant region (IDR). Based on the definition of the IDR, a two-step method is proposed to find the optimal IDR and the correction parameters estimated from it. Finally, the correction is applied over the full spectral range, using the previously obtained parameters, to the calibration and test sets respectively. The method can be applied to multi-target systems, with one IDR suitable for all targeted analytes. Tested on two benchmark near-infrared data sets, the proposed method provided considerable improvement over full-spectrum estimation methods and was comparable with other state-of-the-art methods.
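    The additive-plus-multiplicative correction can be sketched as follows: fit the offset and scale of each spectrum against a reference spectrum using only the IDR, then apply that correction over the full range. This is a hedged sketch in the spirit of multiplicative scatter correction restricted to a region; the choice of reference (e.g. the calibration-set mean) and the IDR itself are assumptions here, not the paper's two-step IDR search.

```python
import numpy as np

def idr_correct(spectra, reference, idr):
    """Fit x[idr] ≈ a + b * reference[idr] per spectrum, then apply
    the correction (x - a) / b over the full wavelength range."""
    spectra = np.atleast_2d(np.asarray(spectra, dtype=float))
    out = np.empty_like(spectra)
    for i, x in enumerate(spectra):
        b, a = np.polyfit(reference[idr], x[idr], 1)  # slope, intercept
        out[i] = (x - a) / b
    return out
```

Because the fit uses only wavelengths where chemical absorbance is low, the estimated `a` and `b` capture the interference rather than the analyte signal.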

  2. Impaired memory for material related to a problem solved prior to encoding: suppression at learning or interference at recall?

    PubMed

    Kowalczyk, Marek

    2017-07-01

    Earlier research by the author revealed that material encoded incidentally in a speeded affective classification task and related to the demands of a divergent problem tends to be recalled worse by participants who solved the problem prior to encoding than by participants in the control, no-problem condition. The aim of the present experiment was to replicate this effect with a new, size-comparison orienting task and to test possible mechanisms of the impaired recall. Participants either solved a problem before the orienting task or not, and classified each item in this task either once or three times. There was a reliable effect of impaired recall of problem-related items in the repetition condition, but not in the no-repetition condition. Solving the problem did not influence repetition priming for these items. These results support an account that attributes the impaired recall to inhibitory processes at learning and speak against a proactive interference explanation. However, they can also be accommodated by an account that refers to inefficient context cues and competitor interference at retrieval.

  3. A simplified fourwall interference assessment procedure for airfoil data obtained in the Langley 0.3-meter transonic cryogenic tunnel

    NASA Technical Reports Server (NTRS)

    Murthy, A. V.

    1987-01-01

    A simplified four-wall interference assessment method is described, and a computer program has been developed to facilitate correction of airfoil data obtained in the Langley 0.3-m Transonic Cryogenic Tunnel (TCT). The procedure first applies a blockage correction for sidewall boundary-layer effects by various methods. The sidewall-corrected data are then used to calculate the top and bottom wall interference effects by the method of Capallier, Chevallier and Bouinol, using the measured wall pressure distribution and the model force coefficients. The interference corrections obtained by the present method have been compared with other methods and found to be in good agreement for experimental data obtained in the TCT with slotted top and bottom walls.

  4. Test Operations Procedure (TOP) 01-1-020 Tropical Regions Environmental Considerations

    DTIC Science & Technology

    2013-02-08

    and Soldier systems tests. 1.3 Purpose. This is an overview TOP and is organized to provide background information on the humid tropical region...processes that foul materiel and interfere with equipment and systems under test. Military testing at present and in the future requires much greater...accordance with the Holdridge Life Zone Classification System18. This system is based on the theory that vegetation structure (type) is directly

  5. Coulomb-free and Coulomb-distorted recolliding quantum orbits in photoelectron holography

    NASA Astrophysics Data System (ADS)

    Maxwell, A. S.; Figueira de Morisson Faria, C.

    2018-06-01

    We perform a detailed analysis of the different types of orbits in the Coulomb quantum orbit strong-field approximation (CQSFA), ranging from direct orbits to those undergoing hard collisions. We show that some of them exhibit clear counterparts in the standard formulations of the strong-field approximation for direct and rescattered above-threshold ionization, and that the standard orbit classification commonly used in Coulomb-corrected models is over-simplified. We identify several types of rescattered orbits, such as those responsible for the low-energy structures reported in the literature, and determine the momentum regions in which they occur. We also find formerly overlooked interference patterns caused by backscattered Coulomb-corrected orbits and assess their effect on photoelectron angular distributions. These orbits improve the agreement of photoelectron angular distributions computed with the CQSFA with the outcome of ab initio methods for high-energy photoelectrons emitted perpendicular to the field polarization axis.

  6. 7 CFR 28.179 - Methods of cotton classification and comparison.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Methods of cotton classification and comparison. 28... STANDARD CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.179 Methods of cotton classification and comparison. The classification of samples from...

  7. 7 CFR 28.179 - Methods of cotton classification and comparison.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Methods of cotton classification and comparison. 28... STANDARD CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.179 Methods of cotton classification and comparison. The classification of samples from...

  8. 7 CFR 28.179 - Methods of cotton classification and comparison.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Methods of cotton classification and comparison. 28... STANDARD CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.179 Methods of cotton classification and comparison. The classification of samples from...

  9. 7 CFR 28.179 - Methods of cotton classification and comparison.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Methods of cotton classification and comparison. 28... STANDARD CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.179 Methods of cotton classification and comparison. The classification of samples from...

  10. 7 CFR 28.179 - Methods of cotton classification and comparison.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Methods of cotton classification and comparison. 28... STANDARD CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.179 Methods of cotton classification and comparison. The classification of samples from...

  11. Laser interference effect evaluation method based on character of laser-spot and image feature

    NASA Astrophysics Data System (ADS)

    Tang, Jianfeng; Luo, Xiaolin; Wu, Lingxia

    2016-10-01

    Evaluating the laser interference effect on CCDs objectively and accurately has great research value. Starting from the change in image features before and after interference, and considering the influence of the laser-spot distribution on the degree to which image feature information is masked, a laser interference effect evaluation method based on laser-spot character and image features is proposed. The laser-spot distribution is characterized by the distance between the center of the laser spot and the center of the target. The change in global image features is measured by the change in the image's sparse coefficient matrix, obtained with the SSIM-inspired orthogonal matching pursuit (OMP) sparse coding algorithm. The change in local image features is measured by the change in the image's edge sharpness, obtained from the change in the image's gradient magnitude. Taken together, these allow the laser interference effect to be evaluated accurately. Laser interference experiments show that the proposed method is rational and feasible under disturbances at different laser powers, and that it overcomes the inaccuracy caused by changes in the laser-spot position, realizing an objective and accurate evaluation of the laser interference effect.
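    The local-feature term can be sketched with a toy edge-sharpness score based on gradient magnitude: a laser spot that saturates a region washes out edges there and lowers the score. This is a minimal stand-in, not the paper's exact metric.

```python
import numpy as np

def edge_sharpness(image):
    """Mean gradient magnitude of a 2D image (toy edge-sharpness score)."""
    gy, gx = np.gradient(image.astype(float))   # row and column gradients
    return float(np.mean(np.hypot(gx, gy)))
```

Comparing this score before and after interference gives a scalar measure of how much local edge information the laser spot has masked.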

  12. Event identification by acoustic signature recognition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dress, W.B.; Kercel, S.W.

    1995-07-01

    Many events of interest to the security community produce acoustic emissions that are, in principle, identifiable as to cause. Some obvious examples are gunshots, breaking glass, takeoffs and landings of small aircraft, vehicular engine noises, footsteps (high frequencies when on gravel, very low frequencies when on soil), and voices (whispers to shouts). We are investigating wavelet-based methods to extract unique features of such events for classification and identification. We also discuss methods of classification and pattern recognition specifically tailored for acoustic signatures obtained by wavelet analysis. The paper is divided into three parts: completed work, work in progress, and future applications. The completed phase has led to the successful recognition of aircraft types on landing and takeoff. Both small aircraft (twin-engine turboprop) and large aircraft (commercial airliners) were included in the study. The project considered the design of a small, field-deployable, inexpensive device. The techniques developed during the aircraft identification phase were then adapted to a multispectral electromagnetic interference monitoring device now deployed in a nuclear power plant. This is a general-purpose wavelet analysis engine, spanning 14 octaves, that can be adapted for other specific tasks. Work in progress is focused on applying the methods previously developed to speaker identification. Some of the problems to be overcome include recognition of sounds as voice patterns distinct from possible background noises (e.g., music), as well as identification of the speaker from a short-duration voice sample. A generalization of the completed work and the work in progress is a device capable of classifying any number of acoustic events, particularly quasi-stationary events such as engine noises and voices, and singular events such as gunshots and breaking glass. We will show examples of both kinds of events and discuss their recognition likelihood.

  13. Application of multiple signal classification algorithm to frequency estimation in coherent dual-frequency lidar

    NASA Astrophysics Data System (ADS)

    Li, Ruixiao; Li, Kun; Zhao, Changming

    2018-01-01

    Coherent dual-frequency lidar (CDFL) is a recent development in lidar that uses a dual-frequency laser to measure range and velocity with high precision, dramatically reducing the influence of atmospheric interference. Based on the nature of CDFL signals, we propose applying the multiple signal classification (MUSIC) algorithm in place of the fast Fourier transform (FFT) to estimate the phase differences in dual-frequency lidar. In the presence of Gaussian white noise, simulation results show that the signal peaks are more evident with the MUSIC algorithm than with the FFT at low signal-to-noise ratio (SNR), which helps to improve the precision of range and velocity detection, especially for long-distance measurement systems.
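    MUSIC itself can be sketched compactly for the simplest case, estimating one sinusoid's frequency from noisy samples: build a sample correlation matrix, split its eigenvectors into signal and noise subspaces, and find the frequency whose steering vector is most orthogonal to the noise subspace. This is a generic textbook MUSIC sketch, not the paper's CDFL processing chain; the order `m` and grid size are arbitrary choices.

```python
import numpy as np

def music_freq(x, fs, p=2, m=20, grid=4000):
    """Estimate the dominant frequency of a noisy real sinusoid with MUSIC.

    p: signal-subspace dimension (2 for one real sinusoid)
    m: correlation-matrix order
    """
    x = np.asarray(x, dtype=float)
    n = x.size - m
    # sample correlation matrix from sliding windows of length m
    X = np.stack([x[i:i + m] for i in range(n)])
    R = X.T @ X / n
    w, v = np.linalg.eigh(R)              # eigenvalues in ascending order
    noise = v[:, :m - p]                  # noise-subspace eigenvectors
    freqs = np.linspace(0.0, fs / 2.0, grid)
    k = np.arange(m)
    # the MUSIC pseudospectrum peaks where the steering vector a(f) is
    # orthogonal to the noise subspace, i.e. where ||noise^T a(f)||^2 is minimal
    A = np.exp(2j * np.pi * np.outer(freqs, k) / fs)
    denom = np.sum(np.abs(A @ noise) ** 2, axis=1)
    return freqs[np.argmin(denom)]
```

Because the peak location comes from subspace orthogonality rather than spectral bin energy, the estimate stays sharp at SNRs where an FFT peak is already smeared, which is the advantage the abstract reports.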

  14. Numerical simulation of transonic compressor under circumferential inlet distortion and rotor/stator interference using harmonic balance method

    NASA Astrophysics Data System (ADS)

    Wang, Ziwei; Jiang, Xiong; Chen, Ti; Hao, Yan; Qiu, Min

    2018-05-01

    Simulating the unsteady flow in a compressor under circumferential inlet distortion and rotor/stator interference would require a full-annulus grid with a dual-time method. This process is time-consuming and needs a large amount of computational resources. The harmonic balance method simulates the unsteady flow in a compressor on a single-passage grid with a series of steady simulations, largely increasing the computational efficiency compared with the dual-time method. However, most harmonic balance simulations to date treat flow under either circumferential inlet distortion or rotor/stator interference alone. Based on an in-house CFD code, the harmonic balance method is here applied to the flow in NASA Stage 35 under both circumferential inlet distortion and rotor/stator interference. Because the unsteady flow is influenced by two different unsteady disturbances, the computation can become unstable; this instability is avoided by coupling the harmonic balance method with an optimizing algorithm. The harmonic balance result is compared with a full-annulus simulation, and the comparison indicates that the harmonic balance method captures the flow under circumferential inlet distortion and rotor/stator interference as precisely as the full-annulus simulation, with a speed-up of about 8 times.
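    The core idea of harmonic balance, replacing time marching with one steady algebraic equation per retained harmonic, can be shown on a toy forced linear oscillator rather than the Navier-Stokes equations. This is a deliberately minimal illustration of the principle, far removed from a CFD implementation; the oscillator and its parameters are assumptions of this sketch.

```python
import numpy as np

def harmonic_balance_response(c, k, F, omega, n_harm=3):
    """Periodic steady response of u'' + c u' + k u = F cos(omega t).

    Harmonic balance assumes a truncated Fourier series for u, turning the
    time derivative into per-harmonic algebraic factors; for this linear
    system each complex harmonic amplitude then solves in closed form.
    """
    U = np.zeros(n_harm + 1, dtype=complex)
    for n in range(1, n_harm + 1):
        Fn = F if n == 1 else 0.0          # the forcing has one harmonic
        U[n] = Fn / (k - (n * omega) ** 2 + 1j * c * n * omega)

    def u(t):
        t = np.asarray(t, dtype=float)
        return sum((U[n] * np.exp(1j * n * omega * t)).real
                   for n in range(1, n_harm + 1))
    return u
```

For a nonlinear system (like the compressor flow) the harmonic amplitudes couple and must be solved iteratively, but the payoff is the same: no time marching over many periods.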

  15. Association between personality traits and DSM-IV diagnosis of insomnia in perimenopausal women

    PubMed Central

    Sassoon, Stephanie A.; de Zambotti, Massimiliano; Colrain, Ian M.; Baker, Fiona C.

    2013-01-01

    Objective The aim of this study was to determine the role of personality factors in the development of DSM-IV insomnia coincident with perimenopause. Method Perimenopausal women (35 with DSM-IV insomnia and 28 with self-reported normal sleep) underwent clinical assessments and completed menopause-related questionnaires, the NEO-FFI, and the Structured Interview for DSM-IV Personality (SID-P). Logistic regressions determined whether personality factors and hot flash-related interference were associated with an insomnia diagnosis concurrent with the menopause transition. Results Women with insomnia reported higher neuroticism, lower agreeableness, and lower conscientiousness than controls on the NEO-FFI. Moreover, women with insomnia were more likely to meet DSM-IV criteria for Cluster C personality disorders, particularly obsessive compulsive personality disorder, on the SID-P. Women with insomnia were more likely to have had a past depressive episode and history of severe premenstrual symptoms. Findings from regressions revealed that higher neuroticism and greater interference from hot flashes were associated with insomnia classification even after controlling for history of depression, suggesting that sensitivity to hot flashes and a greater degree of neuroticism are independent contributors toward establishing which women are most likely to have sleep problems during perimenopause. Conclusions Findings show the relevance of personality factors, particularly neuroticism and obsessive-compulsive personality, in influencing a woman’s experience of insomnia as she goes through the menopause transition. PMID:24448105

  16. Chlorinated Cyanurates: Method Interferences and Application Implications

    EPA Science Inventory

    Experiments were conducted to investigate method interferences, residual stability, regulated DBP formation, and a water chemistry model associated with the use of Dichlor & Trichlor in drinking water.

  17. Resolution of common dietary sugars from probe sugars for test of intestinal permeability using capillary column gas chromatography.

    PubMed

    Farhadi, Ashkan; Keshavarzian, Ali; Fields, Jeremy Z; Sheikh, Maliha; Banan, Ali

    2006-05-19

    The most widely accepted method for evaluating intestinal barrier integrity is the measurement of the permeation of sugar probes following an oral test dose. The most widely used sugar probes are sucrose, lactulose, mannitol and sucralose. Measuring these sugars with a sensitive gas chromatographic (GC) method, we noticed interference in the areas of the lactulose and mannitol peaks. We tested different sugars to identify the possible sources of these interferences and found that lactose interferes with the lactulose peak and fructose interferes with the mannitol peak. In further developing our method, we were able to separate these peaks reasonably well using different columns and assay conditions. Sample preparation was rapid and simple, comprising addition of internal standard sugars, derivatization and silylation. We used two chromatographic methods. In the first, a Megabore column with a run time of 34 min gave partial separation of the peaks. In the second, a thin capillary column reasonably separated the lactose and lactulose peaks and the mannitol and fructose peaks with a run time of 22 min. The sugar probes, including mannitol, sucrose, lactulose, sucralose, fructose and lactose, were detected precisely, without interference. The assay was linear between lactulose concentrations of 0.5 and 40 g/L (r² = 1.000, P < 0.0001) and mannitol concentrations of 0.01 and 40 g/L (r² = 1.000). The sensitivity of the method remained high with the new column and assay conditions; the minimum detectable concentration for both methods was 0.5 mg/L for lactulose and 1 mg/L for mannitol. This is the first report of interference of commonly used sugars with tests of intestinal permeability. These sugars are found in most fruits and dairy products and could easily interfere with the results of permeability tests. Our new GC assay of urine sugar probes permits the simultaneous quantitation of sucralose, sucrose, mannitol and lactulose, without interference from lactose and fructose. The assay is a rapid, simple, sensitive and reproducible method to accurately measure intestinal permeability.

  18. Interferometric Laser Scanner for Direction Determination

    PubMed Central

    Kaloshin, Gennady; Lukin, Igor

    2016-01-01

    In this paper, we explore the potential capabilities of a new laser scanning-based method for direction determination. The method for fully coherent beams is extended to the case in which the interference pattern is produced in the turbulent atmosphere by two partially coherent sources. The theoretical analysis identified the conditions under which a stable pattern may form on extended paths 0.5–10 km in length. We describe a method for selecting laser scanner parameters that ensures the necessary operating range in the atmosphere for any possible turbulence characteristics. The method is based on analysis of the mean intensity of the interference pattern formed by two partially coherent sources of optical radiation. The visibility of the interference pattern is estimated as a function of propagation path length, the structure parameter of atmospheric turbulence, and the spacing of the radiation sources producing the pattern. It is shown that, when atmospheric turbulence is moderately strong, the contrast of the interference pattern of the laser scanner may ensure its applicability at ranges up to 10 km. PMID:26805841
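    The mean intensity and visibility quantities referred to above follow the textbook two-beam interference law, with partial coherence entering through the modulus of the degree of coherence |γ|. The sketch below is that standard relation only; the paper's turbulence-dependent model of |γ| is not reproduced here.

```python
import numpy as np

def fringe_pattern(i1, i2, gamma, phase):
    """Mean intensity of two partially coherent beams:
    I = I1 + I2 + 2*sqrt(I1*I2)*|gamma|*cos(phase)."""
    return i1 + i2 + 2.0 * np.sqrt(i1 * i2) * abs(gamma) * np.cos(phase)

def visibility(i1, i2, gamma):
    """Fringe visibility (Imax - Imin) / (Imax + Imin)."""
    return 2.0 * np.sqrt(i1 * i2) * abs(gamma) / (i1 + i2)
```

Turbulence reduces |γ| with path length, which is why the contrast of the scanner's pattern, and hence its usable range, depends on the structure parameter of atmospheric turbulence.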

  19. Interferometric Laser Scanner for Direction Determination.

    PubMed

    Kaloshin, Gennady; Lukin, Igor

    2016-01-21

    In this paper, we explore the potential capabilities of a new laser scanning-based method for direction determination. The method for fully coherent beams is extended to the case in which the interference pattern is produced in the turbulent atmosphere by two partially coherent sources. The theoretical analysis identified the conditions under which a stable pattern may form on extended paths 0.5-10 km in length. We describe a method for selecting laser scanner parameters that ensures the necessary operating range in the atmosphere for any possible turbulence characteristics. The method is based on analysis of the mean intensity of the interference pattern formed by two partially coherent sources of optical radiation. The visibility of the interference pattern is estimated as a function of propagation path length, the structure parameter of atmospheric turbulence, and the spacing of the radiation sources producing the pattern. It is shown that, when atmospheric turbulence is moderately strong, the contrast of the interference pattern of the laser scanner may ensure its applicability at ranges up to 10 km.

  20. Power line interference attenuation in multi-channel sEMG signals: Algorithms and analysis.

    PubMed

    Soedirdjo, S D H; Ullah, K; Merletti, R

    2015-08-01

    Electromyogram (EMG) recordings are often corrupted by power line interference (PLI) even when the skin is prepared and well-designed instruments are used. This study analyzes several recent and classical digital signal processing approaches that have been used to attenuate, if not eliminate, power line interference in EMG signals. A comparison of the signal-to-interference ratio (SIR) of the output signals is presented for four methods: the classical notch filter, spectral interpolation, the adaptive noise canceller with phase-locked loop (ANC-PLL), and the adaptive filter, applied to simulated multichannel monopolar EMG signals with different SIRs. The effect of each method on the shape of the EMG signals is also analyzed. The results show that the ANC-PLL method gives the best output SIR and the lowest shape distortion of the compared methods. Classical notch filtering is the simplest method, but some information may be lost because it removes both the interference and part of the EMG signal. The notch filter therefore has the lowest performance and introduces distortion into the resulting signals.
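The notch-filter baseline can be sketched in a few lines (numpy-only; the 50 Hz line frequency, 1 kHz sampling rate, and synthetic white-noise "EMG" are assumptions for illustration, not the study's data):

```python
import numpy as np

# Hypothetical parameters: 1 kHz sampling, 50 Hz power line interference.
fs, f0 = 1000.0, 50.0
w0 = 2 * np.pi * f0 / fs
r = 0.98  # pole radius: closer to 1 gives a narrower notch

# Second-order IIR notch: zeros on the unit circle at +/- w0,
# poles just inside at radius r.
b = np.array([1.0, -2.0 * np.cos(w0), 1.0])
a = np.array([1.0, -2.0 * r * np.cos(w0), r * r])

def iir_filter(b, a, x):
    """Direct-form II transposed IIR filtering for a 2nd-order section."""
    y = np.zeros_like(x)
    z1 = z2 = 0.0
    for n, xn in enumerate(x):
        yn = b[0] * xn + z1
        z1 = b[1] * xn - a[1] * yn + z2
        z2 = b[2] * xn - a[2] * yn
        y[n] = yn
    return y

# Synthetic "EMG" (white noise) corrupted by a 50 Hz interference tone.
rng = np.random.default_rng(0)
t = np.arange(2000) / fs
emg = rng.standard_normal(t.size)
pli = 2.0 * np.sin(2 * np.pi * f0 * t)
cleaned = iir_filter(b, a, emg + pli)

# SIR before/after (dB), using the known components of the simulation.
sir_before = 10 * np.log10(np.mean(emg**2) / np.mean(pli**2))
residual = cleaned - iir_filter(b, a, emg)  # interference left after filtering
sir_after = 10 * np.log10(np.mean(emg**2) / np.mean(residual**2))
print(f"SIR before: {sir_before:.1f} dB, after: {sir_after:.1f} dB")
```

This also makes the abstract's caveat concrete: the notch removes EMG energy near 50 Hz along with the interference, which is the shape distortion the study measures.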

  1. Visualizing the Solute Vaporization Interference in Flame Atomic Absorption Spectroscopy

    ERIC Educational Resources Information Center

    Dockery, Christopher R.; Blew, Michael J.; Goode, Scott R.

    2008-01-01

    Every day, tens of thousands of chemists use analytical atomic spectroscopy in their work, often without knowledge of possible interferences. We present a unique approach to study these interferences by using modern response surface methods to visualize an interference in which aluminum depresses the calcium atomic absorption signal. Calcium…

  2. Compensation method of cloud infrared radiation interference based on a spinning projectile's attitude measurement

    NASA Astrophysics Data System (ADS)

    Xu, Miaomiao; Bu, Xiongzhu; Yu, Jing; He, Zilu

    2018-01-01

    Based on the study of Earth infrared radiation and the requirement of anti-cloud-interference ability for a spinning projectile's infrared attitude measurement, a compensation method for cloud infrared radiation interference is proposed. First, a theoretical model of the infrared radiation interference is established by analyzing the generation mechanism and interference characteristics of cloud infrared radiation. Then, the influence of cloud infrared radiation on the attitude angle is calculated for two situations. In the first, the projectile is inside the cloud, and the roll angle error can reach ±20 deg. In the second, the projectile is outside the cloud, and the projectile's attitude angle cannot be measured at all. Finally, a multisensor weighted fusion algorithm based on a trust-function method is proposed to reduce the influence of cloud infrared radiation. Semiphysical experiments show that the roll angle error with the weighted fusion algorithm can be kept within ±0.5 deg in the presence of cloud infrared radiation interference. The proposed method improves the roll angle accuracy in attitude measurement by nearly a factor of four and solves the problem of low accuracy of infrared attitude measurement in the navigation and guidance field.
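The multisensor weighted-fusion step can be illustrated with inverse-variance weights (a simplified stand-in for the paper's trust-function weighting; the sensor values and variances below are hypothetical):

```python
import numpy as np

# Redundant roll-angle estimates from three sensors (deg); the third is
# degraded by cloud interference and is given a large error variance.
estimates = np.array([10.4, 9.8, 12.1])
variances = np.array([0.2, 0.3, 4.0])

# Inverse-variance weighting: trusted sensors dominate the fused result.
weights = (1.0 / variances) / np.sum(1.0 / variances)
fused = float(np.sum(weights * estimates))
print(f"fused roll angle: {fused:.2f} deg")
```

The fused estimate stays close to the two low-variance sensors, which is the qualitative behavior a trust-function fusion is meant to achieve under interference.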

  3. Proactive control of proactive interference using the method of loci.

    PubMed

    Bass, Willa S; Oswald, Karl M

    2014-01-01

    Proactive interference builds up with exposure to multiple lists of similar items, with a resulting reduction in recall. This study examined the effectiveness of using a proactive strategy, the method of loci, to reduce proactive interference in a list-recall paradigm of categorically similar words. While all participants reported using some form of strategy to recall list words, this study demonstrated that young adults were able to proactively use the method of loci after 25 min of instruction to reduce proactive interference, as compared with other personal spontaneous strategies. The implications of this study are that top-down proactive strategies such as the method of loci can significantly reduce proactive interference, and that the use of imagery and sequence or location is especially useful in this regard.

  4. Proactive control of proactive interference using the method of loci

    PubMed Central

    Bass, Willa S.; Oswald, Karl M.

    2014-01-01

    Proactive interference builds up with exposure to multiple lists of similar items, with a resulting reduction in recall. This study examined the effectiveness of using a proactive strategy, the method of loci, to reduce proactive interference in a list-recall paradigm of categorically similar words. While all participants reported using some form of strategy to recall list words, this study demonstrated that young adults were able to proactively use the method of loci after 25 min of instruction to reduce proactive interference, as compared with other personal spontaneous strategies. The implications of this study are that top-down proactive strategies such as the method of loci can significantly reduce proactive interference, and that the use of imagery and sequence or location is especially useful in this regard. PMID:25157300

  5. Ramsey method for Auger-electron interference induced by an attosecond twin pulse

    NASA Astrophysics Data System (ADS)

    Buth, Christian; Schafer, Kenneth J.

    2015-02-01

    We examine the archetype of an interference experiment for Auger electrons: two electron wave packets are launched by inner-shell ionizing a krypton atom using two attosecond light pulses with a variable time delay. This setting is an attosecond realization of the Ramsey method of separated oscillatory fields. Interference of the two ejected Auger-electron wave packets is predicted, indicating that the coherence between the two pulses is passed to the Auger electrons. For the detection of the interference pattern an accurate coincidence measurement of photo- and Auger electrons is necessary. The method allows one to control inner-shell electron dynamics on an attosecond timescale and represents a sensitive indicator for decoherence.

  6. Discrimination of biological and chemical threat simulants in residue mixtures on multiple substrates.

    PubMed

    Gottfried, Jennifer L

    2011-07-01

    The potential of laser-induced breakdown spectroscopy (LIBS) to discriminate biological and chemical threat simulant residues prepared on multiple substrates and in the presence of interferents has been explored. The simulant samples tested include Bacillus atrophaeus spores, Escherichia coli, MS-2 bacteriophage, α-hemolysin from Staphylococcus aureus, 2-chloroethyl ethyl sulfide, and dimethyl methylphosphonate. The residue samples were prepared on polycarbonate, stainless steel and aluminum foil substrates by Battelle Eastern Science and Technology Center. LIBS spectra were collected by Battelle on a portable LIBS instrument developed by A3 Technologies. This paper presents the chemometric analysis of the LIBS spectra using partial least-squares discriminant analysis (PLS-DA). The performance of PLS-DA models developed based on the full LIBS spectra, and selected emission intensities and ratios have been compared. The full-spectra models generally provided better classification results based on the inclusion of substrate emission features; however, the intensity/ratio models were able to correctly identify more types of simulant residues in the presence of interferents. The fusion of the two types of PLS-DA models resulted in a significant improvement in classification performance for models built using multiple substrates. In addition to identifying the major components of residue mixtures, minor components such as growth media and solvents can be identified with an appropriately designed PLS-DA model.

  7. Research on Classification of Chinese Text Data Based on SVM

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Yu, Hongzhi; Wan, Fucheng; Xu, Tao

    2017-09-01

    Data mining has important application value in today's industry and academia, and text classification is a very important technology within it. At present there are many mature algorithms for text classification; KNN, NB, AB, SVM, decision trees, and other classification methods all show good classification performance. The Support Vector Machine (SVM) is a well-established classifier in machine learning research. This paper studies the classification performance of the SVM method on Chinese text data, applies the support vector machine method to classify Chinese text, and aims to combine academic research with practical application.
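A linear SVM over bag-of-words features can be sketched with the Pegasos subgradient method (numpy-only; the tiny English corpus below stands in for segmented Chinese text and is entirely hypothetical):

```python
import numpy as np

# Toy corpus standing in for segmented text (labels: 1 = sports, -1 = finance).
docs = [
    ("match team goal win", 1),
    ("coach player score match", 1),
    ("league win team player", 1),
    ("stock market price bank", -1),
    ("bank loan price market", -1),
    ("stock bank profit loan", -1),
]

vocab = sorted({w for text, _ in docs for w in text.split()})
index = {w: i for i, w in enumerate(vocab)}

def vectorize(text):
    v = np.zeros(len(vocab))
    for w in text.split():
        if w in index:
            v[index[w]] += 1.0
    return v

X = np.array([vectorize(t) for t, _ in docs])
y = np.array([label for _, label in docs], dtype=float)

# Pegasos: stochastic subgradient descent on the hinge-loss SVM objective.
rng = np.random.default_rng(0)
w = np.zeros(X.shape[1])
lam = 0.01
for t in range(1, 2001):
    i = rng.integers(len(y))
    eta = 1.0 / (lam * t)
    if y[i] * X[i].dot(w) < 1:           # margin violated: hinge subgradient
        w = (1 - eta * lam) * w + eta * y[i] * X[i]
    else:
        w = (1 - eta * lam) * w

predict = lambda text: 1 if vectorize(text).dot(w) > 0 else -1
```

In practice a library SVM with TF-IDF features would replace this sketch; the point is only the hinge-loss linear separator the abstract refers to.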

  8. Supervised DNA Barcodes species classification: analysis, comparisons and results

    PubMed Central

    2014-01-01

    Background Specific fragments, coming from short portions of DNA (e.g., mitochondrial, nuclear, and plastid sequences), have been defined as DNA Barcode and can be used as markers for organisms of the main life kingdoms. Species classification with DNA Barcode sequences has been proven effective on different organisms. Indeed, specific gene regions have been identified as Barcode: COI in animals, rbcL and matK in plants, and ITS in fungi. The classification problem assigns an unknown specimen to a known species by analyzing its Barcode. This task has to be supported with reliable methods and algorithms. Methods In this work the efficacy of supervised machine learning methods to classify species with DNA Barcode sequences is shown. The Weka software suite, which includes a collection of supervised classification methods, is adopted to address the task of DNA Barcode analysis. Classifier families are tested on synthetic and empirical datasets belonging to the animal, fungus, and plant kingdoms. In particular, the function-based method Support Vector Machines (SVM), the rule-based RIPPER, the decision tree C4.5, and the Naïve Bayes method are considered. Additionally, the classification results are compared with respect to ad-hoc and well-established DNA Barcode classification methods. Results A software that converts the DNA Barcode FASTA sequences to the Weka format is released, to adapt different input formats and to allow the execution of the classification procedure. The analysis of results on synthetic and real datasets shows that SVM and Naïve Bayes outperform on average the other considered classifiers, although they do not provide a human interpretable classification model. Rule-based methods have slightly inferior classification performances, but deliver the species specific positions and nucleotide assignments. 
On synthetic data the supervised machine learning methods obtain superior classification performance with respect to the traditional DNA Barcode classification methods. On empirical data their classification performance is at a level comparable to the other methods. Conclusions The classification analysis shows that supervised machine learning methods are promising candidates for successfully handling the DNA Barcode species classification problem, obtaining excellent performance. To conclude, a powerful tool to perform species identification is now available to the DNA Barcoding community. PMID:24721333
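The Naïve Bayes approach mentioned above can be sketched on k-mer counts (a toy example: real Barcode loci such as COI are hundreds of base pairs long, and the six short sequences and species names here are invented):

```python
import numpy as np
from collections import Counter

# Hypothetical toy Barcode fragments for two "species".
train = [
    ("ACGTACGTACGTAC", "sp_A"), ("ACGTACCTACGTAC", "sp_A"), ("ACGTACGTACGAAC", "sp_A"),
    ("TTGGCCTTGGCCTT", "sp_B"), ("TTGGCATTGGCCTT", "sp_B"), ("TTGGCCTTGGACTT", "sp_B"),
]

def kmers(seq, k=3):
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

# Multinomial Naive Bayes over k-mer counts with Laplace smoothing.
classes = sorted({c for _, c in train})
counts = {c: Counter() for c in classes}
for seq, c in train:
    counts[c].update(kmers(seq))
vocab = sorted(set().union(*counts.values()))

def classify(seq):
    scores = {}
    for c in classes:
        total = sum(counts[c].values())
        scores[c] = sum(
            np.log((counts[c][km] + 1) / (total + len(vocab)))
            for km in kmers(seq)
        )
    return max(scores, key=scores.get)
```

This mirrors how a Weka-style Naïve Bayes classifier sees a Barcode once the FASTA sequence is converted to a feature vector of subsequence counts.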

  9. Polarization Smoothing Generalized MUSIC Algorithm with Polarization Sensitive Array for Low Angle Estimation.

    PubMed

    Tan, Jun; Nie, Zaiping

    2018-05-12

    Direction of arrival (DOA) estimation of low-altitude targets is difficult due to multipath coherent interference from the ground-reflection image of the target, especially for very high frequency (VHF) radars, whose antennas are severely restricted in aperture and height. In this paper, the polarization smoothing generalized multiple signal classification (MUSIC) algorithm, which combines polarization smoothing with the generalized MUSIC algorithm for polarization sensitive arrays (PSAs), is proposed to solve this problem. First, polarization smoothing pre-processing is exploited to eliminate the coherence between the direct and specular signals. Second, the generalized MUSIC algorithm is constructed for low-angle estimation. Finally, based on the geometry of the symmetric multipath model, the proposed algorithm converts the two-dimensional search into a one-dimensional search, reducing the computational burden. Numerical results verify the effectiveness of the proposed method, showing that it significantly improves angle estimation performance in the low-angle region compared with available methods, especially when the grazing angle is near zero.
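The core MUSIC step (eigendecomposition of the sample covariance followed by a noise-subspace pseudospectrum search) can be sketched for a plain uniform linear array. This omits the polarization smoothing and the generalized/low-angle refinements of the paper; the array size, true DOA, and noise level are assumptions:

```python
import numpy as np

M, d = 8, 0.5              # sensors, spacing in wavelengths (assumed)
true_deg = 12.0            # assumed source direction
snapshots = 200
rng = np.random.default_rng(1)

def steering(theta_deg):
    k = 2 * np.pi * d * np.sin(np.radians(theta_deg))
    return np.exp(1j * k * np.arange(M))

# Simulated array snapshots: one narrowband source plus sensor noise.
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
noise = 0.1 * (rng.standard_normal((M, snapshots))
               + 1j * rng.standard_normal((M, snapshots)))
X = np.outer(steering(true_deg), s) + noise

# Sample covariance and its eigendecomposition (ascending eigenvalues).
R = X @ X.conj().T / snapshots
eigval, eigvec = np.linalg.eigh(R)
En = eigvec[:, :-1]        # noise subspace, assuming one source

# MUSIC pseudospectrum: peaks where the steering vector is orthogonal
# to the noise subspace.
grid = np.arange(-30.0, 30.0, 0.1)
spectrum = [1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid]
estimate = grid[int(np.argmax(spectrum))]
print(f"estimated DOA: {estimate:.1f} deg")
```

The multipath problem the paper addresses arises precisely when a second, coherent path is added: the covariance becomes rank-deficient and this plain version fails, which is what the polarization smoothing pre-processing repairs.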

  10. Image analysis for material characterisation

    NASA Astrophysics Data System (ADS)

    Livens, Stefan

    In this thesis, a number of image analysis methods are presented as solutions to two applications concerning the characterisation of materials. Firstly, we deal with the characterisation of corrosion images, which is handled using a multiscale texture analysis method based on wavelets. We propose a feature transformation that deals with the problem of rotation invariance. Classification is performed with a Learning Vector Quantisation neural network and with combination of outputs. In an experiment, 86.2% of the images showing either pit formation or cracking are correctly classified. Secondly, we develop an automatic system for the characterisation of silver halide microcrystals. These are flat crystals with a triangular or hexagonal base and a thickness in the 100 to 200 nm range. A light microscope is used to image them. A novel segmentation method is proposed, which makes it possible to separate agglomerated crystals. For the measurement of shape, the ratio between the largest and the smallest radius yields the best results. The thickness measurement is based on the interference colours that appear for light reflected at the crystals. The mean colour of different thickness populations is determined, from which a calibration curve is derived. With this, the thickness of new populations can be determined accurately.

  11. Plume radiation

    NASA Astrophysics Data System (ADS)

    Dirscherl, R.

    1993-06-01

    The electromagnetic radiation originating from the exhaust plume of tactical missile motors is of outstanding importance for military system designers. Both missile and countermeasure engineers rely on knowledge of plume radiation properties, be it for guidance/interference control or for passive detection of adversary missiles. To allow access to plume radiation properties, they are characterized with respect to the radiation-producing mechanisms such as afterburning, its chemical constituents and reactions, as well as particle radiation. A classification of plume spectral emissivity regions is given according to the constraints imposed by available sensor technology and atmospheric propagation windows. Additionally, assessment methods are presented that allow a common and general grouping of rocket motor properties into various categories. These methods describe state-of-the-art experimental evaluation techniques as well as calculation codes that are most commonly used by developers of NATO countries. Dominant aspects influencing plume radiation are discussed, and a standardized test technique, including prediction procedures, is proposed for the assessment of plume radiation properties. These recommendations on terminology and assessment methods should be common to all users of plume radiation data. Special emphasis is put on the omnipresent need for self-protection by the passive detection of plume radiation in the ultraviolet (UV) and infrared (IR) spectral bands.

  12. Joint Concept Correlation and Feature-Concept Relevance Learning for Multilabel Classification.

    PubMed

    Zhao, Xiaowei; Ma, Zhigang; Li, Zhi; Li, Zhihui

    2018-02-01

    In recent years, multilabel classification has attracted significant attention in multimedia annotation. However, most multilabel classification methods focus only on the inherent correlations among multiple labels and concepts and ignore the relevance between features and the target concepts. To obtain more robust multilabel classification results, we propose a new multilabel classification method that captures the correlations among multiple concepts by leveraging a hypergraph, which has proved beneficial for relational learning. Moreover, we consider mining feature-concept relevance, which is often overlooked by multilabel learning algorithms. To better expose the feature-concept relevance, we impose a sparsity constraint on the proposed method. We compare the proposed method with several other multilabel classification methods and evaluate the classification performance by mean average precision on several data sets. The experimental results show that the proposed method outperforms the state-of-the-art methods.

  13. Object-Based Random Forest Classification of Land Cover from Remotely Sensed Imagery for Industrial and Mining Reclamation

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Luo, M.; Xu, L.; Zhou, X.; Ren, J.; Zhou, J.

    2018-04-01

    The RF method based on grid-search parameter optimization achieved a classification accuracy of 88.16 % in the classification of images with multiple feature variables, higher than that of SVM and ANN under the same feature variables. In terms of efficiency, the RF classification method also performs better than SVM and ANN and is more capable of handling multidimensional feature variables. Combining the RF method with an object-based analysis approach improves the classification accuracy further. The multiresolution segmentation approach, with ESP-based scale parameter optimization, was used to obtain six scales for image segmentation; when the segmentation scale was 49, the classification accuracy reached its highest value of 89.58 %. The accuracy of object-based RF classification was thus 1.42 % higher than that of pixel-based classification (88.16 %). Therefore, the RF classification method combined with an object-based analysis approach can achieve relatively high accuracy in the classification and extraction of land use information for industrial and mining reclamation areas. Moreover, interpretation of remotely sensed imagery using the proposed method can provide technical support and a theoretical reference for remote-sensing-based monitoring of land reclamation.
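Grid-search parameter optimization with cross-validation, as used here to tune the random forest, can be sketched generically (numpy-only; a toy k-NN classifier stands in for the forest, and the two-class Gaussian "land-cover" data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic "land-cover classes" in a 2-D feature space.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def knn_accuracy(Xtr, ytr, Xte, yte, k):
    correct = 0
    for xi, yi in zip(Xte, yte):
        dist = np.linalg.norm(Xtr - xi, axis=1)
        votes = ytr[np.argsort(dist)[:k]]
        correct += int(np.bincount(votes).argmax() == yi)
    return correct / len(yte)

def cv_score(k, folds=5):
    """Mean K-fold cross-validation accuracy for one parameter setting."""
    idx = rng.permutation(len(y))
    scores = []
    for f in range(folds):
        test = idx[f::folds]
        train = np.setdiff1d(idx, test)
        scores.append(knn_accuracy(X[train], y[train], X[test], y[test], k))
    return float(np.mean(scores))

# The grid search itself: evaluate every candidate, keep the best.
grid = [1, 3, 5, 7, 9]
best_k = max(grid, key=cv_score)
best_acc = cv_score(best_k)
print(f"best k = {best_k}, CV accuracy = {best_acc:.2f}")
```

For a random forest the grid would instead range over tree count and depth, but the select-by-cross-validated-score loop is identical.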

  14. Direct calculation of wall interferences and wall adaptation for two-dimensional flow in wind tunnels with closed walls

    NASA Technical Reports Server (NTRS)

    Amecke, Juergen

    1986-01-01

    A method for the direct calculation of the wall-induced interference velocity in two-dimensional flow, based on Cauchy's integral formula, was derived. This one-step method allows the calculation of the residual corrections and the required wall adaptation for interference-free flow, starting from the wall pressure distribution without any model representation. Example applications are demonstrated.
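The underlying identity is Cauchy's integral formula applied to the complex perturbation velocity, which is analytic in the flow region between the walls. With boundary values known on the walls from the measured pressure distribution, the interior value follows schematically as (a generic statement of the formula, not Amecke's exact working expression):

```latex
w(z) \;=\; \frac{1}{2\pi \mathrm{i}} \oint_{C} \frac{w(\zeta)}{\zeta - z}\, \mathrm{d}\zeta ,
\qquad w(z) = u - \mathrm{i}v,\quad z \ \text{inside the contour } C\ \text{(the tunnel walls)} .
```

Because the integrand needs only wall data, the interference velocity at the model location can be evaluated in one step, with no representation of the model itself.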

  15. An advanced method for classifying atmospheric circulation types based on prototypes connectivity graph

    NASA Astrophysics Data System (ADS)

    Zagouras, Athanassios; Argiriou, Athanassios A.; Flocas, Helena A.; Economou, George; Fotopoulos, Spiros

    2012-11-01

    Classification of weather maps at various isobaric levels has been used for many years as a methodological tool in problems related to meteorology, climatology, atmospheric pollution and other fields. Initially the classification was performed manually. The criteria used by the person performing the classification are features of isobars or isopleths of geopotential height, depending on the type of maps to be classified. Although manual classifications integrate the perceptual experience and other unquantifiable qualities of the meteorology specialists involved, they are typically subjective and time consuming. During the last years, different automated approaches to atmospheric circulation classification have therefore been proposed, presenting so-called objective classifications. In this paper a new method of atmospheric circulation classification of isobaric maps is presented. The method is based on graph theory. It starts with an intelligent prototype selection using an over-partitioning mode of the fuzzy c-means (FCM) algorithm, proceeds to a graph formulation for the entire dataset, and produces the clusters with the contemporary dominant-sets clustering method. Graph theory allows a more efficient representation of spatially correlated data than the classical Euclidean-space representations used in conventional classification methods. The method has been applied to the classification of the 850 hPa atmospheric circulation over the Eastern Mediterranean. The evaluation of the automated method is performed with statistical indexes; results indicate that the classification is adequately comparable with other state-of-the-art automated map classification methods for a variable number of clusters.
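The prototype-selection step, fuzzy c-means, can be sketched in a minimal form (numpy-only; the 2-D toy points stand in for vectorized geopotential-height maps, and the fuzzifier, cluster count, and data are all assumptions):

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and membership matrix."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                   # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m
        centers = (W @ X) / W.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        # Standard FCM membership update: u_ik proportional to d_ik^(-2/(m-1)).
        U = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=0))
    return centers, U

# Toy "circulation patterns": two clusters of 2-D points.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(2, 0.3, (30, 2))])
centers, U = fcm(X, c=2)
```

In the paper, c is deliberately set larger than the expected number of circulation types (over-partitioning), and the resulting centers serve as prototypes for the graph construction rather than as the final clusters.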

  16. [Spatial domain display for interference image dataset].

    PubMed

    Wang, Cai-Ling; Li, Yu-Shan; Liu, Xue-Bin; Hu, Bing-Liang; Jing, Juan-Juan; Wen, Jia

    2011-11-01

    The need for imaging interferometer visualization is pressing for users engaged in image interpretation and information extraction. However, conventional research on visualization focuses only on the spectral image dataset in the spectral domain; quick display of interference spectral image datasets is therefore a key step in interference image processing. The conventional visualization of an interference dataset applies a classical spectral-image display method after a Fourier transformation. In the present paper, the problem of quickly viewing interferometer imagery in the image domain is addressed, and an algorithm is proposed that simplifies the matter. The Fourier transformation is an obstacle because its computation time is large, and the situation deteriorates further as the dataset grows. The proposed algorithm, named interference weighted envelopes, frees the dataset from the transformation. The authors choose three interference weighted envelopes based respectively on the Fourier transformation, the features of the interference data, and the human visual system. Comparing the proposed method with conventional methods shows a large difference in display time.

  17. Interference thinking in constructing students’ knowledge to solve mathematical problems

    NASA Astrophysics Data System (ADS)

    Jayanti, W. E.; Usodo, B.; Subanti, S.

    2018-04-01

    This research aims to describe interference thinking in constructing students' knowledge to solve mathematical problems. Interference thinking in problem solving occurs when two concepts held by a student interfere with each other. The construction of a solution can be traced using Piaget's assimilation and accommodation framework, which helps reveal the student's thinking structures during problem solving. The method of this research was qualitative, with a case-study strategy. The data comprise problem-solving results and transcripts of interviews about students' errors in solving the problems. The results focus on a student who experienced proactive interference, in which old information interferes with the ability to recall new information while solving a problem. Interference thinking in knowledge construction occurs when the student's thinking structures in the assimilation and accommodation process are incomplete. However, after the student was guided to reflect, the thinking process reached an equilibrium condition, even though the result obtained remained wrong.

  18. Instructional Method Classifications Lack User Language and Orientation

    ERIC Educational Resources Information Center

    Neumann, Susanne; Koper, Rob

    2010-01-01

    Following publications emphasizing the need of a taxonomy for instructional methods, this article presents a literature review on classifications for learning and teaching in order to identify possible classifications for instructional methods. Data was collected for 37 classifications capturing the origins, theoretical underpinnings, purposes and…

  19. Best Merge Region Growing with Integrated Probabilistic Classification for Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2011-01-01

    A new method for spectral-spatial classification of hyperspectral images is proposed. The method is based on the integration of probabilistic classification within the hierarchical best merge region growing algorithm. For this purpose, a preliminary probabilistic support vector machines classification is performed. Then, a hierarchical step-wise optimization algorithm is applied, iteratively merging the regions with the smallest Dissimilarity Criterion (DC). The main novelty of this method consists in defining the DC between regions as a function of region statistical and geometrical features along with classification probabilities. Experimental results are presented on a 200-band AVIRIS image of the Northwestern Indiana vegetation area and compared with those obtained by recently proposed spectral-spatial classification techniques. The proposed method improves classification accuracies compared with the other classification approaches.

  20. Evaluation of bilirubin interference and accuracy of six creatinine assays compared with isotope dilution-liquid chromatography mass spectrometry.

    PubMed

    Nah, Hyunjin; Lee, Sang-Guk; Lee, Kyeong-Seob; Won, Jae-Hee; Kim, Hyun Ok; Kim, Jeong-Ho

    2016-02-01

    The aim of this study was to estimate bilirubin interference and accuracy of six routine methods for measuring creatinine compared with isotope dilution-liquid chromatography mass spectrometry (ID-LC/MS). A total of 40 clinical serum samples from 31 patients with serum total bilirubin concentration >68.4μmol/L were collected. Serum creatinine was measured using two enzymatic reagents and four Jaffe reagents as well as ID-LC/MS. Correlations between bilirubin concentration and percent difference in creatinine compared with ID-LC/MS were analyzed to investigate bilirubin interference. Bias estimations between the six reagents and ID-LC/MS were performed. Recovery tests using National Institute of Standards and Technology (NIST) Standard Reference Material (SRM) 967a were also performed. Both the enzymatic methods showed no bilirubin interference. However, three of the four Jaffe methods demonstrated significant bilirubin concentration-dependent interference in samples with creatinine levels <53μmol/L, and two of them showed significant bilirubin interference in samples with creatinine levels ranging from 53.0 to 97.2μmol/L. Comparison of these methods with ID-LC/MS using patients' samples with elevated bilirubin revealed that the tested methods failed to achieve the bias goal at especially low levels of creatinine. In addition, recovery test using NIST SRM 967a showed that bias in one Jaffe method and two enzymatic methods did not achieve the bias goal at either low or high level of creatinine, indicating they had calibration bias. One enzymatic method failed to achieve all the bias goals in both comparison experiment and recovery test. It is important to understand that both bilirubin interference and calibration traceability to ID-LC/MS should be considered to improve the accuracy of creatinine measurement. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
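The percent-difference comparison against the ID-LC/MS reference used to quantify bilirubin interference can be written down directly (the creatinine values below are hypothetical, not taken from the study):

```python
# Percent difference of a routine creatinine result against the ID-LC/MS
# reference value, the quantity correlated with bilirubin concentration
# in the interference analysis.
def percent_difference(routine_umol_l, reference_umol_l):
    return 100.0 * (routine_umol_l - reference_umol_l) / reference_umol_l

# Hypothetical example: a Jaffe result of 72.0 umol/L against a reference
# of 60.0 umol/L corresponds to a +20 % bias.
bias = percent_difference(72.0, 60.0)
print(f"{bias:+.1f} %")
```

A concentration-dependent interference shows up as a systematic trend of this quantity against total bilirubin, which is what the correlation analysis in the study tests for.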

  1. TPH detection in groundwater: Identification and elimination of positive interferences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zemo, D.A.; Synowiec, K.A.

    1996-01-01

    Groundwater assessment programs frequently require total petroleum hydrocarbon (TPH) analyses (Methods 8015M and 418.1). TPH analyses are often unreliable indicators of water quality because these methods are not constituent-specific and are vulnerable to significant sources of positive interferences. These positive interferences include: (a) non-dissolved petroleum constituents; (b) soluble, non-petroleum hydrocarbons (e.g., biodegradation products); and (c) turbidity, commonly introduced into water samples during sample collection. In this paper, we show that the portion of a TPH concentration not directly the result of water-soluble petroleum constituents can be attributed solely to these positive interferences. To demonstrate the impact of these interferences, we conducted a field experiment at a site affected by degraded crude oil. Although TPH was consistently detected in groundwater samples, BTEX was not detected. PNAs were not detected, except for very low concentrations of fluorene (<5 ug/L). Filtering and silica gel cleanup steps were added to sampling and analyses to remove particulates and biogenic by-products. Results showed that filtering lowered the Method 8015M concentrations and reduced the Method 418.1 concentrations to non-detectable. Silica gel cleanup reduced the Method 8015M concentrations to non-detectable. We conclude from this study that the TPH results from groundwater samples are artifacts of positive interferences caused by both particulates and biogenic materials and do not represent dissolved-phase petroleum constituents.

  2. TPH detection in groundwater: Identification and elimination of positive interferences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zemo, D.A.; Synowiec, K.A.

    1996-12-31

    Groundwater assessment programs frequently require total petroleum hydrocarbon (TPH) analyses (Methods 8015M and 418.1). TPH analyses are often unreliable indicators of water quality because these methods are not constituent-specific and are vulnerable to significant sources of positive interferences. These positive interferences include: (a) non-dissolved petroleum constituents; (b) soluble, non-petroleum hydrocarbons (e.g., biodegradation products); and (c) turbidity, commonly introduced into water samples during sample collection. In this paper, we show that the portion of a TPH concentration not directly the result of water-soluble petroleum constituents can be attributed solely to these positive interferences. To demonstrate the impact of these interferences, we conducted a field experiment at a site affected by degraded crude oil. Although TPH was consistently detected in groundwater samples, BTEX was not detected. PNAs were not detected, except for very low concentrations of fluorene (<5 ug/L). Filtering and silica gel cleanup steps were added to sampling and analyses to remove particulates and biogenic by-products. Results showed that filtering lowered the Method 8015M concentrations and reduced the Method 418.1 concentrations to non-detectable. Silica gel cleanup reduced the Method 8015M concentrations to non-detectable. We conclude from this study that the TPH results from groundwater samples are artifacts of positive interferences caused by both particulates and biogenic materials and do not represent dissolved-phase petroleum constituents.

  3. Subspace-based interference removal methods for a multichannel biomagnetic sensor array.

    PubMed

    Sekihara, Kensuke; Nagarajan, Srikantan S

    2017-10-01

    Objective. In biomagnetic signal processing, the theory of the signal subspace has been applied to removing interfering magnetic fields, and a representative algorithm is the signal space projection algorithm, in which the signal/interference subspace is defined in the spatial domain as the span of signal/interference-source lead field vectors. This paper extends the notion of this conventional (spatial domain) signal subspace by introducing a new definition of signal subspace in the time domain. Approach. It defines the time-domain signal subspace as the span of row vectors that contain the source time course values. This definition leads to symmetric relationships between the time-domain and the conventional (spatial-domain) signal subspaces. As a review, this article shows that the notion of the time-domain signal subspace provides useful insights over existing interference removal methods from a unified perspective. Main results and significance. Using the time-domain signal subspace, it is possible to interpret a number of interference removal methods as the time domain signal space projection. Such methods include adaptive noise canceling, sensor noise suppression, the common temporal subspace projection, the spatio-temporal signal space separation, and the recently-proposed dual signal subspace projection. Our analysis using the notion of the time domain signal space projection reveals implicit assumptions these methods rely on, and shows that the difference between these methods results only from the manner of deriving the interference subspace. Numerical examples that illustrate the results of our arguments are provided.

  4. Subspace-based interference removal methods for a multichannel biomagnetic sensor array

    NASA Astrophysics Data System (ADS)

    Sekihara, Kensuke; Nagarajan, Srikantan S.

    2017-10-01

    Objective. In biomagnetic signal processing, the theory of the signal subspace has been applied to removing interfering magnetic fields, and a representative algorithm is the signal space projection algorithm, in which the signal/interference subspace is defined in the spatial domain as the span of signal/interference-source lead field vectors. This paper extends the notion of this conventional (spatial domain) signal subspace by introducing a new definition of signal subspace in the time domain. Approach. It defines the time-domain signal subspace as the span of row vectors that contain the source time course values. This definition leads to symmetric relationships between the time-domain and the conventional (spatial-domain) signal subspaces. As a review, this article shows that the notion of the time-domain signal subspace provides useful insights over existing interference removal methods from a unified perspective. Main results and significance. Using the time-domain signal subspace, it is possible to interpret a number of interference removal methods as the time domain signal space projection. Such methods include adaptive noise canceling, sensor noise suppression, the common temporal subspace projection, the spatio-temporal signal space separation, and the recently-proposed dual signal subspace projection. Our analysis using the notion of the time domain signal space projection reveals implicit assumptions these methods rely on, and shows that the difference between these methods results only from the manner of deriving the interference subspace. Numerical examples that illustrate the results of our arguments are provided.

  5. Parallel interference cancellation for CDMA applications

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Simon, Marvin K. (Inventor); Raphaeli, Dan (Inventor)

    1997-01-01

    The present invention provides a method of decoding a spread spectrum composite signal, the composite signal comprising plural user signals that have been spread with plural respective codes, wherein each coded signal is despread, averaged to produce a signal value, analyzed to produce a tentative decision, respread, and summed with the other respread signals to produce combined interference signals. The method comprises scaling the combined interference signals with a weighting factor to produce a scaled combined interference signal, scaling the composite signal with the weighting factor to produce a scaled composite signal, scaling the signal value by the complement of the weighting factor to produce a leakage signal, and combining the scaled composite signal, the scaled combined interference signal, and the leakage signal to produce an estimate of a respective user signal.
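    The weighted cancellation described in the abstract can be sketched as follows. This is a minimal illustration with two synthetic BPSK users and length-4 spreading codes of my own choosing, not the patented implementation; the weighting factor value is also arbitrary.

```python
# codes for two users (length-4 spreading sequences, chips ±1)
codes = [[1, 1, 1, 1], [1, -1, 1, -1]]
bits = [1, -1]  # transmitted BPSK symbols, one per user

# composite received chip sequence (noise omitted for clarity)
r = [sum(b * c[i] for b, c in zip(bits, codes)) for i in range(4)]

def despread(sig, code):
    # correlate with the code and average over the chips
    return sum(s * c for s, c in zip(sig, code)) / len(code)

def pic_stage(r, codes, lam):
    # one stage of weighted parallel interference cancellation
    d = [despread(r, c) for c in codes]           # signal values
    tent = [1 if x >= 0 else -1 for x in d]       # tentative decisions
    respread = [[t * c[i] for i in range(len(r))] for t, c in zip(tent, codes)]
    est = []
    for k, code in enumerate(codes):
        # combined interference: sum of the other users' respread signals
        interf = [sum(respread[j][i] for j in range(len(codes)) if j != k)
                  for i in range(len(r))]
        # scaled composite minus scaled interference, despread,
        # plus the leakage term (1 - lam) * signal value
        cleaned = despread([lam * (r[i] - interf[i]) for i in range(len(r))], code)
        est.append(cleaned + (1 - lam) * d[k])
    return est

est = pic_stage(r, codes, lam=0.7)
decisions = [1 if e >= 0 else -1 for e in est]
```

    With orthogonal codes and no noise the stage recovers both users' symbols exactly; the weighting factor trades off how aggressively the tentative interference estimates are trusted.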

  6. 7 CFR 28.35 - Method of classification.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Method of classification. 28.35 Section 28.35 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... Classification § 28.35 Method of classification. All cotton samples shall be classified on the basis of the...

  7. A Simple Network to Remove Interference in Surface EMG Signal from Single Gene Affected Phenylketonuria Patients for Proper Diagnosis

    NASA Astrophysics Data System (ADS)

    Mohanty, Madhusmita; Basu, Mousumi; Pattanayak, Deba Narayan; Mohapatra, Sumant Kumar

    2018-04-01

    Autosomal recessive single gene (ARSG) diseases commonly affect children between 5 and 10 years of age. One of the most common ARSG diseases is phenylketonuria (PKU). This single-gene disease is associated with mutations in the gene that encodes the enzyme phenylalanine hydroxylase (PAH, Gene 612349). Because of these mutations, affected patients cannot properly manufacture PAH; as a result, they suffer from decreased muscle tone, which shows up as abnormality in the EMG signal. Extracting a clean PKU-affected EMG (PKU-EMG) signal is therefore of keen interest, so it is highly necessary to remove the superimposed ECG signal as well as biological and instrumental noise. In the present paper we propose a method for detection and classification of the PKU-affected EMG signal. Discrete wavelet transformation is implemented for extraction of the features of the PKU-affected EMG signal, and an adaptive neuro-fuzzy inference system (ANFIS) network is used for classification of the signal. Modified particle swarm optimization (MPSO) and a modified genetic algorithm (MGA) are used to train the ANFIS network. Simulation results show that the proposed method gives better performance than existing approaches, with an accuracy of 98.02% for detection of the PKU-EMG signal. An advantage of the proposed model is the use of MGA and MPSO to train the parameters of the ANFIS network for classification of the ECG and EMG signals of PKU-affected patients. The proposed method obtained high SNR (18.13 ± 0.36 dB), SNR (0.52 ± 1.62 dB), RE (0.02 ± 0.32), MSE (0.64 ± 2.01), CC (0.99 ± 0.02), RMSE (0.75 ± 0.35), and MFRE (0.01 ± 0.02). To the authors' knowledge, this is the first time a composite method has been used for diagnosis of PKU-affected patients. The accuracy (98.02%), sensitivity (100%), and specificity (98.59%) support proper clinical treatment, and readers and researchers may build on this performance in future work.
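    Wavelet-based feature extraction of the kind the abstract describes can be sketched as follows; this assumes a plain Haar wavelet and subband-energy features, since the paper's actual wavelet family, decomposition depth, and feature set are not given here.

```python
import math

def haar_step(x):
    # one level of the Haar DWT: approximation and detail coefficients
    a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def wavelet_features(signal, levels=3):
    # subband energies: a common compact feature vector for EMG analysis
    feats = []
    a = list(signal)
    for _ in range(levels):
        a, d = haar_step(a)
        feats.append(sum(c * c for c in d))  # detail-band energy
    feats.append(sum(c * c for c in a))      # final approximation energy
    return feats

# synthetic stand-in for an EMG segment (length must be divisible by 2**levels)
sig = [math.sin(0.3 * i) + 0.1 * math.sin(2.5 * i) for i in range(64)]
feats = wavelet_features(sig, levels=3)
```

    Because the Haar transform is orthonormal, the subband energies sum to the signal energy, which makes them a well-behaved input for a downstream classifier such as the ANFIS network.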

  8. [Haemolysis and turbidity influence on three analysis methods of quantitative determination of total and conjugated bilirubin on ADVIA 1650].

    PubMed

    Gobert De Paepe, E; Munteanu, G; Schischmanoff, P O; Porquet, D

    2008-01-01

    Plasma bilirubin testing is crucial to prevent the occurrence of neonatal kernicterus. Haemolysis may occur during sampling and interfere with bilirubin determination. Moreover, lipidic infusions may induce plasma lipemia and also interfere with bilirubin measurement. We evaluated the interference of haemolysis and lipemia with three methods of total and direct bilirubin measurement adapted on an Advia 1650 analyser (Siemens Medical Solutions Diagnostics): Synermed (Sofibel), Bilirubin 2 (Siemens), and Bilirubin Auto FS (Diasys). The measurement of total bilirubin was little affected by haemolysis with all three methods; the Bilirubin 2 (Siemens) method was the least sensitive to haemolysis, even at low bilirubin levels. The measurement of conjugated bilirubin was significantly altered by low haemoglobin concentrations for Bilirubin Auto FS (30 µM, or 0.192 g/100 mL, haemoglobin) and for Synermed (60 µM, or 0.484 g/100 mL, haemoglobin). In marked contrast, we found no haemoglobin interference with the Direct Bilirubin 2 reagent, which complied with the method validation criteria of the French Society for Biological Chemistry. Lipemia up to 2 g/L of Ivelip affected neither the measurement of total bilirubin with any of the three methods nor the measurement of conjugated bilirubin with the Diasys and Siemens reagents; however, we observed a strong interference starting at 0.5 g/L of Ivelip with the Synermed reagent. Our data suggest that both the Siemens and Diasys methods allow accurate measurement of total and conjugated bilirubin in haemolytic and lipemic samples; nevertheless, the Siemens methodology is less affected by these interferences.

  9. Characterizing Interference in Radio Astronomy Observations through Active and Unsupervised Learning

    NASA Technical Reports Server (NTRS)

    Doran, G.

    2013-01-01

    In the process of observing signals from astronomical sources, radio astronomers must mitigate the effects of manmade radio sources such as cell phones, satellites, aircraft, and observatory equipment. Radio frequency interference (RFI) often occurs as short bursts (< 1 ms) across a broad range of frequencies, and can be confused with signals from sources of interest such as pulsars. With ever-increasing volumes of data being produced by observatories, automated strategies are required to detect, classify, and characterize these short "transient" RFI events. We investigate an active learning approach in which an astronomer labels events that are most confusing to a classifier, minimizing the human effort required for classification. We also explore the use of unsupervised clustering techniques, which automatically group events into classes without user input. We apply these techniques to data from the Parkes Multibeam Pulsar Survey to characterize several million detected RFI events from over a thousand hours of observation.
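    The active-learning query step described above can be illustrated with a toy uncertainty-sampling selection; the nearest-centroid classifier, 2-D features, and margin criterion below are illustrative stand-ins, not the classifier or features used in the study.

```python
# labeled seed events and an unlabeled pool of 2-D event features (toy data)
labeled = {"rfi": [(0.0, 0.0), (0.2, 0.1)], "pulsar": [(1.0, 1.0), (0.9, 1.1)]}
pool = [(0.5, 0.5), (0.05, 0.1), (0.95, 1.0)]

def centroid(pts):
    # mean of a list of points, per coordinate
    return tuple(sum(c) / len(pts) for c in zip(*pts))

def margin(x, cents):
    # uncertainty = difference between the two smallest class distances;
    # a small margin means the classifier finds the event confusing
    dists = sorted(sum((a - b) ** 2 for a, b in zip(x, c)) ** 0.5
                   for c in cents.values())
    return dists[1] - dists[0]

cents = {k: centroid(v) for k, v in labeled.items()}
# query the pool event that is most confusing, i.e. has the smallest margin
query = min(pool, key=lambda x: margin(x, cents))
```

    The astronomer would label only `query`, the centroids would be updated, and the loop would repeat, which is how labeling effort is concentrated on the most ambiguous RFI events.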

  10. A Comparison of Two-Group Classification Methods

    ERIC Educational Resources Information Center

    Holden, Jocelyn E.; Finch, W. Holmes; Kelley, Ken

    2011-01-01

    The statistical classification of "N" individuals into "G" mutually exclusive groups when the actual group membership is unknown is common in the social and behavioral sciences. The results of such classification methods often have important consequences. Among the most common methods of statistical classification are linear discriminant analysis,…

  11. Infrared differential-absorption Mueller matrix spectroscopy and neural network-based data fusion for biological aerosol standoff detection.

    PubMed

    Carrieri, Arthur H; Copper, Jack; Owens, David J; Roese, Erik S; Bottiger, Jerold R; Everly, Robert D; Hung, Kevin C

    2010-01-20

    An active spectrophotopolarimeter sensor and support system were developed for a military/civilian defense feasibility study concerning the identification and standoff detection of biological aerosols. Plumes of the warfare agent surrogates gamma-irradiated Bacillus subtilis and chicken egg white albumen (analytes), Arizona road dust (terrestrial interferent), water mist (atmospheric interferent), and talcum powders (experiment controls) were dispersed inside windowless chambers and interrogated by multiple CO2 laser beams spanning 9.1-12.0 µm wavelengths (λ). Molecular vibration and vibration-rotation activities of the subject analyte are fundamentally strong within this "fingerprint" middle infrared spectral region. Distinct polarization-modulations of incident irradiance and backscatter radiance of the tuned beams generate the Mueller matrix (M) of the subject aerosol. Strings of all 15 normalized elements {Mij(λ)/M11(λ)}, which completely describe physical and geometric attributes of the aerosol particles, are input fields for training hybrid Kohonen self-organizing map feed-forward artificial neural networks (ANNs). The properly trained and validated ANN model performs pattern recognition and type-classification tasks via internal mappings. A typical ANN that mathematically clusters analyte, interferent, and control aerosols with nil overlap of species is illustrated, including a sensitivity analysis of performance.

  12. ISBDD Model for Classification of Hyperspectral Remote Sensing Imagery

    PubMed Central

    Li, Na; Xu, Zhaopeng; Zhao, Huijie; Huang, Xinchen; Drummond, Jane; Wang, Daming

    2018-01-01

    The diverse density (DD) algorithm was proposed to handle the problem of low classification accuracy when training samples contain interference such as mixed pixels. The DD algorithm can learn a feature vector from training bags, which comprise instances (pixels). However, the feature vector learned by the DD algorithm cannot always effectively represent one type of ground cover. To handle this problem, an instance space-based diverse density (ISBDD) model that employs a novel training strategy is proposed in this paper. In the ISBDD model, DD values of each pixel are computed instead of learning a feature vector, and as a result, the pixel can be classified according to its DD values. Airborne hyperspectral data collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor and the Push-broom Hyperspectral Imager (PHI) are applied to evaluate the performance of the proposed model. Results show that the overall classification accuracy of ISBDD model on the AVIRIS and PHI images is up to 97.65% and 89.02%, respectively, while the kappa coefficient is up to 0.97 and 0.88, respectively. PMID:29510547

  13. Model-based Bayesian signal extraction algorithm for peripheral nerves

    NASA Astrophysics Data System (ADS)

    Eggers, Thomas E.; Dweiri, Yazan M.; McCallum, Grant A.; Durand, Dominique M.

    2017-10-01

    Objective. Multi-channel cuff electrodes have recently been investigated for extracting fascicular-level motor commands from mixed neural recordings. Such signals could provide volitional, intuitive control over a robotic prosthesis for amputee patients. Recent work has demonstrated success in extracting these signals in acute and chronic preparations using spatial filtering techniques. These extracted signals, however, had low signal-to-noise ratios, which limited their utility to binary classification. In this work a new algorithm is proposed that combines previous source localization approaches to create a model-based method that operates in real time. Approach. To validate this algorithm, a saline benchtop setup was created to allow the precise placement of artificial sources within a cuff and interference sources outside the cuff. The artificial source was taken from five seconds of chronic neural activity to replicate realistic recordings. The proposed algorithm, hybrid Bayesian signal extraction (HBSE), is then compared to previous algorithms, beamforming and a Bayesian spatial filtering method, on this test data. An example chronic neural recording is also analyzed with all three algorithms. Main results. The proposed algorithm improved the signal-to-noise and signal-to-interference ratios of extracted test signals two- to threefold, and increased the correlation coefficient between the original and recovered signals by 10-20%. These improvements translated to the chronic recording example and increased the calculated bit rate between the recovered signals and the recorded motor activity. Significance. HBSE significantly outperforms previous algorithms in extracting realistic neural signals, even in the presence of external noise sources.
These results demonstrate the feasibility of extracting dynamic motor signals from a multi-fascicled intact nerve trunk, which in turn could extract motor command signals from an amputee for the end goal of controlling a prosthetic limb.

  14. A new hierarchical method for inter-patient heartbeat classification using random projections and RR intervals

    PubMed Central

    2014-01-01

    Background The inter-patient classification schema and the Association for the Advancement of Medical Instrumentation (AAMI) standards are important to the construction and evaluation of automated heartbeat classification systems. The majority of previously proposed methods that take the above two aspects into consideration use the same features and classification method to classify different classes of heartbeats. The performance of the classification system is often unsatisfactory with respect to the ventricular ectopic beat (VEB) and supraventricular ectopic beat (SVEB). Methods Based on the different characteristics of VEB and SVEB, a novel hierarchical heartbeat classification system was constructed. This was done in order to improve the classification performance of these two classes of heartbeats by using different features and classification methods. First, random projection and support vector machine (SVM) ensemble were used to detect VEB. Then, the ratio of the RR interval was compared to a predetermined threshold to detect SVEB. The optimal parameters for the classification models were selected on the training set and used in the independent testing set to assess the final performance of the classification system. Meanwhile, the effect of different lead configurations on the classification results was evaluated. Results Results showed that the performance of this classification system was notably superior to that of other methods. The VEB detection sensitivity was 93.9% with a positive predictive value of 90.9%, and the SVEB detection sensitivity was 91.1% with a positive predictive value of 42.2%. In addition, this classification process was relatively fast. Conclusions A hierarchical heartbeat classification system was proposed based on the inter-patient data division to detect VEB and SVEB. It demonstrated better classification performance than existing methods. 
It can be regarded as a promising system for detecting VEB and SVEB of unknown patients in clinical practice. PMID:24981916
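    The two-stage decision logic described in the Methods can be sketched as follows; the VEB score is a stand-in for the paper's random-projection SVM ensemble, and the thresholds are illustrative, not the values selected on the training set.

```python
def classify_beat(veb_score, rr_ratio, veb_thresh=0.5, rr_thresh=0.8):
    # Stage 1: a VEB detector score (stand-in for the random-projection
    # SVM ensemble) flags ventricular ectopic beats first.
    if veb_score >= veb_thresh:
        return "VEB"
    # Stage 2: a premature beat shortens the current RR interval, so a small
    # RR ratio (current RR / preceding RR) suggests a supraventricular ectopic.
    if rr_ratio < rr_thresh:
        return "SVEB"
    return "normal"

# three synthetic beats: an obvious VEB, a premature SVEB, a normal beat
labels = [classify_beat(0.9, 1.0), classify_beat(0.1, 0.6), classify_beat(0.1, 1.0)]
```

    Routing each class to the feature and classifier best suited to it is the core of the hierarchical design: VEBs are caught by the morphology-driven stage, and the cheap RR-ratio test then separates SVEBs from normal beats.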

  15. Cell segmentation in phase contrast microscopy images via semi-supervised classification over optics-related features.

    PubMed

    Su, Hang; Yin, Zhaozheng; Huh, Seungil; Kanade, Takeo

    2013-10-01

    Phase-contrast microscopy is one of the most common and convenient imaging modalities for observing long-term multi-cellular processes; it generates images by the interference of light passing through transparent specimens and the background medium with different retarded phases. Despite many years of study, computer-aided analysis of cell behavior in phase contrast microscopy is challenged by image quality and artifacts caused by phase contrast optics. Addressing these unsolved challenges, the authors propose (1) a phase contrast microscopy image restoration method that produces phase retardation features, which are intrinsic features of phase contrast microscopy, and (2) a semi-supervised learning based algorithm for cell segmentation, which is a fundamental task for various cell behavior analyses. Specifically, the image formation process of phase contrast microscopy images is first computationally modeled with a dictionary of diffraction patterns; as a result, each pixel of a phase contrast microscopy image is represented by a linear combination of the bases, which we call phase retardation features. Images are then partitioned into phase-homogeneous atoms by clustering neighboring pixels with similar phase retardation features. Consequently, cell segmentation is performed via a semi-supervised classification technique over the phase-homogeneous atoms. Experiments demonstrate that the proposed approach produces quality segmentation of individual cells and outperforms previous approaches.

  16. Physiological sensor signals classification for healthcare using sensor data fusion and case-based reasoning.

    PubMed

    Begum, Shahina; Barua, Shaibal; Ahmed, Mobyen Uddin

    2014-07-03

    Today, clinicians often diagnose and classify diseases based on information collected from several physiological sensor signals. However, sensor signals are easily corrupted by noise and interference, and, because of large individual variations, sensitivity to different physiological sensors can also vary. Multiple-sensor signal fusion is therefore valuable for providing more robust and reliable decisions. This paper demonstrates a physiological sensor signal classification approach using sensor signal fusion and case-based reasoning. The proposed approach has been evaluated on the task of classifying individuals as stressed or relaxed. Physiological sensor signals, i.e., heart rate (HR), finger temperature (FT), respiration rate (RR), carbon dioxide (CO2), and oxygen saturation (SpO2), were collected during the data collection phase. Sensor fusion was done in two different ways: (i) decision-level fusion using features extracted through traditional approaches; and (ii) data-level fusion using features extracted by means of multivariate multiscale entropy (MMSE). Case-based reasoning (CBR) is applied for the classification of the signals. The experimental results show that the proposed system classified individuals as stressed or relaxed with 87.5% accuracy compared with an expert in the domain. It thus shows promising results in the psychophysiological domain, and it could be possible to adapt this approach to other relevant healthcare systems.
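    The decision-level fusion route can be illustrated with a simple weighted vote across per-sensor classifications; this is a generic sketch of decision fusion, not the CBR machinery the paper uses, and the channel outputs and weights are invented for the example.

```python
def decision_fusion(sensor_decisions, weights=None):
    # weighted vote over per-sensor class decisions ("stressed"/"relaxed");
    # equal weights by default, but a more trusted channel can count for more
    weights = weights or [1.0] * len(sensor_decisions)
    score = {}
    for d, w in zip(sensor_decisions, weights):
        score[d] = score.get(d, 0.0) + w
    return max(score, key=score.get)

# five physiological channels: HR, FT, RR, CO2, SpO2 (hypothetical outputs)
fused = decision_fusion(["stressed", "stressed", "relaxed", "stressed", "relaxed"])
```

    Data-level fusion, by contrast, would concatenate features (e.g., MMSE values) from all channels into one vector before any classifier sees them; the vote above only combines the channels' final decisions.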

  17. Ecological risk assessment of cheese whey effluents along a medium-sized river in southwest Greece.

    PubMed

    Karadima, Constantina; Theodoropoulos, Chris; Rouvalis, Angela; Iliopoulou-Georgudaki, Joan

    2010-01-01

    An ecological risk assessment of cheese whey effluents was applied at three critical sampling sites located on the Vouraikos river (southwest Greece), while ecological classification using Water Framework Directive 2000/60/EU criteria allowed a direct comparison of toxicological and ecological data. Two invertebrates (Daphnia magna and Thamnocephalus platyurus) and the zebrafish Danio rerio were used for toxicological analyses, while the aquatic risk was calculated on the basis of the risk quotient (RQ = PEC/PNEC). Chemical classification of the sites was carried out using the Nutrient Classification System, while benthic invertebrates were collected and analyzed for biological classification. Toxicological results revealed the heavy pollution load of the two sites nearest to the point pollution source, as the PEC/PNEC ratio exceeded 1.0, while, unexpectedly, no risk was detected for the most downstream site, owing to the intervening riparian flora. These toxicological results were in agreement with the ecological analysis: the ecological quality of the two heavily impacted sites ranged from moderate to bad, whereas it was good for the most downstream site. The results of the study indicate major ecological risk for almost 15 km downstream of the point pollution source and the potential for water quality remediation by the riparian vegetation, proving the significance of its maintenance.

  18. Automated Decision Tree Classification of Corneal Shape

    PubMed Central

    Twa, Michael D.; Parthasarathy, Srinivasan; Roberts, Cynthia; Mahmoud, Ashraf M.; Raasch, Thomas W.; Bullimore, Mark A.

    2011-01-01

    Purpose The volume and complexity of data produced during videokeratography examinations present a challenge of interpretation. As a consequence, results are often analyzed qualitatively by subjective pattern recognition or reduced to comparisons of summary indices. We describe the application of decision tree induction, an automated machine learning classification method, to discriminate between normal and keratoconic corneal shapes in an objective and quantitative way. We then compared this method with other known classification methods. Methods The corneal surface was modeled with a seventh-order Zernike polynomial for 132 normal eyes of 92 subjects and 112 eyes of 71 subjects diagnosed with keratoconus. A decision tree classifier was induced using the C4.5 algorithm, and its classification performance was compared with the modified Rabinowitz–McDonnell index, Schwiegerling’s Z3 index (Z3), Keratoconus Prediction Index (KPI), KISA%, and Cone Location and Magnitude Index using recommended classification thresholds for each method. We also evaluated the area under the receiver operator characteristic (ROC) curve for each classification method. Results Our decision tree classifier performed equal to or better than the other classifiers tested: accuracy was 92% and the area under the ROC curve was 0.97. Our decision tree classifier reduced the information needed to distinguish between normal and keratoconus eyes using four of 36 Zernike polynomial coefficients. The four surface features selected as classification attributes by the decision tree method were inferior elevation, greater sagittal depth, oblique toricity, and trefoil. Conclusions Automated decision tree classification of corneal shape through Zernike polynomials is an accurate quantitative method of classification that is interpretable and can be generated from any instrument platform capable of raw elevation data output. 
This method of pattern classification is extendable to other classification problems. PMID:16357645
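    An induced tree over the four selected surface features might look like the hand-written stand-in below; the thresholds and branch order are purely illustrative, not the rules learned by C4.5 in the study, and the feature units are left abstract.

```python
def keratoconus_tree(inferior_elev, sag_depth, toricity, trefoil):
    # Hand-written stand-in for an induced decision tree over the four
    # Zernike-derived features named in the abstract. Thresholds here are
    # hypothetical; a real tree would be induced from labeled corneal maps.
    if inferior_elev > 0.35:
        return "keratoconus"          # inferior elevation alone suffices
    if sag_depth > 0.50 and toricity > 0.20:
        return "keratoconus"          # deep, toric surface
    if trefoil > 0.30:
        return "keratoconus"          # marked trefoil aberration
    return "normal"

labels = [keratoconus_tree(0.6, 0.2, 0.1, 0.0),
          keratoconus_tree(0.1, 0.1, 0.05, 0.02)]
```

    The appeal of such a tree, as the abstract notes, is interpretability: each classification is an explicit chain of threshold tests on named surface features rather than an opaque score.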

  19. Wall Interference in Two-Dimensional Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Kemp, William B., Jr.

    1986-01-01

    Viscosity and tunnel-wall constraints introduced via boundary conditions. TWINTN4 computer program developed to implement method of posttest assessment of wall interference in two-dimensional wind tunnels. Offers two methods for combining sidewall boundary-layer effects with upper and lower wall interference. In sequential procedure, Sewall method used to define flow free of sidewall effects, then assessed for upper and lower wall effects. In unified procedure, wind-tunnel flow equations altered to incorporate effects from all four walls at once. Program written in FORTRAN IV for batch execution.

  20. Combining DCQGMP-Based Sparse Decomposition and MPDR Beamformer for Multi-Type Interferences Mitigation for GNSS Receivers.

    PubMed

    Guo, Qiang; Qi, Liangang

    2017-04-10

    In the coexistence of multiple types of interfering signals, the performance of interference suppression methods based on the time and frequency domains is seriously degraded, and techniques using an antenna array require a sufficiently large array size and entail high hardware costs. To combat multi-type interferences better for GNSS receivers, this paper proposes a cascaded multi-type interference mitigation method combining improved double chain quantum genetic matching pursuit (DCQGMP)-based sparse decomposition and an MPDR beamformer. The key idea behind the proposed method is that the multiple types of interfering signals can be excised by taking advantage of their sparse features in different domains. In the first stage, the single-tone (multi-tone) and linear chirp interfering signals are canceled by sparse decomposition according to their sparsity in the over-complete dictionary. To improve the timeliness of matching pursuit (MP)-based sparse decomposition, a DCQGMP is introduced by combining an improved double chain quantum genetic algorithm (DCQGA) with the MP algorithm, and the DCQGMP algorithm is extended to handle multi-channel signals according to the correlation among the signals in different channels. In the second stage, the minimum power distortionless response (MPDR) beamformer is utilized to nullify the residual interferences (e.g., wideband Gaussian noise interferences). Several simulation results show that the proposed method not only improves the interference-mitigation degrees of freedom (DoF) of the array antenna, but also effectively deals with interference arriving from the same direction as the GNSS signal, provided it can be sparsely represented in the over-complete dictionary. Moreover, it does not introduce serious distortions into the navigation signal.
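    The MPDR beamformer used in the second stage can be sketched for a two-element array; the array geometry, interferer direction, interference power, and covariance model below are illustrative assumptions, not the paper's simulation setup.

```python
import cmath
import math

def steering(theta, n=2, d=0.5):
    # steering vector of an n-element uniform linear array, spacing d wavelengths
    return [cmath.exp(-2j * math.pi * d * k * math.sin(theta)) for k in range(n)]

def inv2(m):
    # inverse of a 2x2 complex matrix
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / det, -m[0][1] / det], [-m[1][0] / det, m[0][0] / det]]

def mpdr_weights(R, a):
    # w = R^-1 a / (a^H R^-1 a): minimize output power subject to
    # unity (distortionless) gain toward the desired direction a
    Ri = inv2(R)
    Ria = [Ri[0][0] * a[0] + Ri[0][1] * a[1], Ri[1][0] * a[0] + Ri[1][1] * a[1]]
    denom = a[0].conjugate() * Ria[0] + a[1].conjugate() * Ria[1]
    return [x / denom for x in Ria]

a_sig = steering(0.0)   # desired GNSS signal from broadside (assumed)
a_int = steering(0.6)   # interferer direction (assumed)
# idealized covariance: strong interferer (power 100) plus unit noise floor
R = [[100 * a_int[i] * a_int[j].conjugate() + (1 if i == j else 0)
      for j in range(2)] for i in range(2)]
w = mpdr_weights(R, a_sig)
gain_sig = abs(sum(wi.conjugate() * ai for wi, ai in zip(w, a_sig)))
gain_int = abs(sum(wi.conjugate() * ai for wi, ai in zip(w, a_int)))
```

    The distortionless constraint keeps the gain toward the GNSS direction at exactly one, while the power-minimization objective drives a deep null toward the strong interferer, which is why MPDR suits the residual wideband interference left after sparse decomposition.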

  1. Combining DCQGMP-Based Sparse Decomposition and MPDR Beamformer for Multi-Type Interferences Mitigation for GNSS Receivers

    PubMed Central

    Guo, Qiang; Qi, Liangang

    2017-01-01

    In the coexistence of multiple types of interfering signals, the performance of interference suppression methods based on the time and frequency domains is seriously degraded, and techniques using an antenna array require a sufficiently large array size and entail high hardware costs. To combat multi-type interferences better for GNSS receivers, this paper proposes a cascaded multi-type interference mitigation method combining improved double chain quantum genetic matching pursuit (DCQGMP)-based sparse decomposition and an MPDR beamformer. The key idea behind the proposed method is that the multiple types of interfering signals can be excised by taking advantage of their sparse features in different domains. In the first stage, the single-tone (multi-tone) and linear chirp interfering signals are canceled by sparse decomposition according to their sparsity in the over-complete dictionary. To improve the timeliness of matching pursuit (MP)-based sparse decomposition, a DCQGMP is introduced by combining an improved double chain quantum genetic algorithm (DCQGA) with the MP algorithm, and the DCQGMP algorithm is extended to handle multi-channel signals according to the correlation among the signals in different channels. In the second stage, the minimum power distortionless response (MPDR) beamformer is utilized to nullify the residual interferences (e.g., wideband Gaussian noise interferences). Several simulation results show that the proposed method not only improves the interference-mitigation degrees of freedom (DoF) of the array antenna, but also effectively deals with interference arriving from the same direction as the GNSS signal, provided it can be sparsely represented in the over-complete dictionary. Moreover, it does not introduce serious distortions into the navigation signal. PMID:28394290

  2. Urban Image Classification: Per-Pixel Classifiers, Sub-Pixel Analysis, Object-Based Image Analysis, and Geospatial Methods. 10; Chapter

    NASA Technical Reports Server (NTRS)

    Myint, Soe W.; Mesev, Victor; Quattrochi, Dale; Wentz, Elizabeth A.

    2013-01-01

    Remote sensing methods used to generate base maps to analyze the urban environment rely predominantly on digital sensor data from space-borne platforms. This is due in part to new sources of high spatial resolution data covering the globe, a variety of multispectral and multitemporal sources, sophisticated statistical and geospatial methods, and compatibility with GIS data sources and methods. The goal of this chapter is to review the four groups of classification methods for digital sensor data from space-borne platforms: per-pixel, sub-pixel, object-based (spatial-based), and geospatial methods. Per-pixel methods are widely used methods that classify pixels into distinct categories based solely on the spectral and ancillary information within that pixel. They are used for everything from simple calculations of environmental indices (e.g., NDVI) to sophisticated expert systems that assign urban land covers. Researchers recognize, however, that even with the smallest pixel size the spectral information within a pixel is really a combination of multiple urban surfaces. Sub-pixel classification methods therefore aim to statistically quantify the mixture of surfaces to improve overall classification accuracy. While within-pixel variations exist, there is also significant evidence that groups of nearby pixels have similar spectral information and therefore belong to the same classification category. Object-oriented methods have emerged that group pixels prior to classification based on spectral similarity and spatial proximity. Classification accuracy using object-based methods shows significant success and promise for numerous urban applications. Like the object-oriented methods that recognize the importance of spatial proximity, geospatial methods for urban mapping also utilize neighboring pixels in the classification process. The primary difference, though, is that geostatistical methods (e.g., spatial autocorrelation methods) are utilized during both the pre- and post-classification steps. Within this chapter, each of the four approaches is described in terms of scale and accuracy in classifying urban land use and urban land cover, and for its range of urban applications. We provide an overview of the four main classification groups in Figure 1, while Table 1 details the approaches with respect to classification requirements and procedures (e.g., reflectance conversion, steps before training sample selection, training samples, spatial approaches commonly used, classifiers, primary inputs for classification, output structures, number of output layers, and accuracy assessment). The chapter concludes with a brief summary of the methods reviewed and the challenges that remain in developing new classification methods for improving the efficiency and accuracy of mapping urban areas.

  3. [Study of near infrared spectral preprocessing and wavelength selection methods for endometrial cancer tissue].

    PubMed

    Zhao, Li-Ting; Xiang, Yu-Hong; Dai, Yin-Mei; Zhang, Zhuo-Yong

    2010-04-01

    Near infrared spectroscopy was applied to measure tissue slices of endometrial tissue and collect their spectra. A total of 154 spectra were obtained from 154 samples; the numbers of normal, hyperplasia, and malignant samples were 36, 60, and 58, respectively. Original near infrared spectra contain many variables, including interference from instrument errors and from physical effects such as particle size and light scatter. To reduce these influences, the original spectra should be subjected to preprocessing to compress variables and extract useful information, so spectral preprocessing and wavelength selection play an important role in the near infrared spectroscopy technique. In the present paper the raw spectra were processed with various preprocessing methods, including first derivative, multiplicative scatter correction, the Savitzky-Golay first-derivative algorithm, standard normal variate, smoothing, and moving-window median. Standard deviation was used to select the optimal spectral region of 4 000-6 000 cm(-1). Principal component analysis was then used for classification, and its results showed that the three types of samples could be discriminated completely, with accuracy approaching 100%. This study demonstrated that near infrared spectroscopy combined with chemometric methods could be a fast, efficient, and novel means to diagnose cancer, and the proposed methods would be a promising and significant technique for diagnosing early-stage cancer.
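
The preprocess-then-PCA pipeline described above can be sketched in a few lines. The SNV step and SVD-based PCA below are generic stand-ins, and the two synthetic spectral "classes" are invented for illustration; nothing here reproduces the paper's actual data or preprocessing chain:

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: center and scale each spectrum (row-wise)."""
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

def pca_scores(spectra, n_components=2):
    """Project mean-centered spectra onto the leading principal components."""
    X = spectra - spectra.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T

# Toy data: two spectral "classes" differing in band shape, with random
# multiplicative scatter that SNV removes before PCA.
rng = np.random.default_rng(0)
wav = np.linspace(0.0, 3.0, 100)
scale_a = rng.uniform(0.5, 1.5, (10, 1))
scale_b = rng.uniform(0.5, 1.5, (10, 1))
class_a = scale_a * np.sin(wav) + rng.normal(0, 0.01, (10, 100))
class_b = scale_b * np.sin(wav + 0.4) + rng.normal(0, 0.01, (10, 100))
scores = pca_scores(snv(np.vstack([class_a, class_b])))
```

In practice the Savitzky-Golay derivative, the 4 000-6 000 cm(-1) window selection, and the other preprocessing variants would be compared on the real spectra before inspecting the score plot.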

  4. Effective classification of the prevalence of Schistosoma mansoni.

    PubMed

    Mitchell, Shira A; Pagano, Marcello

    2012-12-01

    To present an effective classification method based on the prevalence of Schistosoma mansoni in the community, we created decision rules (defined by cut-offs for the number of positive slides) which account for imperfect sensitivity, both with a simple adjustment for fixed sensitivity and with a more complex adjustment for sensitivity that changes with prevalence. To reduce screening costs while maintaining accuracy, we propose a pooled classification method. To estimate sensitivity, we use the De Vlas model for worm and egg distributions. We compare the proposed method with the standard method to investigate differences in efficiency, measured by the number of slides read, and accuracy, measured by the probability of correct classification. Modelling varying sensitivity lowers the lower cut-off more significantly than the upper cut-off, so that regions are correctly classified as moderate rather than low and thus receive life-saving treatment. The pooled method classifies directly on the basis of positive pools, avoiding the need to know sensitivity in order to estimate prevalence. For model parameter values describing worm and egg distributions among children, the pooled method with 25 slides achieves an expected 89.9% probability of correct classification, whereas the standard method with 50 slides achieves 88.7%. Among children, it is therefore more efficient and more accurate to use the pooled method for classification of S. mansoni prevalence than the current standard method. © 2012 Blackwell Publishing Ltd.
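
The pooled decision rule can be sketched as follows; the cut-offs, pool size, and the simple binomial sensitivity model are illustrative assumptions, not the paper's fitted De Vlas parameters:

```python
def pool_positive_probability(prevalence, pool_size, sensitivity=0.9):
    """Probability that a pooled slide is positive when individuals are
    infected independently at the given prevalence, with imperfect test
    sensitivity (a simplification of the De Vlas egg-count model)."""
    return sensitivity * (1.0 - (1.0 - prevalence) ** pool_size)

def classify_by_pools(n_positive_pools, low_cut=4, high_cut=12):
    """Decision rule on the positive-pool count; the two cut-offs separate
    low / moderate / high prevalence classes (illustrative values)."""
    if n_positive_pools <= low_cut:
        return "low"
    if n_positive_pools <= high_cut:
        return "moderate"
    return "high"
```

For example, with 25 pools of 2 slides at 20% prevalence, the expected positive-pool count is 25 * pool_positive_probability(0.2, 2) ≈ 8.1, which this rule labels moderate.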

  5. Detection of cracks on concrete surfaces by hyperspectral image processing

    NASA Astrophysics Data System (ADS)

    Santos, Bruno O.; Valença, Jonatas; Júlio, Eduardo

    2017-06-01

    All large infrastructures worldwide must have a suitable monitoring and maintenance plan, aiming to evaluate their behaviour and predict timely interventions. In the particular case of concrete infrastructures, the detection and characterization of crack patterns is a major indicator of their structural response. In this scope, methods based on image processing have been applied and presented. Usually, methods focus on image binarization followed by applications of mathematical morphology to identify cracks on concrete surface. In most cases, publications are focused on restricted areas of concrete surfaces and in a single crack. On-site, the methods and algorithms have to deal with several factors that interfere with the results, namely dirt and biological colonization. Thus, the automation of a procedure for on-site characterization of crack patterns is of great interest. This advance may result in an effective tool to support maintenance strategies and interventions planning. This paper presents a research based on the analysis and processing of hyper-spectral images for detection and classification of cracks on concrete structures. The objective of the study is to evaluate the applicability of several wavelengths of the electromagnetic spectrum for classification of cracks in concrete surfaces. An image survey considering highly discretized wavelengths between 425 nm and 950 nm was performed on concrete specimens, with bandwidths of 25 nm. The concrete specimens were produced with a crack pattern induced by applying a load with displacement control. The tests were conducted to simulate usual on-site drawbacks. In this context, the surface of the specimen was subjected to biological colonization (leaves and moss). To evaluate the results and enhance crack patterns a clustering method, namely k-means algorithm, is being applied. 
The research conducted makes it possible to assess the suitability of the k-means clustering algorithm, combined with highly discretized hyper-spectral images, for crack detection on concrete surfaces in the presence of the most common concrete anomalies, namely biological colonization.
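
A minimal sketch of the clustering step, with three synthetic pixel "classes" (crack, background, moss) over six wavelength bands; the per-class seeding of the cluster centers is a sketch shortcut, not the paper's initialization:

```python
import numpy as np

def kmeans(X, init_centers, iters=25):
    """Lloyd's k-means on pixel feature vectors X of shape (n, bands)."""
    centers = init_centers.copy()
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(len(centers)):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Toy pixels over 6 bands: cracks dark, background bright,
# biological colonization (moss) intermediate.
rng = np.random.default_rng(1)
crack = rng.normal(0.10, 0.02, (50, 6))
background = rng.normal(0.80, 0.02, (200, 6))
moss = rng.normal(0.45, 0.02, (80, 6))
X = np.vstack([crack, background, moss])
# Seed with one pixel per expected class (a sketch shortcut; k-means++ or
# random restarts would be used in practice).
labels, _ = kmeans(X, X[[0, 50, 250]])
```

On real hyper-spectral data the pixel vectors would hold the reflectance at each of the discretized 425-950 nm bands rather than these synthetic values.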

  6. The phase interrogation method for optical fiber sensor by analyzing the fork interference pattern

    NASA Astrophysics Data System (ADS)

    Lv, Riqing; Qiu, Liqiang; Hu, Haifeng; Meng, Lu; Zhang, Yong

    2018-02-01

    A phase interrogation method for optical fiber sensors is proposed, based on the fork interference pattern between an orbital angular momentum beam and a plane wave. The variation of the interference pattern with the phase difference between the two light beams is investigated to realize the phase interrogation. By employing the principal component analysis method, the features of the interference pattern can be extracted. Moreover, an experimental system is designed to verify the theoretical analysis, as well as the feasibility of phase interrogation. In this work, a Mach-Zehnder interferometer was employed to convert the strain applied to the sensing fiber into a phase difference between the reference and measuring paths. This interrogation method is also applicable to the measurement of other physical parameters that produce a phase delay in optical fiber. The performance of the system can be further improved by employing highly sensitive materials and fiber structures.

  7. Suppression of AC railway power-line interference in ECG signals recorded by public access defibrillators

    PubMed Central

    Dotsinsky, Ivan

    2005-01-01

    Background Public access defibrillators (PADs) are now available for more efficient and rapid treatment of out-of-hospital sudden cardiac arrest. PADs are normally used by untrained people on the streets and in sports centers, airports, and other public areas. Therefore, automated detection of ventricular fibrillation, or its exclusion, is of high importance. A special case exists at railway stations, where electric power-line frequency interference is significant. Many countries, especially in Europe, use 16.7 Hz AC power, which introduces high-level frequency-varying interference that may compromise fibrillation detection. Method Moving signal averaging is often used for 50/60 Hz interference suppression if its effect on the ECG spectrum is of little importance (no morphological analysis is performed). This approach may also be applied to the railway situation if the interference frequency is continuously detected so as to synchronize the analog-to-digital conversion (ADC), introducing variable inter-sample intervals. A better solution consists of constant-rate ADC, software frequency measurement, internal irregular re-sampling according to the interference frequency, and a moving average over a constant number of samples, followed by regular back re-sampling. Results The proposed method leads to total cancellation of the railway interference, together with suppression of inherent noise, while the peak amplitudes of some sharp complexes are reduced. This reduction has a negligible effect on accurate fibrillation detection. Conclusion The method is developed in the MATLAB environment and represents a useful tool for real-time railway interference suppression. PMID:16309558
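
The re-sample / moving-average / back-resample chain can be sketched as below, with linear interpolation standing in for the synchronized ADC; only the 16.7 Hz figure is taken from the paper:

```python
import numpy as np

def suppress_ac_interference(sig, fs, f_int=16.7):
    """Suppress a power-line component by re-sampling so one interference
    period covers an integer number of samples, moving-averaging over
    exactly that window, and re-sampling back to the original grid."""
    n = int(round(fs / f_int))        # samples per period after re-sampling
    fs2 = n * f_int                   # adjusted sampling rate
    t = np.arange(len(sig)) / fs
    t2 = np.arange(int(len(sig) * fs2 / fs)) / fs2
    resampled = np.interp(t2, t, sig)
    averaged = np.convolve(resampled, np.ones(n) / n, mode="same")
    return np.interp(t, t2, averaged)

# A pure 16.7 Hz tone is cancelled almost completely (edges excluded).
fs = 500.0
t = np.arange(int(fs)) / fs
tone = np.sin(2 * np.pi * 16.7 * t)
residual = suppress_ac_interference(tone, fs)
```

Because the averaging window spans exactly one interference period, the 16.7 Hz component cancels almost completely, while low-frequency ECG content passes with only slight attenuation.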

  8. Using an interference spectrum as a short-range absolute rangefinder with fiber and wideband source

    NASA Astrophysics Data System (ADS)

    Hsieh, Tsung-Han; Han, Pin

    2018-06-01

    Recently, a new type of displacement instrument using spectral interference has been introduced, which utilizes fiber and a wideband light source to produce an interference spectrum. In this work, we develop a method that measures the absolute air-gap distance by taking the wavelengths at two interference-spectrum minima. The experimental results agree with the theoretical calculations. The method is also utilized to produce and control the spectral switch, which is much easier than previous methods using other control mechanisms. A scanning mode of this scheme for stepped-surface measurement is suggested and verified by a standard thickness gauge test. Our scheme differs from one available on the market that may use a curve-fitting method, and some comparisons are made between the two.
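
For the two-minima idea, a textbook two-beam relation gives a closed form; this is a hedged sketch, and the paper's exact scheme may differ in detail:

```python
def air_gap_from_adjacent_minima(lam1, lam2):
    """Absolute air gap from two ADJACENT interference-spectrum minima with
    lam1 > lam2 (same units). For two-beam interference in an air gap the
    minima satisfy 2*d = (m + 1/2)*lam, so consecutive orders m and m+1
    give d = lam1*lam2 / (2*(lam1 - lam2)). (A textbook relation, not
    necessarily the paper's exact formula.)"""
    return lam1 * lam2 / (2.0 * (lam1 - lam2))

# Minima observed at 1550 nm and 1540 nm imply a gap of about 119.35 µm.
gap_nm = air_gap_from_adjacent_minima(1550.0, 1540.0)
```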

  9. Plant species classification using flower images—A comparative study of local feature representations

    PubMed Central

    Seeland, Marco; Rzanny, Michael; Alaqraa, Nedal; Wäldchen, Jana; Mäder, Patrick

    2017-01-01

    Steady improvements in image description methods have induced a growing interest in image-based plant species classification, a task vital to the study of biodiversity and ecological sensitivity. Various techniques have been proposed for general object classification over the past years, and several of them have already been studied for plant species classification. However, the results of these studies are selective in the evaluated steps of a classification pipeline, in the datasets utilized for evaluation, and in the baseline methods compared. No study is available that evaluates the main competing methods for building an image representation on the same datasets, allowing for generalized findings regarding flower-based plant species classification. The aim of this paper is to comparatively evaluate methods, method combinations, and their parameters with respect to classification accuracy. The investigated methods span detection, extraction, fusion, pooling, and encoding of local features for quantifying shape and color information of flower images. We selected the flower image datasets Oxford Flower 17 and Oxford Flower 102 as well as our own Jena Flower 30 dataset for our experiments. Findings show large differences among the various studied techniques and that a wisely chosen orchestration of them allows for high accuracies in species classification. We further found that true local feature detectors in combination with advanced encoding methods yield higher classification results at lower computational costs compared to commonly used dense sampling and spatial pooling methods. Color was found to be an indispensable feature for high classification results, especially while preserving spatial correspondence to gray-level features. As a result, our study provides a comprehensive overview of competing techniques and the implications of their main parameters for flower-based plant species classification. PMID:28234999

  10. Topography of hidden objects using THz digital holography with multi-beam interferences.

    PubMed

    Valzania, Lorenzo; Zolliker, Peter; Hack, Erwin

    2017-05-15

    We present a method for the separation of the signal scattered from an object hidden behind a THz-transparent sample in the framework of THz digital holography in reflection. It combines three images of different interference patterns to retrieve the amplitude and phase distribution of the object beam. Comparison of simulated with experimental images obtained from a metallic resolution target behind a Teflon plate demonstrates that the interference patterns can be described in the simple form of three-beam interference. Holographic reconstructions after the application of the method show a considerable improvement compared to standard reconstructions exclusively based on Fourier transform phase retrieval.
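
The three-pattern combination is reminiscent of classical three-step phase shifting, which can serve as a hedged sketch here; the paper's actual three-beam geometry differs in detail:

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    """Object-beam phase from three interferograms with phase shifts of
    0, 2*pi/3 and 4*pi/3 between object and reference beams (the textbook
    three-step formula, not the paper's exact three-beam combination)."""
    return np.arctan2(np.sqrt(3.0) * (i3 - i2), 2.0 * i1 - i2 - i3)

# Round trip on synthetic fringes: I_k = A + B*cos(phi + delta_k).
phi = 0.7
deltas = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
i1, i2, i3 = 2.0 + 1.0 * np.cos(phi + deltas)
recovered = three_step_phase(i1, i2, i3)
```

The formula follows from I3 - I2 = sqrt(3)*B*sin(phi) and 2*I1 - I2 - I3 = 3*B*cos(phi), so the unknown amplitude B cancels in the arctangent.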

  11. Voltage control on a train system

    DOEpatents

    Gordon, Susanna P.; Evans, John A.

    2004-01-20

    The present invention provides methods for preventing low train voltages and managing interference, thereby improving the efficiency, reliability, and passenger comfort associated with commuter trains. An algorithm implementing neural network technology is used to predict low voltages before they occur. Once low voltages are predicted, multiple trains can be controlled to prevent low voltage events. Further, algorithms for managing interference are presented in the present invention. Different types of interference problems are addressed, such as "Interference During Acceleration", "Interference Near Station Stops", and "Interference During Delay Recovery." Managing such interference avoids unnecessary brake/acceleration cycles during acceleration, immediately before station stops, and after substantial delays. Algorithms are demonstrated to avoid oscillatory brake/acceleration cycles due to interference and to smooth the trajectories of closely following trains. This is achieved by maintaining sufficient following distances to avoid unnecessary braking/accelerating. These methods generate smooth train trajectories, making for a more comfortable ride, and improve train motor reliability by avoiding unnecessary mode changes between propulsion and braking. These algorithms can also have a favorable impact on traction power system requirements and energy consumption.

  12. Efficient high density train operations

    DOEpatents

    Gordon, Susanna P.; Evans, John A.

    2001-01-01

    The present invention provides methods for preventing low train voltages and managing interference, thereby improving the efficiency, reliability, and passenger comfort associated with commuter trains. An algorithm implementing neural network technology is used to predict low voltages before they occur. Once low voltages are predicted, multiple trains can be controlled to prevent low voltage events. Further, algorithms for managing interference are presented in the present invention. Different types of interference problems are addressed, such as "Interference During Acceleration", "Interference Near Station Stops", and "Interference During Delay Recovery." Managing such interference avoids unnecessary brake/acceleration cycles during acceleration, immediately before station stops, and after substantial delays. Algorithms are demonstrated to avoid oscillatory brake/acceleration cycles due to interference and to smooth the trajectories of closely following trains. This is achieved by maintaining sufficient following distances to avoid unnecessary braking/accelerating. These methods generate smooth train trajectories, making for a more comfortable ride, and improve train motor reliability by avoiding unnecessary mode changes between propulsion and braking. These algorithms can also have a favorable impact on traction power system requirements and energy consumption.

  13. Comparison of Feature Selection Techniques in Machine Learning for Anatomical Brain MRI in Dementia.

    PubMed

    Tohka, Jussi; Moradi, Elaheh; Huttunen, Heikki

    2016-07-01

    We present a comparative split-half resampling analysis of various data-driven feature selection and classification methods for whole-brain voxel-based classification analysis of anatomical magnetic resonance images. We compared support vector machines (SVMs), with or without filter-based feature selection, several embedded feature selection methods, and stability selection. While comparisons of the accuracy of various classification methods have been reported previously, the variability of the out-of-training-sample classification accuracy and of the set of selected features due to independent training and test sets had not previously been addressed in a brain imaging context. We studied two classification problems: 1) Alzheimer's disease (AD) vs. normal control (NC) and 2) mild cognitive impairment (MCI) vs. NC classification. In AD vs. NC classification, the variability in test accuracy due to the subject sample did not differ between methods and exceeded the variability due to different classifiers. In MCI vs. NC classification, particularly with a large training set, embedded feature selection methods outperformed SVM-based ones, with the difference in test accuracy exceeding the test accuracy variability due to the subject sample. The filter and embedded methods produced divergent feature patterns for MCI vs. NC classification, which, together with the good generalization performance, suggests the utility of embedded feature selection for this problem. The stability of the feature sets was strongly correlated with the number of features selected, weakly correlated with the stability of classification accuracy, and uncorrelated with the average classification accuracy.

  14. New wideband radar target classification method based on neural learning and modified Euclidean metric

    NASA Astrophysics Data System (ADS)

    Jiang, Yicheng; Cheng, Ping; Ou, Yangkui

    2001-09-01

    A new method for target classification with high-range-resolution radar is proposed. It uses neural learning to obtain invariant subclass features of training range profiles. A modified Euclidean metric based on the Box-Cox transformation technique is investigated for improving Nearest Neighbor target classification. Classification experiments using real radar data from three different aircraft demonstrated that the classification error can be reduced by 8% when the proposed method is chosen instead of the conventional method. The results show that, by choosing an optimized metric, it is indeed possible to reduce the classification error without increasing the number of samples.
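
A sketch of the modified-metric nearest-neighbour step; the Box-Cox exponent and the toy range profiles are illustrative (the paper optimizes the transformation on training data, and the neural-learning feature stage is omitted here):

```python
import numpy as np

def boxcox(x, lam):
    """Box-Cox transform for positive amplitudes."""
    return np.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def nn_classify(query, templates, labels, lam=0.5):
    """Nearest-neighbour classification with a Box-Cox-modified Euclidean
    metric: amplitudes are transformed before the distance is taken
    (lam would be optimized on training profiles in practice)."""
    q = boxcox(np.asarray(query, dtype=float), lam)
    dists = [np.linalg.norm(q - boxcox(np.asarray(t, dtype=float), lam))
             for t in templates]
    return labels[int(np.argmin(dists))]

# Toy range profiles for two hypothetical aircraft classes.
templates = [np.array([1.0, 1.0, 4.0]), np.array([9.0, 9.0, 1.0])]
label = nn_classify([1.2, 0.9, 3.8], templates, ["type-A", "type-B"])
```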

  15. New method for the direct determination of dissolved Fe(III) concentration in acid mine waters

    USGS Publications Warehouse

    To, T.B.; Nordstrom, D. Kirk; Cunningham, K.M.; Ball, J.W.; McCleskey, R. Blaine

    1999-01-01

    A new method for direct determination of dissolved Fe(III) in acid mine water has been developed. In most present methods, Fe(III) is determined by computing the difference between total dissolved Fe and dissolved Fe(II). For acid mine waters, frequently Fe(II) >> Fe(III); thus, accuracy and precision are considerably improved by determining the Fe(III) concentration directly. The new method utilizes two selective ligands to stabilize Fe(III) and Fe(II), thereby preventing changes in the Fe reduction-oxidation distribution. Complexed Fe(II) is cleanly removed using a silica-based, reversed-phase adsorbent, yielding excellent isolation of the Fe(III) complex. Iron(III) concentration is measured colorimetrically or by graphite furnace atomic absorption spectrometry (GFAAS). The method requires inexpensive commercial reagents and simple procedures that can be used in the field. Calcium(II), Ni(II), Pb(II), Al(III), Zn(II), and Cd(II) cause insignificant colorimetric interferences for most acid mine waters. Waters containing >20 mg of Cu/L could cause a colorimetric interference and should be measured by GFAAS. Cobalt(II) and Cr(III) interfere if their molar ratios to Fe(III) exceed 24 and 5, respectively. Iron(II) interferes when its concentration exceeds the capacity of the complexing ligand (14 mg/L). Because of the elemental specificity of GFAAS, only Fe(II) is a potential interferent in the GFAAS technique. The method detection limit is 2 µg/L (40 nM) using GFAAS and 20 µg/L (0.4 µM) by colorimetry.

  16. BIOTIN INTERFERENCE WITH ROUTINE CLINICAL IMMUNOASSAYS: UNDERSTAND THE CAUSES AND MITIGATE THE RISKS.

    PubMed

    Samarasinghe, Shanika; Meah, Farah; Singh, Vinita; Basit, Arshi; Emanuele, Nicholas; Emanuele, Mary Ann; Mazhari, Alaleh; Holmes, Earle W

    2017-08-01

    The objectives of this report are to review the mechanisms of biotin interference with streptavidin/biotin-based immunoassays, identify automated immunoassay systems vulnerable to biotin interference, describe how to estimate and minimize the risk of biotin interference in vulnerable assays, and review the literature pertaining to biotin interference in endocrine function tests. The data in the manufacturer's "Instructions for Use" for each of the methods utilized by seven immunoassay systems were evaluated. We also conducted a systematic search of PubMed/MEDLINE for articles containing terms associated with biotin interference. Available original reports and case series were reviewed. Abstracts from recent scientific meetings were also identified and reviewed. The recent, marked increase in the use of over-the-counter, high-dose biotin supplements has been accompanied by a steady increase in the number of reports of analytical interference by exogenous biotin in the immunoassays used to evaluate endocrine function. Since immunoassay methods of similar design are also used for the diagnosis and management of anemia, malignancies, autoimmune and infectious diseases, cardiac damage, etc., biotin-related analytical interference is a problem that touches every area of internal medicine. It is important for healthcare personnel to become more aware of immunoassay methods that are vulnerable to biotin interference and to consider biotin supplements as potential sources of falsely increased or decreased test results, especially in cases where a lab result does not correlate with the clinical scenario. FDA = U.S. Food & Drug Administration; FT3 = free tri-iodothyronine; FT4 = free thyroxine; IFUs = instructions for use; LH = luteinizing hormone; PTH = parathyroid hormone; SA/B = streptavidin/biotin; TFT = thyroid function test; TSH = thyroid-stimulating hormone.

  17. A Method for Dynamically Selecting the Best Frequency Hopping Technique in Industrial Wireless Sensor Network Applications

    PubMed Central

    Fernández de Gorostiza, Erlantz; Mabe, Jon

    2018-01-01

    Industrial wireless applications often share the communication channel with other wireless technologies and communication protocols. This coexistence produces interferences and transmission errors which require appropriate mechanisms to manage retransmissions. Nevertheless, these mechanisms increase the network latency and overhead due to the retransmissions. Thus, the loss of data packets and the measures to handle them produce an undesirable drop in the QoS and hinder the overall robustness and energy efficiency of the network. Interference avoidance mechanisms, such as frequency hopping techniques, reduce the need for retransmissions due to interferences but they are often tailored to specific scenarios and are not easily adapted to other use cases. On the other hand, the total absence of interference avoidance mechanisms introduces a security risk because the communication channel may be intentionally attacked and interfered with to hinder or totally block it. In this paper we propose a method for supporting the design of communication solutions under dynamic channel interference conditions and we implement dynamic management policies for frequency hopping technique and channel selection at runtime. The method considers several standard frequency hopping techniques and quality metrics, and the quality and status of the available frequency channels to propose the best combined solution to minimize the side effects of interferences. A simulation tool has been developed and used in this work to validate the method. PMID:29473910

  18. A Method for Dynamically Selecting the Best Frequency Hopping Technique in Industrial Wireless Sensor Network Applications.

    PubMed

    Fernández de Gorostiza, Erlantz; Berzosa, Jorge; Mabe, Jon; Cortiñas, Roberto

    2018-02-23

    Industrial wireless applications often share the communication channel with other wireless technologies and communication protocols. This coexistence produces interferences and transmission errors which require appropriate mechanisms to manage retransmissions. Nevertheless, these mechanisms increase the network latency and overhead due to the retransmissions. Thus, the loss of data packets and the measures to handle them produce an undesirable drop in the QoS and hinder the overall robustness and energy efficiency of the network. Interference avoidance mechanisms, such as frequency hopping techniques, reduce the need for retransmissions due to interferences but they are often tailored to specific scenarios and are not easily adapted to other use cases. On the other hand, the total absence of interference avoidance mechanisms introduces a security risk because the communication channel may be intentionally attacked and interfered with to hinder or totally block it. In this paper we propose a method for supporting the design of communication solutions under dynamic channel interference conditions and we implement dynamic management policies for frequency hopping technique and channel selection at runtime. The method considers several standard frequency hopping techniques and quality metrics, and the quality and status of the available frequency channels to propose the best combined solution to minimize the side effects of interferences. A simulation tool has been developed and used in this work to validate the method.
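
A hedged sketch of runtime channel ranking in the spirit of the method above; the quality score and its weights are invented for illustration, not taken from the paper:

```python
def select_hopping_channels(channel_stats, n_channels):
    """Keep the n_channels best channels for the hopping sequence, ranking
    by a combined quality score (packet-error rate plus normalized observed
    interference power; the weighting is illustrative, not the paper's)."""
    def score(stats):
        per, noise_dbm = stats
        return per + (noise_dbm + 100.0) / 100.0
    ranked = sorted(channel_stats.items(), key=lambda kv: score(kv[1]))
    return [ch for ch, _ in ranked[:n_channels]]

# Channel -> (packet-error rate, observed noise floor in dBm).
stats = {11: (0.50, -70.0), 12: (0.01, -95.0),
         13: (0.02, -90.0), 14: (0.40, -60.0)}
best = select_hopping_channels(stats, 2)
```

Re-running the selection periodically, as the paper's dynamic management policies do, lets the hopping set track changing interference conditions.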

  19. Near-Field Noise Source Localization in the Presence of Interference

    NASA Astrophysics Data System (ADS)

    Liang, Guolong; Han, Bo

    In order to suppress the influence of interference sources on noise source localization in the near field, near-field broadband source localization in the presence of interference is studied. An oblique projection is constructed from the array measurements and the steering manifold of the interference sources, and is used to filter the interference signals out. The 2D-MUSIC algorithm is applied to the data at each frequency, and the results across frequencies are then averaged to achieve positioning of the broadband noise sources. Simulations show that this method suppresses the interference sources effectively and is capable of locating a source that lies in the same direction as an interference source.
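
The interference-nulling step can be sketched with a projector; here an orthogonal projector onto the complement of the interference steering subspace stands in for the paper's oblique projection, and the 2D-MUSIC stage is omitted:

```python
import numpy as np

def remove_interference(snapshots, B):
    """Project array snapshots onto the orthogonal complement of the
    interference steering subspace B (columns = interference steering
    vectors). An orthogonal projector is used here as a simplified
    stand-in for the oblique projection; 2D-MUSIC would then run on the
    projected data."""
    P = np.eye(B.shape[0]) - B @ np.linalg.pinv(B)
    return P @ snapshots

# A snapshot matrix consisting purely of the interference signal is
# driven to (numerically) zero by the projection.
b = np.array([[1.0], [1.0], [1.0], [1.0]])
interference = b @ np.ones((1, 8))          # 4 sensors, 8 snapshots
cleaned = remove_interference(interference, b)
```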

  20. A Bayesian taxonomic classification method for 16S rRNA gene sequences with improved species-level accuracy.

    PubMed

    Gao, Xiang; Lin, Huaiying; Revanna, Kashi; Dong, Qunfeng

    2017-05-10

    Species-level classification for 16S rRNA gene sequences remains a serious challenge for microbiome researchers, because existing taxonomic classification tools for 16S rRNA gene sequences either do not provide species-level classification, or their classification results are unreliable. The unreliable results are due to the limitations in the existing methods which either lack solid probabilistic-based criteria to evaluate the confidence of their taxonomic assignments, or use nucleotide k-mer frequency as the proxy for sequence similarity measurement. We have developed a method that shows significantly improved species-level classification results over existing methods. Our method calculates true sequence similarity between query sequences and database hits using pairwise sequence alignment. Taxonomic classifications are assigned from the species to the phylum levels based on the lowest common ancestors of multiple database hits for each query sequence, and further classification reliabilities are evaluated by bootstrap confidence scores. The novelty of our method is that the contribution of each database hit to the taxonomic assignment of the query sequence is weighted by a Bayesian posterior probability based upon the degree of sequence similarity of the database hit to the query sequence. Our method does not need any training datasets specific for different taxonomic groups. Instead only a reference database is required for aligning to the query sequences, making our method easily applicable for different regions of the 16S rRNA gene or other phylogenetic marker genes. Reliable species-level classification for 16S rRNA or other phylogenetic marker genes is critical for microbiome research. Our software shows significantly higher classification accuracy than the existing tools and we provide probabilistic-based confidence scores to evaluate the reliability of our taxonomic classification assignments based on multiple database matches to query sequences. 
Despite its higher computational costs, our method is still suitable for analyzing large-scale microbiome datasets for practical purposes. Furthermore, our method can be applied for taxonomic classification of any phylogenetic marker gene sequences. Our software, called BLCA, is freely available at https://github.com/qunfengdong/BLCA.
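
The weighted-vote idea behind the method can be sketched as follows; the similarity weights stand in for the paper's Bayesian posterior probabilities, and the bootstrap confidence estimation is reduced to a simple weight share:

```python
from collections import defaultdict

def classify_at_rank(hits, rank):
    """Weight each database hit's taxonomy by a similarity-derived weight
    (a stand-in for the paper's Bayesian posterior) and return the taxon
    with the largest total weight at the requested rank, with its share of
    the total weight as a rough confidence (the paper uses bootstrapping)."""
    totals = defaultdict(float)
    for taxonomy, weight in hits:
        totals[taxonomy[rank]] += weight
    best = max(totals, key=totals.get)
    return best, totals[best] / sum(totals.values())

# Hypothetical alignment hits for one query sequence.
hits = [({"genus": "Bacillus", "species": "B. subtilis"}, 0.99),
        ({"genus": "Bacillus", "species": "B. licheniformis"}, 0.62),
        ({"genus": "Paenibacillus", "species": "P. polymyxa"}, 0.39)]
genus, support = classify_at_rank(hits, "genus")
```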

  1. Comparing K-mer based methods for improved classification of 16S sequences.

    PubMed

    Vinje, Hilde; Liland, Kristian Hovde; Almøy, Trygve; Snipen, Lars

    2015-07-01

    The need for precise and stable taxonomic classification is highly relevant in modern microbiology. Parallel to the explosion in the amount of accessible sequence data, there has also been a shift in focus for classification methods. Previously, alignment-based methods were the most applicable tools; now, methods based on counting K-mers in sliding windows are the most interesting classification approach with respect to both speed and accuracy. Here, we present a systematic comparison of five different K-mer based classification methods for the 16S rRNA gene. The methods differ from each other in both data usage and modelling strategies. We based our study on the commonly known and well-used naïve Bayes classifier from the RDP project; four other methods were implemented and tested on two different data sets, on full-length sequences as well as on fragments of typical read length. The differences in classification error between the methods were small, but they were stable across both data sets tested. The preprocessed nearest-neighbour (PLSNN) method performed best for full-length 16S rRNA sequences, significantly better than the naïve Bayes RDP method. On fragmented sequences the naïve Bayes Multinomial method performed best, significantly better than all other methods. For both data sets explored, on both full-length and fragmented sequences, all five methods reached an error plateau. We conclude that no K-mer based method is universally best for classifying both full-length sequences and fragments (reads). All methods approach an error plateau, indicating that improved training data are needed to improve classification from here. Classification errors occur most frequently for genera with few sequences present. For improving the taxonomy and testing new classification methods, a better, more universal, and more robust training data set is crucial.
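    The naïve Bayes Multinomial approach over K-mer counts can be sketched as follows. This is a generic illustration of the idea, not the RDP implementation; the training sequences, labels, and k value are toy choices.

```python
import math
from collections import Counter

def kmers(seq, k=4):
    """All overlapping k-mers of a sequence (sliding window of width k)."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

class KmerNB:
    """Multinomial naive Bayes over K-mer counts."""

    def fit(self, seqs, labels, k=4):
        self.k = k
        self.counts = {}   # label -> Counter of k-mer occurrences
        self.vocab = set()
        for seq, lab in zip(seqs, labels):
            c = self.counts.setdefault(lab, Counter())
            c.update(kmers(seq, k))
            self.vocab.update(c)
        self.totals = {lab: sum(c.values()) for lab, c in self.counts.items()}
        return self

    def predict(self, seq):
        """Pick the label maximizing the Laplace-smoothed log-likelihood."""
        best, best_lp = None, -math.inf
        v = len(self.vocab)
        for lab, c in self.counts.items():
            lp = sum(math.log((c[km] + 1) / (self.totals[lab] + v))
                     for km in kmers(seq, self.k))
            if lp > best_lp:
                best, best_lp = lab, lp
        return best

model = KmerNB().fit(["ACGTACGTACGT" * 3, "TTTTGGGGCCCC" * 3], ["A", "B"])
```

    Because only k-mer counts enter the model, the same classifier applies unchanged to full-length sequences and to read-length fragments, which is what makes the fragment comparison in the paper possible.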

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Jia; Zhang, Ziang; Weng, Zhankun

    This paper presents a new method for the generation of cross-scale laser interference patterns and the fabrication of moth-eye structures on silicon. In the method, moth-eye structures were produced on the surface of a silicon wafer using direct six-beam laser interference lithography to improve the antireflection performance of the material surface. The periodic dot arrays of the moth-eye structures were formed by ablation under the irradiance distribution of the interference patterns on the wafer surface. The shape, size, and distribution of the moth-eye structures can be adjusted by controlling the wavelength, incidence angles, and exposure doses in the direct six-beam laser interference lithography setup. The theoretical and experimental results show that direct six-beam laser interference lithography provides a way to fabricate cross-scale moth-eye structures for antireflection applications.
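    The irradiance distribution driving the ablation can be modeled, under a simplified scalar assumption that ignores polarization, as the intensity of N equal-amplitude plane waves incident at one polar angle and equally spaced azimuthal angles; the wavelength and angle below are illustrative values, not the paper's setup parameters.

```python
import cmath
import math

def interference_intensity(x, y, n_beams=6, wavelength=1064e-9,
                           theta=math.radians(10)):
    """Scalar intensity of n_beams coherent plane waves at point (x, y)
    on the wafer plane.  Each beam arrives at polar angle theta with
    azimuthal angle 2*pi*n/n_beams, so its transverse phase is
    k*sin(theta)*(x*cos(phi) + y*sin(phi))."""
    k_t = 2 * math.pi / wavelength * math.sin(theta)  # transverse wavenumber
    field = sum(
        cmath.exp(1j * k_t * (x * math.cos(phi) + y * math.sin(phi)))
        for phi in (2 * math.pi * n / n_beams for n in range(n_beams))
    )
    return abs(field) ** 2
```

    Changing the wavelength or incidence angle rescales the pattern period through k_t, which is the adjustability the abstract describes; the exposure dose then sets which intensity contours ablate.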

  3. Some new classification methods for hyperspectral remote sensing

    NASA Astrophysics Data System (ADS)

    Du, Pei-jun; Chen, Yun-hao; Jones, Simon; Ferwerda, Jelle G.; Chen, Zhi-jun; Zhang, Hua-peng; Tan, Kun; Yin, Zuo-xia

    2006-10-01

    Hyperspectral Remote Sensing (HRS) is one of the most significant recent achievements of Earth observation technology, and classification is its most commonly employed processing methodology. In this paper three new hyperspectral RS image classification methods are analyzed: object-oriented HRS image classification, HRS image classification based on information fusion, and HRS image classification by Back Propagation Neural Network (BPNN). An OMIS HRS image is used as the example data. Object-oriented techniques have gained popularity for RS image classification in recent years. In this method, image segmentation first extracts regions from the pixel information based on homogeneity criteria; spectral parameters such as the mean vector, texture, and NDVI, and spatial/shape parameters such as aspect ratio, convexity, solidity, roundness, and orientation are then calculated for each region; finally, the image is classified using the region feature vectors and a suitable classifier such as an artificial neural network (ANN). Experiments show that object-oriented methods can improve classification accuracy, since they utilize information and features from both the pixel and its neighborhood, and the processing unit is a polygon (in which all pixels are homogeneous and belong to the same class). HRS image classification based on information fusion initially divides all bands of the image into different groups, and extracts features from every group according to the properties of each group. Three levels of information fusion (data-level, feature-level, and decision-level fusion) are applied to HRS image classification. An Artificial Neural Network (ANN) can perform well in RS image classification. To advance the use of ANNs for HRS image classification, the Back Propagation Neural Network (BPNN), the most commonly used neural network, is applied to HRS image classification.
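    A region feature vector of the kind used in object-oriented classification might be computed as follows; this sketch keeps only a mean spectral vector plus a bounding-box aspect ratio, a small illustrative subset of the spectral and shape parameters listed above.

```python
def region_features(pixels):
    """Per-region features for object-oriented classification.

    `pixels` is a list of (row, col, spectrum) tuples for one segmented
    region.  Returns the mean spectral vector concatenated with the
    bounding-box aspect ratio (>= 1) as a single feature vector.
    """
    n = len(pixels)
    bands = len(pixels[0][2])
    mean_vec = [sum(p[2][b] for p in pixels) / n for b in range(bands)]
    rows = [p[0] for p in pixels]
    cols = [p[1] for p in pixels]
    h = max(rows) - min(rows) + 1
    w = max(cols) - min(cols) + 1
    aspect = max(h, w) / min(h, w)
    return mean_vec + [aspect]
```

    The resulting per-region vectors, rather than individual pixels, are what the classifier (e.g. an ANN) consumes, which is why every pixel of a region receives the same label.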

  4. Evaluation of different distortion correction methods and interpolation techniques for an automated classification of celiac disease☆

    PubMed Central

    Gadermayr, M.; Liedlgruber, M.; Uhl, A.; Vécsei, A.

    2013-01-01

    Due to the optics used in endoscopes, a typical degradation observed in endoscopic images is barrel-type distortion. In this work we investigate the impact of methods used to correct such distortions on classification accuracy in the context of automated celiac disease classification. For this purpose we compare various distortion correction methods and apply them to endoscopic images, which are subsequently classified. Since the interpolation used in such methods is also assumed to influence the resulting classification accuracy, we also investigate different interpolation methods and their impact on classification performance. To be able to make solid statements about the benefit of distortion correction, we use several different feature extraction methods to obtain features for the classification. Our experiments show that it is not possible to make a clear statement about the usefulness of distortion correction methods in the context of an automated diagnosis of celiac disease. This is mainly because any benefit of distortion correction depends strongly on the feature extraction method used for the classification. PMID:23981585
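    One simple radial model that a barrel-distortion correction might use is the single-parameter division model, sketched below for a single coordinate; the paper compares several correction methods, not this one specifically, and the interpolation question it studies arises when resampling pixel values at the corrected (non-integer) positions.

```python
def undistort_point(x, y, k1, cx=0.0, cy=0.0):
    """Map a distorted image coordinate to its corrected position with the
    single-parameter division model:

        x_u = cx + (x - cx) / (1 + k1 * r^2)

    where r is the distance from the distortion center (cx, cy) and k1 > 0
    models barrel distortion.  Resampling the image at these positions
    (nearest-neighbour, bilinear, ...) is where interpolation enters.
    """
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 / (1.0 + k1 * r2)
    return cx + dx * scale, cy + dy * scale
```

    The center is a fixed point of the model, and points farther from the center are pulled inward more strongly, which is exactly the barrel geometry being undone.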

  5. Measurement of the configuration of a concave surface by the interference of reflected light

    NASA Technical Reports Server (NTRS)

    Kumazawa, T.; Sakamoto, T.; Shida, S.

    1985-01-01

    A method whereby a concave surface is irradiated with coherent light and the resulting interference fringes yield information on the concave surface is described. The method can be applied to a surface that satisfies the following conditions: (1) the concave face has a mirror surface; (2) the profile of the face is expressed by a mathematical function with a point of inflection. In this interferometry, multiple light waves reflected from the concave surface interfere and form fringes wherever the reflected light propagates, and the interference fringe orders yield the optical path differences. Photographs of the fringe patterns for a uniformly loaded thin silicon plate clamped at the edge are presented. The experimental and theoretical values of the maximum optical path difference show good agreement. This simple method can be applied to obtain accurate information on concave surfaces.

  6. Consensus Classification Using Non-Optimized Classifiers.

    PubMed

    Brownfield, Brett; Lemos, Tony; Kalivas, John H

    2018-04-03

    Classifying samples into categories is a common problem in analytical chemistry and other fields. Classification is usually based on only one method, but numerous classifiers are available, some complex, such as neural networks, and others simple, such as k nearest neighbors. Regardless, most classification schemes require optimization of one or more tuning parameters for best classification accuracy, sensitivity, and specificity. A process not requiring exact selection of tuning parameter values would be useful. To improve classification, several ensemble approaches have been used in past work to combine classification results from multiple optimized single classifiers. The collection of classifications for a particular sample is then combined by a fusion process, such as majority vote, to form the final classification. Presented in this Article is a method to classify a sample by combining multiple classification methods without optimizing each method individually, that is, the classification methods are not tuned. The approach is demonstrated on three analytical data sets. The first is a beer authentication set with samples measured on five instruments, allowing fusion of multiple instruments in three ways. The second data set is composed of textile samples from three classes based on Raman spectra. This data set is used to demonstrate the ability to classify simultaneously with different data preprocessing strategies, thereby reducing the need to determine the ideal preprocessing method, a common prerequisite for accurate classification. The third data set contains three wine cultivars measured on 13 unique chemical and physical variables. In all cases, fusion of non-optimized classifiers improves classification. Also presented are atypical uses of Procrustes analysis and extended inverted signal correction (EISC) for distinguishing sample similarities to the respective classes.
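    The majority-vote fusion step mentioned above can be sketched as follows; the tie-breaking rule (first-seen label wins) is an arbitrary choice for the sketch.

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse class labels from several classifiers by majority vote.
    `predictions` is the list of labels the individual classifiers
    assigned to one sample; ties are broken by first-seen order."""
    counts = Counter(predictions)
    top = max(counts.values())
    for label in predictions:  # preserve input order on ties
        if counts[label] == top:
            return label
```

    With non-optimized classifiers the individual votes may each be mediocre, but as long as their errors are not strongly correlated the fused decision tends to be better than any single vote.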

  7. Study on Ecological Risk Assessment of Guangxi Coastal Zone Based on 3s Technology

    NASA Astrophysics Data System (ADS)

    Zhong, Z.; Luo, H.; Ling, Z. Y.; Huang, Y.; Ning, W. Y.; Tang, Y. B.; Shao, G. Z.

    2018-05-01

    This paper takes the Guangxi coastal zone as the study area. Following land-use type standards, the coastal ecological landscape is divided into seven landscape types, such as woodland, farmland, grassland, water, urban land, and wetland. Using TM data from the 15 years 2000-2015 and the CART decision tree algorithm, the remote sensing image characteristics of the landscape types are analyzed and decision tree rules for landscape classification are built to extract classification information. By analyzing the evolution of the landscape pattern in the Guangxi coastal zone over these 15 years, the distribution characteristics and change rules can be understood. Combined with natural disaster data, landscape indices and the related risk interference degree are used to construct an ecological risk evaluation model of the Guangxi coastal zone, yielding ecological risk assessment results for the zone.

  8. Optimal weighted averaging of event related activity from acquisitions with artifacts.

    PubMed

    Vollero, Luca; Petrichella, Sara; Innello, Giulio

    2016-08-01

    In several biomedical applications that require signal processing of biological data, the starting procedure for noise reduction is the ensemble averaging of multiple repeated acquisitions (trials). This method is based on the assumption that each trial is composed of two additive components: (i) a time-locked activity related to some sensitive/stimulation phenomenon (ERA, Event Related Activity in the following) and (ii) a sum of several other non-time-locked background activities. The averaging aims at estimating the ERA under a very low Signal to Noise and Interference Ratio (SNIR). Although averaging is a well-established tool, its performance can be improved in the presence of high-power disturbances (artifacts) by a trial classification and removal stage. In this paper we propose, model, and evaluate a new approach that avoids trial removal, managing trials classified as artifact-free and artifact-prone with two different weights. Based on the model, a weight tuning is possible, and through modeling and simulations we show that, when optimally configured, the proposed solution outperforms classical approaches.
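    The two-weight averaging idea can be sketched as follows; the weight values here are illustrative defaults, not the optimized values derived from the paper's model.

```python
def weighted_average(trials, flags, w_clean=1.0, w_artifact=0.2):
    """Weighted ensemble average of trials: artifact-prone trials are kept
    but down-weighted instead of being removed.

    `trials` is a list of equal-length sample sequences; `flags[i]` is True
    if trial i was classified as artifact-prone.  Returns the per-sample
    weighted average."""
    weights = [w_artifact if f else w_clean for f in flags]
    total = sum(weights)
    n = len(trials[0])
    return [sum(w * tr[j] for w, tr in zip(weights, trials)) / total
            for j in range(n)]
```

    Setting w_artifact = 0 recovers the classical classify-and-remove scheme, and w_artifact = w_clean recovers plain ensemble averaging, so the two classical approaches are the endpoints of the weight-tuning axis.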

  9. High-throughput, label-free, single-cell, microalgal lipid screening by machine-learning-equipped optofluidic time-stretch quantitative phase microscopy.

    PubMed

    Guo, Baoshan; Lei, Cheng; Kobayashi, Hirofumi; Ito, Takuro; Yalikun, Yaxiaer; Jiang, Yiyue; Tanaka, Yo; Ozeki, Yasuyuki; Goda, Keisuke

    2017-05-01

    The development of reliable, sustainable, and economical sources of alternative fuels to petroleum is required to tackle the global energy crisis. One such alternative is microalgal biofuel, which is expected to play a key role in reducing the detrimental effects of global warming, as microalgae absorb atmospheric CO2 via photosynthesis. Unfortunately, conventional analytical methods only provide population-averaged lipid amounts and fail to characterize a diverse population of microalgal cells with single-cell resolution in a non-invasive and interference-free manner. Here, high-throughput label-free single-cell screening of lipid-producing microalgal cells with optofluidic time-stretch quantitative phase microscopy was demonstrated. In particular, Euglena gracilis, an attractive microalgal species that produces wax esters (suitable for biodiesel and aviation fuel after refinement) within lipid droplets, was investigated. The optofluidic time-stretch quantitative phase microscope is based on an integration of a hydrodynamic-focusing microfluidic chip, an optical time-stretch quantitative phase microscope, and a digital image processor equipped with machine learning. As a result, it provides both the opacity and phase maps of every single cell at a high throughput of 10,000 cells/s, enabling accurate cell classification without the need for fluorescent staining. Specifically, the dataset was used to characterize heterogeneous populations of E. gracilis cells under two different culture conditions (nitrogen-sufficient and nitrogen-deficient) and achieve cell classification with an error rate of only 2.15%. The method holds promise as an effective analytical tool for microalgae-based biofuel production. © 2017 International Society for Advancement of Cytometry.

  10. Thin-film thickness measurement method based on the reflection interference spectrum

    NASA Astrophysics Data System (ADS)

    Jiang, Li Na; Feng, Gao; Shu, Zhang

    2012-09-01

    A method is introduced to measure thin-film thickness, refractive index, and other optical constants. When a beam of white light shines on the surface of the sample film, the light reflected from the upper and lower surfaces of the thin film interferes, and the reflectivity of the film fluctuates with wavelength. The reflection interference spectrum is analyzed with software against a database, from which the thickness and refractive index of the thin film are determined.
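    At normal incidence, the thickness follows from two adjacent reflectance maxima via the standard fringe-spacing relation d = λ1·λ2 / (2n(λ1 − λ2)); the paper's software additionally fits the full spectrum against a database, which this minimal sketch does not attempt.

```python
def film_thickness(lam1, lam2, n):
    """Thin-film thickness from two adjacent interference maxima in the
    normal-incidence reflection spectrum.

    Adjacent maxima satisfy 2*n*d = m*lam1 and 2*n*d = (m+1)*lam2 with
    lam1 > lam2; eliminating the unknown order m gives
    d = lam1*lam2 / (2*n*(lam1 - lam2)).  Wavelengths and the returned
    thickness share the same length unit."""
    return lam1 * lam2 / (2.0 * n * (lam1 - lam2))
```

    For example, a 1000 nm film of index 1.5 has maxima at 600 nm (m = 5) and 500 nm (m = 6), and the formula recovers the thickness from just that pair.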

  11. Interferometric weak measurement of photon polarization

    NASA Astrophysics Data System (ADS)

    Iinuma, Masataka; Suzuki, Yutaro; Taguchi, Gen; Kadoya, Yutaka; Hofmann, Holger F.

    2011-10-01

    We realize a minimum back-action quantum non-demolition measurement of variable strength on photon polarization in the diagonal (PM) basis by two-mode path interference. This method uses the phase difference between the positive (P) and negative (M) superpositions in the interference between the horizontally (H) and vertically (V) polarized paths in the input beam. Although interference cannot occur when the H and V polarizations are distinguishable, a well-controlled amount of interference is induced by erasing the H and V information using a coherent rotation of polarization toward a common diagonal polarization. This method is particularly suitable for the realization of weak measurements, where control of the back-action is essential.

  12. A Method of Spatial Mapping and Reclassification for High-Spatial-Resolution Remote Sensing Image Classification

    PubMed Central

    Wang, Guizhou; Liu, Jianbo; He, Guojin

    2013-01-01

    This paper presents a new classification method for high-spatial-resolution remote sensing images based on a strategic mechanism of spatial mapping and reclassification. The proposed method includes four steps. First, the multispectral image is classified by a traditional pixel-based classification method (support vector machine). Second, the panchromatic image is subdivided by watershed segmentation. Third, the pixel-based multispectral image classification result is mapped to the panchromatic segmentation result based on a spatial mapping mechanism and the area dominant principle. During the mapping process, an area proportion threshold is set, and the regional property is defined as unclassified if the maximum area proportion does not surpass the threshold. Finally, unclassified regions are reclassified based on spectral information using the minimum distance to mean algorithm. Experimental results show that the classification method for high-spatial-resolution remote sensing images based on the spatial mapping mechanism and reclassification strategy can make use of both panchromatic and multispectral information, integrate the pixel- and object-based classification methods, and improve classification accuracy. PMID:24453808
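    The spatial mapping step with the area-dominant principle and the area-proportion threshold can be sketched as follows; the threshold value is illustrative.

```python
from collections import Counter

def map_segment_class(pixel_labels, threshold=0.6):
    """Assign a class to one segmentation region from the pixel-based
    classification of its member pixels (area-dominant principle).

    Returns the majority class if its area share exceeds `threshold`;
    otherwise None, marking the region as unclassified so it can later be
    reclassified from spectral information (e.g. minimum distance to the
    class means)."""
    counts = Counter(pixel_labels)
    label, count = counts.most_common(1)[0]
    if count / len(pixel_labels) > threshold:
        return label
    return None
```

    Regions that fall below the threshold are exactly those where the pixel classifier and the segmentation disagree most, which is why deferring them to a spectral reclassification step improves overall accuracy.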

  13. Classification of single-trial auditory events using dry-wireless EEG during real and motion simulated flight.

    PubMed

    Callan, Daniel E; Durantin, Gautier; Terzibas, Cengiz

    2015-01-01

    Application of neuro-augmentation technology based on dry-wireless EEG may be considerably beneficial for aviation and space operations because of the inherent dangers involved. In this study we evaluate classification performance of perceptual events using a dry-wireless EEG system during motion-platform-based flight simulation and actual flight in an open-cockpit biplane, to determine if the system can be used in the presence of considerable environmental and physiological artifacts. A passive task involving 200 random auditory presentations of a chirp sound was used for evaluation. The advantage of this auditory task is that it does not interfere with the perceptual motor processes involved with piloting the plane. Classification was based on identifying the presentation of a chirp sound vs. silent periods. Independent component analysis (ICA) and Kalman filtering were assessed for their ability to enhance classification performance by extracting brain activity related to the auditory event from other non-task-related brain activity and artifacts. The results of permutation testing revealed that single-trial classification of the presence or absence of an auditory event was significantly above chance for all conditions on a novel test set. The best performance was achieved with both ICA and Kalman filtering relative to no processing: Platform Off (83.4% vs. 78.3%), Platform On (73.1% vs. 71.6%), Biplane Engine Off (81.1% vs. 77.4%), and Biplane Engine On (79.2% vs. 66.1%). This experiment demonstrates that dry-wireless EEG can be used in environments with considerable vibration, wind, acoustic noise, and physiological artifacts and achieve the good single-trial classification performance necessary for future successful application of neuro-augmentation technology based on brain-machine interfaces.


  14. Feature selection and classification of multiparametric medical images using bagging and SVM

    NASA Astrophysics Data System (ADS)

    Fan, Yong; Resnick, Susan M.; Davatzikos, Christos

    2008-03-01

    This paper presents a framework for brain classification based on multi-parametric medical images. This method takes advantage of multi-parametric imaging to provide a set of discriminative features for classifier construction by using a regional feature extraction method which takes into account joint correlations among different image parameters; in the experiments herein, MRI and PET images of the brain are used. Support vector machine classifiers are then trained based on the most discriminative features selected from the feature set. To facilitate robust classification and optimal selection of parameters involved in classification, in view of the well-known "curse of dimensionality", base classifiers are constructed in a bagging (bootstrap aggregating) framework for building an ensemble classifier and the classification parameters of these base classifiers are optimized by means of maximizing the area under the ROC (receiver operating characteristic) curve estimated from their prediction performance on left-out samples of bootstrap sampling. This classification system is tested on a sex classification problem, where it yields over 90% classification rates for unseen subjects. The proposed classification method is also compared with other commonly used classification algorithms, with favorable results. These results illustrate that the methods built upon information jointly extracted from multi-parametric images have the potential to perform individual classification with high sensitivity and specificity.
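    The bagging construction can be sketched as follows; to keep the sketch dependency-free, a nearest-class-mean learner stands in for the SVM base classifiers, and the AUC-based parameter selection on left-out bootstrap samples is omitted.

```python
import random

def nearest_mean_classifier(X, y):
    """Train a nearest-class-mean base classifier (a stand-in for the SVM
    base learners described in the paper)."""
    means = {}
    for lab in set(y):
        rows = [x for x, l in zip(X, y) if l == lab]
        means[lab] = [sum(col) / len(rows) for col in zip(*rows)]
    def predict(x):
        return min(means, key=lambda lab: sum((a - b) ** 2
                                              for a, b in zip(x, means[lab])))
    return predict

def bagging_fit(X, y, n_estimators=15, seed=0):
    """Bagging (bootstrap aggregating): train each base classifier on a
    bootstrap resample of the data and predict by majority vote."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_estimators):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        models.append(nearest_mean_classifier([X[i] for i in idx],
                                              [y[i] for i in idx]))
    def predict(x):
        votes = [m(x) for m in models]
        return max(set(votes), key=votes.count)
    return predict

X = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [5.0, 5.0], [5.0, 6.0], [6.0, 5.0]]
y = [0, 0, 0, 1, 1, 1]
model = bagging_fit(X, y)
```

    The ensemble vote stabilizes predictions against the variance of any single base classifier, which is the same robustness argument the paper makes in the face of the curse of dimensionality.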

  15. A Classification of Remote Sensing Image Based on Improved Compound Kernels of Svm

    NASA Astrophysics Data System (ADS)

    Zhao, Jianing; Gao, Wanlin; Liu, Zili; Mou, Guifen; Lu, Lin; Yu, Lina

    SVM, developed from statistical learning theory, achieves high classification accuracy on remote sensing (RS) images even with a small number of training samples, which makes SVM-based RS classification attractive. The traditional RS classification method combines visual interpretation with computer classification. SVM-based methods, however, improve accuracy considerably while saving much of the labor and time otherwise spent interpreting images and collecting training samples. Kernel functions play an important part in the SVM algorithm. The proposed method uses an improved compound kernel function and therefore achieves higher classification accuracy on RS images. Moreover, the compound kernel improves the generalization and learning ability of the kernel.
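    A compound kernel in this sense is a weighted combination of standard kernels; since a convex combination of valid kernels is itself a valid kernel, it can be plugged into any kernel SVM. The weights and component kernels below are illustrative choices, not the paper's tuned compound kernel.

```python
import math

def rbf(x, z, gamma=0.5):
    """Gaussian (RBF) kernel, good at capturing local structure."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def poly(x, z, degree=2, c=1.0):
    """Polynomial kernel, good at capturing global structure."""
    return (sum(a * b for a, b in zip(x, z)) + c) ** degree

def compound_kernel(x, z, w=0.7, gamma=0.5, degree=2):
    """Convex combination of an RBF and a polynomial kernel.  Mixing a
    local and a global kernel is the usual motivation for compound
    kernels: the blend trades off interpolation and extrapolation."""
    return w * rbf(x, z, gamma) + (1 - w) * poly(x, z, degree)
```

    The mixing weight w becomes one more hyperparameter, tuned alongside gamma and the degree when training the SVM.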

  16. Improved Hierarchical Optimization-Based Classification of Hyperspectral Images Using Shape Analysis

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2012-01-01

    A new spectral-spatial method for classification of hyperspectral images is proposed. The HSegClas method is based on the integration of probabilistic classification and shape analysis within the hierarchical step-wise optimization algorithm. First, probabilistic support vector machine classification is applied. Then, at each iteration the two neighboring regions with the smallest Dissimilarity Criterion (DC) are merged, and classification probabilities are recomputed. The important contribution of this work consists in estimating the DC between regions as a function of statistical, classification, and geometrical (area and rectangularity) features. Experimental results are presented on a 102-band ROSIS image of the center of Pavia, Italy. The developed approach yields more accurate classification results than previously proposed methods.

  17. Neural network approaches versus statistical methods in classification of multisource remote sensing data

    NASA Technical Reports Server (NTRS)

    Benediktsson, Jon A.; Swain, Philip H.; Ersoy, Okan K.

    1990-01-01

    Neural network learning procedures and statistical classification methods are applied and compared empirically in the classification of multisource remote sensing and geographic data. Statistical multisource classification by means of a method based on Bayesian classification theory is also investigated and modified. The modifications permit control of the influence of the data sources involved in the classification process. Reliability measures are introduced to rank the quality of the data sources. The data sources are then weighted according to these rankings in the statistical multisource classification. Four data sources are used in experiments: Landsat MSS data and three forms of topographic data (elevation, slope, and aspect). Experimental results show that the two approaches have unique advantages and disadvantages in this classification application.
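    One simple way to realize the reliability weighting described (the paper's exact weighting scheme may differ) is to scale each source's class log-probability by its reliability weight before summing across sources:

```python
import math

def multisource_classify(source_probs, reliabilities):
    """Combine per-source class probabilities with reliability weights.

    `source_probs` is a list of dicts, class -> P(class | source), one per
    data source; `reliabilities` holds the corresponding weights.  Each
    source's log-probability is scaled by its weight before summing, so a
    weight of 0 silences a source and 1 gives it full influence."""
    classes = source_probs[0].keys()
    score = {c: sum(w * math.log(p[c])
                    for w, p in zip(reliabilities, source_probs))
             for c in classes}
    return max(score, key=score.get)

probs = [{"forest": 0.9, "water": 0.1}, {"forest": 0.4, "water": 0.6}]
```

    With equal weights this reduces to the usual independence-assuming Bayesian combination; unequal weights let a low-quality topographic source disagree without overturning a confident spectral source.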

  18. Multi-task linear programming discriminant analysis for the identification of progressive MCI individuals.

    PubMed

    Yu, Guan; Liu, Yufeng; Thung, Kim-Han; Shen, Dinggang

    2014-01-01

    Accurately identifying mild cognitive impairment (MCI) individuals who will progress to Alzheimer's disease (AD) is very important for making early interventions. Many classification methods focus on integrating multiple imaging modalities such as magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET). However, the main challenge for MCI classification using multiple imaging modalities is the large amount of missing data in many subjects. For example, in the Alzheimer's Disease Neuroimaging Initiative (ADNI) study, almost half of the subjects do not have PET images. In this paper, we propose a new and flexible binary classification method, namely Multi-task Linear Programming Discriminant (MLPD) analysis, for incomplete multi-source feature learning. Specifically, we decompose the classification problem into different classification tasks, i.e., one for each combination of available data sources. To solve all the classification tasks jointly, our proposed MLPD method links them together by constraining them to achieve a similar estimated mean difference between the two classes (under classification) for the shared features. Compared with the state-of-the-art incomplete Multi-Source Feature (iMSF) learning method, which constrains different classification tasks to choose a common feature subset for the shared features, MLPD can flexibly and adaptively choose different feature subsets for different classification tasks. Furthermore, our proposed MLPD method can be efficiently implemented by linear programming. To validate our MLPD method, we perform experiments on the ADNI baseline dataset with incomplete MRI and PET images from 167 progressive MCI (pMCI) subjects and 226 stable MCI (sMCI) subjects.
We further compared our method with the iMSF method (using incomplete MRI and PET images) and also the single-task classification method (using only MRI or only subjects with both MRI and PET images). Experimental results show very promising performance of our proposed MLPD method.

  19. Multi-Task Linear Programming Discriminant Analysis for the Identification of Progressive MCI Individuals

    PubMed Central

    Yu, Guan; Liu, Yufeng; Thung, Kim-Han; Shen, Dinggang

    2014-01-01

    Accurately identifying mild cognitive impairment (MCI) individuals who will progress to Alzheimer's disease (AD) is very important for making early interventions. Many classification methods focus on integrating multiple imaging modalities such as magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET). However, the main challenge for MCI classification using multiple imaging modalities is the large amount of missing data in many subjects. For example, in the Alzheimer's Disease Neuroimaging Initiative (ADNI) study, almost half of the subjects do not have PET images. In this paper, we propose a new and flexible binary classification method, namely Multi-task Linear Programming Discriminant (MLPD) analysis, for incomplete multi-source feature learning. Specifically, we decompose the classification problem into different classification tasks, i.e., one for each combination of available data sources. To solve all the classification tasks jointly, our proposed MLPD method links them together by constraining them to achieve a similar estimated mean difference between the two classes (under classification) for the shared features. Compared with the state-of-the-art incomplete Multi-Source Feature (iMSF) learning method, which constrains different classification tasks to choose a common feature subset for the shared features, MLPD can flexibly and adaptively choose different feature subsets for different classification tasks. Furthermore, our proposed MLPD method can be efficiently implemented by linear programming. To validate our MLPD method, we perform experiments on the ADNI baseline dataset with incomplete MRI and PET images from 167 progressive MCI (pMCI) subjects and 226 stable MCI (sMCI) subjects.
We further compared our method with the iMSF method (using incomplete MRI and PET images) and also the single-task classification method (using only MRI or only subjects with both MRI and PET images). Experimental results show very promising performance of our proposed MLPD method. PMID:24820966

  20. Average Likelihood Methods of Classification of Code Division Multiple Access (CDMA)

    DTIC Science & Technology

    2016-05-01

    Modulation classification is part of a broader problem known as blind or uncooperative demodulation, as in the case of cognitive radio applications.

  1. General expressions for downlink signal to interference and noise ratio in homogeneous and heterogeneous LTE-Advanced networks.

    PubMed

    Ali, Nora A; Mourad, Hebat-Allah M; ElSayed, Hany M; El-Soudani, Magdy; Amer, Hassanein H; Daoud, Ramez M

    2016-11-01

    Interference is the most important problem in LTE and LTE-Advanced networks. In this paper, interference is investigated in terms of the downlink signal to interference and noise ratio (SINR). To compare the different frequency reuse methods developed to enhance the SINR, it is helpful to have a generalized expression for studying the performance of the different methods. This paper therefore introduces general expressions for the SINR in homogeneous and heterogeneous networks. In homogeneous networks, the expression is applied to the most common frequency reuse techniques: soft frequency reuse (SFR) and fractional frequency reuse (FFR). The expression was examined by comparison with previously developed expressions in the literature, and the comparison showed that it is valid for any type of frequency reuse scheme and any network topology. Furthermore, the expression was extended to heterogeneous networks (HetNets), covering both co-tier and cross-tier interference, and was examined by the same method as in the homogeneous case.
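    The quantity being generalized can be illustrated with a generic downlink SINR evaluation; which cells count as co-channel interferers on a given resource block is exactly what a frequency reuse scheme such as SFR or FFR changes, while the ratio itself stays the same. The power values below are arbitrary examples in linear units.

```python
def downlink_sinr(serving_rx_power, interferer_rx_powers, noise_power):
    """Generic downlink SINR: received power from the serving cell divided
    by thermal noise plus the received powers of all co-channel
    interfering cells.  All quantities are in linear units (e.g. watts);
    convert to dB with 10*log10 if needed."""
    return serving_rx_power / (noise_power + sum(interferer_rx_powers))
```

    A reuse scheme that removes a neighboring cell from the co-channel set (as FFR does for cell-edge sub-bands) simply shortens the interferer list, and the SINR improvement follows directly from the expression.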

  2. Assessment and control of spacecraft electromagnetic interference

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Design criteria are presented to provide guidance in assessing electromagnetic interference from onboard sources and establishing requisite control in spacecraft design, development, and testing. A comprehensive state-of-the-art review is given which covers flight experience, sources and transmission of electromagnetic interference, susceptible equipment, design procedure, control techniques, and test methods.

  3. Application of the correlation constrained multivariate curve resolution alternating least-squares method for analyte quantitation in the presence of unexpected interferences using first-order instrumental data.

    PubMed

    Goicoechea, Héctor C; Olivieri, Alejandro C; Tauler, Romà

    2010-03-01

    Correlation constrained multivariate curve resolution-alternating least-squares is shown to be a feasible method for processing first-order instrumental data and achieving analyte quantitation in the presence of unexpected interferences. For both simulated and experimental data sets, the proposed method correctly retrieved the analyte and interference spectral profiles and produced accurate estimates of analyte concentrations in test samples. Since no information concerning the interferences was present in the calibration samples, the proposed multivariate calibration approach, including the correlation constraint, achieves the so-called second-order advantage for the analyte of interest, which is otherwise associated with richer, higher-order instrumental data. The proposed method is tested on a simulated data set and two experimental data systems: one for the determination of ascorbic acid in powder juices using UV-visible absorption spectral data, and another for the determination of tetracycline in serum samples using fluorescence emission spectroscopy.

  4. An information-based network approach for protein classification

    PubMed Central

    Wan, Xiaogeng; Zhao, Xin; Yau, Stephen S. T.

    2017-01-01

    Protein classification is one of the critical problems in bioinformatics. Early studies used geometric distances and phylogenetic trees to classify proteins, presenting the classification as binary trees. In this paper, we propose a new protein classification method in which theories of information and networks are used to capture the multivariate relationships among proteins. The protein universe is modeled as an undirected network, where proteins are classified according to their connections. Our method is unsupervised, multivariate, and alignment-free. It can be applied to the classification of both protein sequences and structures. Nine examples are used to demonstrate the efficiency of our new method. PMID:28350835

  5. Mechanism of ascorbic acid interference in biochemical tests that use peroxide and peroxidase to generate chromophore.

    PubMed

    Martinello, Flávia; Luiz da Silva, Edson

    2006-11-01

    Ascorbic acid interferes negatively in peroxidase-based tests (Trinder method). However, the precise mechanism remains unclear for tests that use peroxide, a phenolic compound and 4-aminophenazone (4-AP). We determined the chemical mechanism of this interference by examining the effects of ascorbic acid on the reaction kinetics of the production and reduction of the oxidized chromophore in urate, cholesterol, triglyceride and glucose tests. The reaction of ascorbic acid with the Trinder method constituents was also verified. Ascorbic acid interfered stoichiometrically with all tests studied. However, it had two distinct effects on the reaction rate. In the urate test, ascorbic acid decreased chromophore formation with no change in its production kinetics. In contrast, in the cholesterol, triglyceride and glucose tests, an increase in the lag phase of color development occurred. Of all the Trinder constituents, only peroxide reverted the interference. In addition, ascorbic acid did not interfere with oxidase activity, nor did it significantly reduce the chromophore formed. Peroxide depletion was the predominant chemical mechanism of ascorbic acid interference in the Trinder method with phenolics and 4-AP. The distinctive effects of ascorbic acid on the reaction kinetics of the urate, cholesterol, glucose and triglyceride tests might be due to the rate of peroxide production by the oxidases.

  6. Ensemble Sparse Classification of Alzheimer’s Disease

    PubMed Central

    Liu, Manhua; Zhang, Daoqiang; Shen, Dinggang

    2012-01-01

    High-dimensional pattern classification methods, e.g., support vector machines (SVM), have been widely investigated for the analysis of structural and functional brain images (such as magnetic resonance imaging (MRI)) to assist the diagnosis of Alzheimer’s disease (AD), including its prodromal stage, i.e., mild cognitive impairment (MCI). Most existing classification methods extract features from neuroimaging data and then construct a single classifier to perform classification. However, due to noise and the small sample size of neuroimaging data, it is challenging to train a single global classifier that is robust enough to achieve good classification performance. In this paper, instead of building a single global classifier, we propose a local patch-based subspace ensemble method which builds multiple individual classifiers based on different subsets of local patches and then combines them for more accurate and robust classification. Specifically, to capture the local spatial consistency, each brain image is partitioned into a number of local patches and a subset of patches is randomly selected from the patch pool to build a weak classifier. Here, the sparse representation-based classification (SRC) method, which has been shown to be effective for the classification of image data (e.g., faces), is used to construct each weak classifier. Then, multiple weak classifiers are combined to make the final decision. We evaluate our method on 652 subjects (including 198 AD patients, 225 MCI patients and 229 normal controls) from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database using MR images. The experimental results show that our method achieves an accuracy of 90.8% and an area under the ROC curve (AUC) of 94.86% for AD classification, and an accuracy of 87.85% and an AUC of 92.90% for MCI classification, demonstrating very promising performance compared with the state-of-the-art methods for AD/MCI classification using MR images. PMID:22270352
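    The patch-based ensemble idea can be sketched as follows. The toy "weak classifier" is a simple threshold rule standing in for the SRC classifiers used in the paper, and all patch values are invented:

```python
import random

random.seed(0)

def weak_classifier(patch_subset):
    # Toy stand-in for an SRC weak classifier: label 1 ("patient")
    # if the mean intensity of the selected patches exceeds 0.5.
    mean = sum(patch_subset) / len(patch_subset)
    return 1 if mean > 0.5 else 0

def ensemble_predict(patches, n_classifiers=15, subset_size=4):
    # Each weak classifier sees a random subset of patches;
    # the final label is a majority vote over all classifiers.
    votes = 0
    for _ in range(n_classifiers):
        subset = random.sample(patches, subset_size)
        votes += weak_classifier(subset)
    return 1 if votes > n_classifiers / 2 else 0

patches = [0.9, 0.8, 0.7, 0.6, 0.2, 0.75, 0.85, 0.65]  # one "image"
print(ensemble_predict(patches))
```

    Because each classifier sees only part of the image, a few noisy patches (like the 0.2 above) are unlikely to flip the majority vote, which is the robustness argument the abstract makes.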

  7. Feature ranking and rank aggregation for automatic sleep stage classification: a comparative study.

    PubMed

    Najdi, Shirin; Gharbali, Ali Abdollahi; Fonseca, José Manuel

    2017-08-18

    Nowadays, sleep quality is one of the most important measures of healthy life, especially considering the huge number of sleep-related disorders. Identifying sleep stages using polysomnographic (PSG) signals is the traditional way of assessing sleep quality. However, the manual process of sleep stage classification is time-consuming, subjective and costly. Therefore, in order to improve the accuracy and efficiency of sleep stage classification, researchers have been trying to develop automatic classification algorithms. Automatic sleep stage classification mainly consists of three steps: pre-processing, feature extraction and classification. Since classification accuracy is deeply affected by the extracted features, a poor feature vector will adversely affect the classifier and eventually lead to low classification accuracy. Therefore, special attention should be given to the feature extraction and selection process. In this paper, the performance of seven feature selection methods, as well as two feature rank aggregation methods, was compared. Pz-Oz EEG, horizontal EOG and submental chin EMG recordings of 22 healthy males and females were used. A comprehensive feature set including 49 features was extracted from these recordings. The extracted features are among the most common and effective features used in sleep stage classification, drawn from the temporal, spectral, entropy-based and nonlinear categories. The feature selection methods were evaluated and compared using three criteria: classification accuracy, stability, and similarity. Simulation results show that MRMR-MID achieves the highest classification performance, while the Fisher method provides the most stable ranking. In our simulations, the performance of the aggregation methods was only average, although they are known to generate more stable results and better accuracy. The Borda and RRA rank aggregation methods did not significantly outperform the conventional feature ranking methods. Among the conventional methods, some performed slightly better than others, although the choice of a suitable technique depends on the computational complexity and accuracy requirements of the user.
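    Of the aggregation methods compared, the Borda count is simple to state: each individual ranking awards points by position, and features are re-ranked by total points. A minimal sketch (the feature names are invented):

```python
def borda_aggregate(rankings):
    """Borda count: with n features, each ranking awards n-1 points to
    its top feature, n-2 to the next, and so on; features are then
    re-ranked by total points across all rankings."""
    n = len(rankings[0])
    scores = {}
    for ranking in rankings:
        for position, feature in enumerate(ranking):
            scores[feature] = scores.get(feature, 0) + (n - 1 - position)
    return sorted(scores, key=lambda f: -scores[f])

# Three hypothetical feature-ranking methods ordering four features:
r1 = ["entropy", "power", "kurtosis", "zero_cross"]
r2 = ["power", "entropy", "zero_cross", "kurtosis"]
r3 = ["entropy", "zero_cross", "power", "kurtosis"]
print(borda_aggregate([r1, r2, r3]))
```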

  8. SAPT units turn-on in an interference-dominant environment. [Stand Alone Pressure Transducer]

    NASA Technical Reports Server (NTRS)

    Peng, W.-C.; Yang, C.-C.; Lichtenberg, C.

    1990-01-01

    A stand alone pressure transducer (SAPT) is a credit-card-sized smart pressure sensor inserted between the tile and the aluminum skin of a space shuttle. Reliably initiating the SAPT units via RF signals in a prelaunch environment is a challenging problem. Multiple-source interference may exist if more than one GSE (ground support equipment) antenna is turned on at the same time to meet the simultaneity requirement of 10 ms. A polygon model for orbiter, external tank, solid rocket booster, and tail service masts is used to simulate the prelaunch environment. Geometric optics is then applied to identify the coverage areas and the areas which are vulnerable to multipath and/or multiple-source interference. Simulation results show that the underside areas of an orbiter have incidence angles exceeding 80 deg. For multipath interference, both sides of the cargo bay areas are found to be vulnerable to a worst-case multipath loss exceeding 20 dB. Multiple-source interference areas are also identified. Mitigation methods for the coverage and interference problem are described. It is shown that multiple-source interference can be eliminated (or controlled) using the time-division-multiplexing method or the time-stamp approach.

  9. Ethical Perspectives on RNA Interference Therapeutics

    PubMed Central

    Ebbesen, Mette; Jensen, Thomas G.; Andersen, Svend; Pedersen, Finn Skou

    2008-01-01

    RNA interference is a mechanism for controlling normal gene expression which has recently begun to be employed as a potential therapeutic agent for a wide range of disorders, including cancer, infectious diseases and metabolic disorders. Clinical trials with RNA interference have begun. However, challenges such as off-target effects, toxicity and safe delivery methods have to be overcome before RNA interference can be considered as a conventional drug. So, if RNA interference is to be used therapeutically, we should perform a risk-benefit analysis. It is ethically relevant to perform a risk-benefit analysis since ethical obligations about not inflicting harm and promoting good are generally accepted. But the ethical issues in RNA interference therapeutics not only include a risk-benefit analysis, but also considerations about respecting the autonomy of the patient and considerations about justice with regard to the inclusion criteria for participation in clinical trials and health care allocation. RNA interference is considered a new and promising therapeutic approach, but the ethical issues of this method have not been greatly discussed, so this article analyses these issues using the bioethical theory of principles of the American bioethicists, Tom L. Beauchamp and James F. Childress. PMID:18612370

  10. Hierarchic Agglomerative Clustering Methods for Automatic Document Classification.

    ERIC Educational Resources Information Center

    Griffiths, Alan; And Others

    1984-01-01

    Considers classifications produced by application of single linkage, complete linkage, group average, and word clustering methods to Keen and Cranfield document test collections, and studies structure of hierarchies produced, extent to which methods distort input similarity matrices during classification generation, and retrieval effectiveness…
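    The linkage methods named above differ only in how the distance between two clusters is defined. A naive sketch for 1-D points (single linkage = minimum pairwise distance, complete linkage = maximum; the data are invented):

```python
def agglomerative(points, linkage, n_clusters):
    """Naive hierarchic agglomerative clustering of 1-D points.
    linkage: 'single' (min pairwise distance) or 'complete' (max)."""
    clusters = [[p] for p in points]

    def dist(a, b):
        gaps = [abs(x - y) for x in a for y in b]
        return min(gaps) if linkage == "single" else max(gaps)

    while len(clusters) > n_clusters:
        # Find the closest pair of clusters and merge them.
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return [sorted(c) for c in clusters]

points = [1.0, 1.2, 5.0, 5.1, 9.0]
print(agglomerative(points, "single", 3))
```

    Group-average linkage (also studied in the article) would replace `min`/`max` with the mean of the pairwise gaps; the O(n^3) loop here is for clarity, not efficiency.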

  11. Data Field Modeling and Spectral-Spatial Feature Fusion for Hyperspectral Data Classification.

    PubMed

    Liu, Da; Li, Jianxun

    2016-12-16

    Classification is a significant subject in hyperspectral remote sensing image processing. This study proposes a spectral-spatial feature fusion algorithm for the classification of hyperspectral images (HSI). Unlike existing spectral-spatial classification methods, the influences and interactions of the surroundings on each measured pixel were taken into consideration in this paper. Data field theory was employed as the mathematical realization of the field theory concept in physics, and both the spectral and spatial domains of HSI were considered as data fields. Therefore, the inherent dependency of interacting pixels was modeled. Using data field modeling, spatial and spectral features were transformed into a unified radiation form and further fused into a new feature by using a linear model. In contrast to the current spectral-spatial classification methods, which usually simply stack spectral and spatial features together, the proposed method builds the inner connection between the spectral and spatial features, and explores the hidden information that contributed to classification. Therefore, new information is included for classification. The final classification result was obtained using a random forest (RF) classifier. The proposed method was tested with the University of Pavia and Indian Pines, two well-known standard hyperspectral datasets. The experimental results demonstrate that the proposed method has higher classification accuracies than those obtained by the traditional approaches.

  12. Apparatus and method for creating a photonic densely-accumulated ray-point

    NASA Technical Reports Server (NTRS)

    Park, Yeonjoon (Inventor); Choi, Sang H. (Inventor); King, Glen C. (Inventor); Elliott, James R. (Inventor)

    2012-01-01

    An optical apparatus includes an optical diffraction device configured for diffracting a predetermined wavelength of incident light onto adjacent optical focal points, and a photon detector for detecting a spectral characteristic of the predetermined wavelength. One of the optical focal points is a constructive interference point and the other optical focal point is a destructive interference point. The diffraction device, which may be a micro-zone plate (MZP) of micro-ring gratings or an optical lens, generates a constructive ray point using phase-contrasting of the destructive interference point. The ray point is located between adjacent optical focal points. A method of generating a densely-accumulated ray point includes directing incident light onto the optical diffraction device, diffracting the selected wavelength onto the constructive interference focal point and the destructive interference focal point, and generating the densely-accumulated ray point in a narrow region.

  13. Comparison of hand-craft feature based SVM and CNN based deep learning framework for automatic polyp classification.

    PubMed

    Younghak Shin; Balasingham, Ilangko

    2017-07-01

    Colonoscopy is a standard method for polyp screening by highly trained physicians. Polyps missed during colonoscopy are a potential risk factor for colorectal cancer. In this study, we investigate an automatic polyp classification framework, comparing two approaches: a hand-crafted feature method and a convolutional neural network (CNN) based deep learning method. Combined shape and color features are used for hand-crafted feature extraction, and a support vector machine (SVM) is adopted for classification. For the CNN approach, a deep learning framework with three convolution and pooling layers is used for classification. The proposed framework is evaluated using three public polyp databases. The experimental results show that the CNN-based deep learning framework achieves better classification performance than the hand-crafted feature based methods, reaching over 90% classification accuracy, sensitivity, specificity and precision.

  14. Behavior Based Social Dimensions Extraction for Multi-Label Classification

    PubMed Central

    Li, Le; Xu, Junyi; Xiao, Weidong; Ge, Bin

    2016-01-01

    Classification based on social dimensions is commonly used to handle the multi-label classification task in heterogeneous networks. However, traditional methods, which mostly rely on the community detection algorithms to extract the latent social dimensions, produce unsatisfactory performance when community detection algorithms fail. In this paper, we propose a novel behavior based social dimensions extraction method to improve the classification performance in multi-label heterogeneous networks. In our method, nodes’ behavior features, instead of community memberships, are used to extract social dimensions. By introducing Latent Dirichlet Allocation (LDA) to model the network generation process, nodes’ connection behaviors with different communities can be extracted accurately, which are applied as latent social dimensions for classification. Experiments on various public datasets reveal that the proposed method can obtain satisfactory classification results in comparison to other state-of-the-art methods on smaller social dimensions. PMID:27049849

  15. Site Classification using the Multichannel Analysis of Surface Waves (MASW) method on Soft and Hard Ground

    NASA Astrophysics Data System (ADS)

    Ashraf, M. A. M.; Kumar, N. S.; Yusoh, R.; Hazreek, Z. A. M.; Aziman, M.

    2018-04-01

    Site classification utilizing the average shear wave velocity over the top 30 meters of depth, Vs(30), is a typical parameter. Numerous geophysical methods have been proposed for estimating shear wave velocity, using a variety of testing configurations, processing methods, and inversion algorithms. The Multichannel Analysis of Surface Waves (MASW) method is widely applied by specialists and practitioners in geotechnical engineering for local site characterization and classification. This study aims to determine the site classification on soft and hard ground using the MASW method. The subsurface classification was made using the National Earthquake Hazards Reduction Program (NEHRP) and International Building Code (IBC) classifications. Two sites were chosen for acquiring the shear wave velocity: one in the state of Pulau Pinang for soft soil and one in Perlis for hard rock. The results suggest that the MASW technique can be used to map the spatial distribution of shear wave velocity, Vs(30), in soil and rock in order to characterize such areas.
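    The Vs(30) parameter underlying the classification is a time-averaged velocity over the top 30 m: Vs30 = 30 / Σ(d_i / v_i), summed over the soil layers. A minimal sketch with an invented two-layer soft-soil profile and simplified NEHRP class boundaries:

```python
def vs30(layer_thicknesses_m, layer_velocities_ms):
    """Time-averaged shear-wave velocity over the top 30 m:
    Vs30 = 30 / sum(d_i / v_i); layers are assumed to total 30 m."""
    travel_time = sum(d / v for d, v in
                      zip(layer_thicknesses_m, layer_velocities_ms))
    return 30.0 / travel_time

def nehrp_class(v):
    # Simplified NEHRP site-class boundaries in m/s (A: hard rock
    # down to E: soft soil).
    if v > 1500: return "A"
    if v > 760:  return "B"
    if v > 360:  return "C"
    if v > 180:  return "D"
    return "E"

# Hypothetical soft-soil profile: 10 m at 150 m/s over 20 m at 300 m/s.
v = vs30([10, 20], [150, 300])
print(round(v), nehrp_class(v))
```

    Note that the harmonic (travel-time) average weights slow layers heavily: the profile above averages to 225 m/s, well below the arithmetic mean of the two layer velocities.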

  16. Testing Multivariate Adaptive Regression Splines (MARS) as a Method of Land Cover Classification of TERRA-ASTER Satellite Images.

    PubMed

    Quirós, Elia; Felicísimo, Angel M; Cuartero, Aurora

    2009-01-01

    This work proposes a new method to classify multi-spectral satellite images based on multivariate adaptive regression splines (MARS) and compares this classification system with the more common parallelepiped and maximum likelihood (ML) methods. We apply the classification methods to the land cover classification of a test zone located in southwestern Spain. The basis of the MARS method and its associated procedures are explained in detail, and the area under the ROC curve (AUC) is compared for the three methods. The results show that the MARS method provides better results than the parallelepiped method in all cases, and it provides better results than the maximum likelihood method in 13 cases out of 17. These results demonstrate that the MARS method can be used in isolation or in combination with other methods to improve the accuracy of soil cover classification. The improvement is statistically significant according to the Wilcoxon signed rank test.

  17. LASER BIOLOGY: Peculiarities of studying an isolated neuron by the method of laser interference microscopy

    NASA Astrophysics Data System (ADS)

    Yusipovich, Alexander I.; Novikov, Sergey M.; Kazakova, Tatiana A.; Erokhova, Liudmila A.; Brazhe, Nadezda A.; Lazarev, Grigory L.; Maksimov, Georgy V.

    2006-09-01

    Actual aspects of using a new method of laser interference microscopy (LIM) for studying nerve cells are discussed. The peculiarities of the LIM display of neurons are demonstrated by the example of isolated neurons of a pond snail Lymnaea stagnalis. A comparative analysis of the images of the cell and subcellular structures of a neuron obtained by the methods of interference microscopy, optical transmission microscopy, and confocal microscopy is performed. Various aspects of the application of LIM for studying the lateral dimensions and internal structure of the cytoplasm and organelles of a neuron in cytology and cell physiology are discussed.

  18. Quantitative DIC microscopy using an off-axis self-interference approach.

    PubMed

    Fu, Dan; Oh, Seungeun; Choi, Wonshik; Yamauchi, Toyohiko; Dorn, August; Yaqoob, Zahid; Dasari, Ramachandra R; Feld, Michael S

    2010-07-15

    Traditional Nomarski differential interference contrast (DIC) microscopy is a very powerful method for imaging unstained biological samples. However, one of its major limitations is the nonquantitative nature of the imaging. To overcome this problem, we developed a quantitative DIC microscopy method based on off-axis sample self-interference. A digital holography algorithm is applied to obtain quantitative phase gradients in orthogonal directions, which leads to a quantitative phase image through a spiral integration of the phase gradients. This method is practically simple to implement on any standard microscope without stringent requirements on polarization optics. Optical sectioning can be obtained through an enlarged illumination NA.
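    The final integration step can be illustrated in one dimension: cumulatively summing measured phase gradients recovers the phase profile up to an additive constant. This is a toy 1-D sketch, not the paper's two-dimensional spiral integration:

```python
def integrate_gradient(grad, dx):
    """Recover a 1-D phase profile (up to a constant offset) by
    cumulative summation of sampled phase gradients, the 1-D analogue
    of integrating DIC phase-gradient data."""
    phase = [0.0]
    for g in grad:
        phase.append(phase[-1] + g * dx)
    return phase

# Toy check: a constant gradient of 2 rad/um over 1 um of samples
# should integrate to a linear ramp reaching 2 rad.
dx = 0.1                 # sample spacing (um), invented
grad = [2.0] * 10        # measured gradient samples (rad/um), invented
print(round(integrate_gradient(grad, dx)[-1], 6))
```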

  19. Statistical methods and neural network approaches for classification of data from multiple sources

    NASA Technical Reports Server (NTRS)

    Benediktsson, Jon Atli; Swain, Philip H.

    1990-01-01

    Statistical methods for the classification of data from multiple data sources are investigated and compared to neural network models. A general problem with using conventional multivariate statistical approaches to classify data of multiple types is that a multivariate distribution cannot be assumed for the classes in the data sources. Another common problem with statistical classification methods is that the data sources are not equally reliable. This means that the data sources need to be weighted according to their reliability, but most statistical classification methods have no mechanism for this. This research focuses on statistical methods which can overcome these problems: a method of statistical multisource analysis and consensus theory. Reliability measures for weighting the data sources in these methods are suggested and investigated. Secondly, this research focuses on neural network models. The neural networks are distribution-free, since no prior knowledge of the statistical distribution of the data is needed. This is an obvious advantage over most statistical classification methods. The neural networks also automatically take care of how much weight each data source should have. On the other hand, their training process is iterative and can take a very long time. Methods to speed up the training procedure are introduced and investigated. Experimental results of classification using both neural network models and statistical methods are given, and the approaches are compared based on these results.

  20. Doppler Radar Vital Signs Detection Method Based on Higher Order Cyclostationary.

    PubMed

    Yu, Zhibin; Zhao, Duo; Zhang, Zhiqiang

    2017-12-26

    Due to their non-contact nature, Doppler radar sensors for detecting vital signs such as the heart and respiration rates of a human subject are getting more and more attention. However, the related detection-method research faces many challenges due to electromagnetic interference, clutter and random motion interference. In this paper, a novel third-order cyclic cumulant (TOCC) detection method, which is insensitive to Gaussian interference and non-cyclic signals, is proposed to estimate the heart and respiration rates using continuous-wave Doppler radars. The k-th order cyclostationary properties of the radar signal with hidden periodicities and random motions are analyzed. The third-order cyclostationary detection theory of the heart and respiration rate is studied. Experimental results show that the third-order cyclostationary approach has better estimation accuracy for detecting vital signs from the received radar signal under low SNR, strong clutter noise and random motion interference.
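    The key property exploited here is that third-order cumulants of Gaussian processes vanish, so Gaussian interference is suppressed while skewed, non-Gaussian components survive. A minimal numerical sketch using synthetic data (not radar signals):

```python
import random

def third_order_cumulant(x, tau1, tau2):
    """Sample third-order cumulant C3(tau1, tau2) =
    E[x(t) x(t+tau1) x(t+tau2)] of a mean-removed sequence;
    it vanishes for Gaussian processes."""
    n = len(x) - max(tau1, tau2)
    m = sum(x) / len(x)
    xc = [v - m for v in x]
    return sum(xc[t] * xc[t + tau1] * xc[t + tau2] for t in range(n)) / n

random.seed(1)
gaussian = [random.gauss(0, 1) for _ in range(20000)]       # "interference"
skewed = [random.expovariate(1.0) for _ in range(20000)]    # non-Gaussian

print(abs(third_order_cumulant(gaussian, 1, 2)),
      third_order_cumulant(skewed, 0, 0))
```

    The Gaussian sequence's cumulant estimate is close to zero, while the skewed sequence's C3(0,0) (its third central moment, 2 for a unit exponential) clearly survives; this selectivity is what makes third-order statistics attractive under Gaussian clutter.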

  1. Computationally Efficient Radio Frequency Source Localization for Radio Interferometric Arrays

    NASA Astrophysics Data System (ADS)

    Steeb, J.-W.; Davidson, David B.; Wijnholds, Stefan J.

    2018-03-01

    Radio frequency interference (RFI) is an ever-increasing problem for remote sensing and radio astronomy, with radio telescope arrays especially vulnerable to RFI. Localizing the RFI source is the first step in dealing with the culprit system. In this paper, a new localization algorithm for interferometric arrays with low array beam sidelobes is presented. The algorithm has been adapted to work both in the near field and the far field (only the direction of arrival can be recovered when the source is in the far field). In the near field, the computational complexity of the algorithm is linear in the search grid size, compared with the cubic scaling of the state-of-the-art 3-D MUltiple SIgnal Classification (MUSIC) method, and the new method is as accurate as 3-D MUSIC. The trade-off is that the proposed algorithm requires a once-off a priori calculation and storing of weighting matrices. The accuracy of the algorithm is validated using data generated by a low-frequency array while a hexacopter flew around it broadcasting a continuous-wave signal. For the flight, the mean distance between the differential GPS positions and the corresponding estimated positions of the hexacopter is 2 m at a wavelength of 6.7 m.

  2. Enzyme immunoassays for IgG and IgM antibodies to Toxoplasma gondii based on enhanced chemiluminescence.

    PubMed Central

    Crouch, C F

    1995-01-01

    AIMS--To evaluate the clinical performance of enzyme immunoassays for IgG and IgM antibodies to Toxoplasma gondii based on enhanced chemiluminescence. METHODS--Classification of routine clinical samples from the originating laboratories was compared with that obtained using the chemiluminescence based assays. Resolution of discordant results was achieved by testing in alternative enzyme immunoassays (IgM) or by an independent laboratory using the dye test (IgG). RESULTS--Compared with resolved data, the IgM assay was found to be highly specific (100%) with a cut off selected to give optimal performance with respect to both the early detection of specific IgM and the detection of persistent levels of specific IgM (sensitivity 98%). Compared with resolved data, the IgG assay was shown to have a sensitivity and a specificity of 99.4%. CONCLUSIONS--The Amerlite Toxo IgM assay possesses high levels of sensitivity and specificity. Assay interference due to rheumatoid factor like substances is not a problem. The Amerlite Toxo IgG assay possesses good sensitivity and specificity, but is less sensitive for the detection of seroconversion than methods detecting both IgG and IgM. PMID:7560174

  3. Evaluation of gene expression classification studies: factors associated with classification performance.

    PubMed

    Novianti, Putri W; Roes, Kit C B; Eijkemans, Marinus J C

    2014-01-01

    Classification methods used in microarray studies of gene expression are diverse in the way they deal with the underlying complexity of the data, as well as in the technique used to build the classification model. The MAQC II study on cancer classification problems found that performance was affected by factors such as the classification algorithm, cross-validation method, number of genes, and gene selection method. In this paper, we study the hypothesis that the disease under study significantly determines which method is optimal, and that sample size, class imbalance, type of medical question (diagnostic, prognostic or treatment response), and microarray platform are additionally influential. A systematic literature review was used to extract the information from 48 published articles on non-cancer microarray classification studies. The impact of the various factors on the reported classification accuracy was analyzed through random-intercept logistic regression. The type of medical question and method of cross-validation dominated the explained variation in accuracy among studies, followed by disease category and microarray platform. In total, 42% of the between-study variation was explained by all the study-specific and problem-specific factors that we studied together.

  4. Automated Feature Identification and Classification Using Automated Feature Weighted Self Organizing Map (FWSOM)

    NASA Astrophysics Data System (ADS)

    Starkey, Andrew; Usman Ahmad, Aliyu; Hamdoun, Hassan

    2017-10-01

    This paper investigates the application of a novel classification method, the Feature Weighted Self Organizing Map (FWSOM), which analyses the topology of a converged standard Self Organizing Map (SOM) to automatically guide the selection of important inputs during training, improving the classification of data with redundant inputs. It is examined against two traditional approaches, neural networks and Support Vector Machines (SVM), for the classification of EEG data as presented in previous work. In particular, the novel method automatically identifies the features that are important for classification, and these important features can then be used to improve the diagnostic ability of any of the above methods. The paper presents the results and shows how the automated identification of the important features succeeded on the dataset, improving the classification results for all methods apart from linear discriminatory methods, which cannot separate the underlying nonlinear relationship in the data. In addition to achieving higher classification accuracy, the FWSOM has given insights into which features are important in the classification of each class (left- and right-hand movements), and these are corroborated by already published work in this area.

  5. ARSENIC DETERMINATION BY THE SILVER DIETHYLDITHIOCARBAMATE METHOD AND THE ELIMINATION OF METAL ION INTERFERENCE

    EPA Science Inventory

    The interference of metals with the determination of arsenic by the silver diethyldithiocarbamate (SDDC) Method was investigated. Low recoveries of arsenic are obtained when cobalt, chromium, molybdenum, nitrate, nickel or phosphate are at concentrations of 7 mg/l or above (indiv...

  6. Using methods from the data mining and machine learning literature for disease classification and prediction: A case study examining classification of heart failure sub-types

    PubMed Central

    Austin, Peter C.; Tu, Jack V.; Ho, Jennifer E.; Levy, Daniel; Lee, Douglas S.

    2014-01-01

    Objective Physicians classify patients into those with or without a specific disease. Furthermore, there is often interest in classifying patients according to disease etiology or subtype. Classification trees are frequently used to classify patients according to the presence or absence of a disease. However, classification trees can suffer from limited accuracy. In the data-mining and machine learning literature, alternate classification schemes have been developed. These include bootstrap aggregation (bagging), boosting, random forests, and support vector machines. Study design and Setting We compared the performance of these classification methods with those of conventional classification trees to classify patients with heart failure according to the following sub-types: heart failure with preserved ejection fraction (HFPEF) vs. heart failure with reduced ejection fraction (HFREF). We also compared the ability of these methods to predict the probability of the presence of HFPEF with that of conventional logistic regression. Results We found that modern, flexible tree-based methods from the data mining literature offer substantial improvement in prediction and classification of heart failure sub-type compared to conventional classification and regression trees. However, conventional logistic regression had superior performance for predicting the probability of the presence of HFPEF compared to the methods proposed in the data mining literature. Conclusion The use of tree-based methods offers superior performance over conventional classification and regression trees for predicting and classifying heart failure subtypes in a population-based sample of patients from Ontario. However, these methods do not offer substantial improvements over logistic regression for predicting the presence of HFPEF. PMID:23384592
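    Bootstrap aggregation (bagging), one of the schemes compared, can be sketched with decision stumps as base classifiers. The one-feature dataset and threshold rule below are purely illustrative, not the study's heart failure data:

```python
import random

random.seed(42)

def train_stump(data):
    """Pick the threshold t on feature x that best separates the
    labels under the rule 'predict 1 if x > t'."""
    xs = sorted({x for x, _ in data})
    best_t, best_err = xs[0] - 1, len(data) + 1
    for t in [xs[0] - 1] + [(a + b) / 2 for a, b in zip(xs, xs[1:])]:
        err = sum((x > t) != (y == 1) for x, y in data)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def bagged_predict(stumps, x):
    # Majority vote over the thresholds learned on bootstrap samples.
    votes = sum(x > t for t in stumps)
    return 1 if votes * 2 > len(stumps) else 0

# Toy labeled data: label 1 when the feature exceeds 5.
data = [(1, 0), (2, 0), (3, 0), (4, 0), (5, 0),
        (6, 1), (7, 1), (8, 1), (9, 1), (10, 1)]
stumps = [train_stump([random.choice(data) for _ in range(len(data))])
          for _ in range(25)]
accuracy = sum(bagged_predict(stumps, x) == y for x, y in data) / len(data)
print(accuracy)
```

    Each bootstrap sample yields a slightly different threshold; averaging the votes reduces the variance of any single stump, which is the mechanism behind the improvement the study reports for tree ensembles.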

  7. A new adaptive L1-norm for optimal descriptor selection of high-dimensional QSAR classification model for anti-hepatitis C virus activity of thiourea derivatives.

    PubMed

    Algamal, Z Y; Lee, M H

    2017-01-01

    A high-dimensional quantitative structure-activity relationship (QSAR) classification model typically contains a large number of irrelevant and redundant descriptors. In this paper, a new design of descriptor selection for QSAR classification model estimation is proposed by adding a new weight inside the L1-norm. The experimental results of classifying the anti-hepatitis C virus activity of thiourea derivatives demonstrate that the proposed descriptor selection method performs effectively and competitively compared with other existing penalized methods in terms of classification performance on both the training and the testing datasets. Moreover, it is noteworthy that the results of the stability test and the applicability domain analysis indicate a robust QSAR classification model. It is evident from these results that the developed QSAR classification model could conceivably be employed for further high-dimensional QSAR classification studies.
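A "weight inside the L1-norm" is the mechanism of the adaptive lasso family: each descriptor's penalty is scaled by a data-driven weight, so weakly supported descriptors are shrunk harder and dropped. A minimal sketch of that mechanism, assuming the common weight form w_j = 1/(|b_j| + eps) applied through soft-thresholding (the paper's exact weighting and estimator may differ):

```python
def soft_threshold(z, t):
    # S(z, t) = sign(z) * max(|z| - t, 0): the proximal operator of the L1 norm.
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

# Adaptive weights: descriptors with small preliminary coefficients get
# large weights and are shrunk harder (assumed form w_j = 1/(|b_j| + eps)).
beta_init = [2.5, 0.8, 0.05, 0.0]   # hypothetical preliminary estimates
eps, lam = 1e-6, 0.5
weights = [1.0 / (abs(b) + eps) for b in beta_init]
beta = [soft_threshold(b, lam * w) for b, w in zip(beta_init, weights)]
print(beta)  # the two small coefficients are driven exactly to zero
```

The weighted penalty keeps strong descriptors nearly unshrunk while eliminating noise descriptors, which is what makes such penalties useful for high-dimensional descriptor selection.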

  8. Improved Phase-Mask Fabrication of Fiber Bragg Gratings

    NASA Technical Reports Server (NTRS)

    Grant, Joseph; Wang, Ying; Sharma, Anup

    2004-01-01

    An improved method of fabrication of Bragg gratings in optical fibers combines the best features of two prior methods: one that involves the use of a phase mask and one that involves interference between two coherent laser beams. The improved method affords flexibility for tailoring Bragg wavelengths and bandwidths over wide ranges. A Bragg grating in an optical fiber is a periodic longitudinal variation in the index of refraction of the fiber core. The spatial period (Bragg wavelength) is chosen to obtain enhanced reflection of light of a given wavelength that would otherwise propagate relatively unimpeded along the core. Optionally, the spatial period of the index modulation can be made to vary gradually along the grating (such a grating is said to be "chirped") in order to obtain enhanced reflection across a wavelength band, the width of which is determined by the difference between the maximum and minimum Bragg wavelengths. In the present method, as in both prior methods, a Bragg grating is formed by exposing an optical fiber to an ultraviolet-light interference field. The Bragg grating coincides with the pattern of exposure of the fiber core to ultraviolet light; in other words, the Bragg grating coincides with the interference fringes. Hence, the problem of tailoring the Bragg wavelength and bandwidth is largely one of tailoring the interference pattern and the placement of the fiber in the interference pattern. In the prior two-beam interferometric method, a single laser beam is split into two beams, which are subsequently recombined to produce an interference pattern at the location of an optical fiber. In the prior phase-mask method, a phase mask is used to diffract a laser beam mainly into two first orders, the interference between which creates the pattern to which an optical fiber is exposed.
The prior two-beam interferometric method offers the advantage that the period of the interference pattern can be adjusted to produce gratings over a wide range of Bragg wavelengths, but offers the disadvantage that success depends on precise alignment and high mechanical stability. The prior phase-mask method affords the advantages of compactness of equipment and relative insensitivity to both misalignment and vibration, but does not afford adjustability of the Bragg wavelength. The present method affords both the flexibility of the prior two-beam interferometric method and the compactness and stability of the prior phase-mask method. In this method (see figure), a laser beam propagating along the x axis is normally incident on a phase mask that lies in the (y,z) plane. The phase of light propagating through the mask is modulated with a spatial periodicity, p, along the y axis chosen to diffract the laser light primarily into the two first orders at the diffraction angle ±θ. (The zero-order laser light propagating along the x axis can be used for alignment and thereafter suppressed during exposure of the fiber.) The diffracted light passes through a concave cylindrical lens, which converts the flat diffracted wave fronts to cylindrical ones, as though the light emanated from a line source. Then two parallel flat mirrors recombine the diffracted beams to form an interference field equivalent to that of two coherent line sources at positions A and B (virtual sources). The interference pattern is a known function of the parameters of the apparatus and of position (x,y) in the interference field. Hence, the tilt, wavelength, and chirp of the Bragg grating can be chosen through suitable adjustments of the apparatus and/or of the position and orientation of the optical fiber. In particular, the Bragg wavelength can be adjusted by moving the fiber along the x axis, and the bandwidth can be modified over a wide range by changing the fiber tilt angle or by moving the phase mask and/or the fiber.
Alignment is easy because the zero-order beam defines the x axis. The interference pattern is relatively stable and insensitive to mechanical vibration because of the high symmetry and compactness of the apparatus, the fixed positions of the mirrors and lens, and the consequent fixed positions of the two virtual line sources, which are independent of translations of the phase mask and the laser relative to the lens.
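The relationship between the interference geometry and the resulting Bragg wavelength can be made concrete: two beams crossing at half-angle θ produce fringes of period Λ = λ_UV / (2 sin θ), and a grating of pitch Λ reflects at λ_B = 2 n_eff Λ. A quick numeric sketch (all values are illustrative assumptions, not figures from this article):

```python
import math

# Two-beam interference fringe period and resulting Bragg wavelength.
lam_uv = 244e-9             # UV writing wavelength (m), assumed
theta = math.radians(13.5)  # half-angle between the two beams, assumed
n_eff = 1.447               # effective core index of the fiber, assumed

fringe_period = lam_uv / (2 * math.sin(theta))  # grating pitch Lambda
bragg = 2 * n_eff * fringe_period               # reflected wavelength
print(f"pitch = {fringe_period*1e9:.0f} nm, Bragg = {bragg*1e9:.0f} nm")
```

With these assumed numbers the grating reflects near the telecom band; changing θ (e.g. by moving the fiber along x in the described apparatus) tunes λ_B, which is the adjustability the method claims.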

  9. Effect of hemoglobin- and Perflubron-based oxygen carriers on common clinical laboratory tests.

    PubMed

    Ma, Z; Monk, T G; Goodnough, L T; McClellan, A; Gawryl, M; Clark, T; Moreira, P; Keipert, P E; Scott, M G

    1997-09-01

    Polymerized hemoglobin solutions (Hb-based oxygen carriers; HBOCs) and a second-generation perfluorocarbon (PFC) emulsion (Perflubron) are in clinical trials as temporary oxygen carriers ("blood substitutes"). Plasma and serum samples from patients receiving HBOCs look markedly red, whereas those from patients receiving PFC appear to be lipemic. Because hemolysis and lipemia are well-known interferents in many assays, we examined the effects of these substances on clinical chemistry, immunoassay, therapeutic drug, and coagulation tests. HBOC concentrations up to 50 g/L caused essentially no interference for Na, K, Cl, urea, total CO2, P, uric acid, Mg, creatinine, and glucose values determined by the Hitachi 747 or Vitros 750 analyzers (or both) or for immunoassays of lidocaine, N-acetylprocainamide, procainamide, digoxin, phenytoin, quinidine, or theophylline performed on the Abbott AxSym or TDx. Gentamicin and vancomycin assays on the AxSym exhibited a significant positive and negative interference, respectively. Immunoassays for TSH on the Abbott IMx and for troponin I on the Dade Stratus were unaffected by HBOC at this concentration. Tests for total protein, albumin, LDH, AST, ALT, GGT, amylase, lipase, and cholesterol were significantly affected to various extents at different HBOC concentrations on the Hitachi 747 and Vitros 750. The CK-MB assay on the Stratus exhibited a negative interference at 5 g/L HBOC. HBOC interference in coagulation tests was method-dependent: fibrometer-based methods on the BBL Fibro System were free from interference, whereas optical-based methods on the MLA 1000C exhibited interference at 20 g/L HBOC. A 1:20 dilution of the PFC-based oxygen carrier (600 g/L) caused no interference on any of these chemistry or immunoassay tests except for amylase and ammonia on the Vitros 750 and plasma iron on the Hitachi 747.

  10. Interference lithography for optical devices and coatings

    NASA Astrophysics Data System (ADS)

    Juhl, Abigail Therese

    Interference lithography can create large-area, defect-free nanostructures with unique optical properties. In this thesis, interference lithography will be utilized to create photonic crystals for functional devices or coatings. For instance, typical lithographic processing techniques were used to create 1, 2 and 3 dimensional photonic crystals in SU8 photoresist. These structures were in-filled with birefringent liquid crystal to make active devices, and the orientation of the liquid crystal directors within the SU8 matrix was studied. Most of this thesis will be focused on utilizing polymerization induced phase separation as a single-step method for fabrication by interference lithography. For example, layered polymer/nanoparticle composites have been created through the one-step two-beam interference lithographic exposure of a dispersion of 25 and 50 nm silica particles within a photopolymerizable mixture at a wavelength of 532 nm. In the areas of constructive interference, the monomer begins to polymerize via a free-radical process and concurrently the nanoparticles move into the regions of destructive interference. The holographic exposure of the particles within the monomer resin offers a single-step method to anisotropically structure the nanoconstituents within a composite. A one-step holographic exposure was also used to fabricate self-healing coatings that use water from the environment to catalyze polymerization. Polymerization induced phase separation was used to sequester an isocyanate monomer within an acrylate matrix. Due to the periodic modulation of the index of refraction between the monomer and polymer, the coating can reflect a desired wavelength, allowing for tunable coloration. When the coating is scratched, polymerization of the liquid isocyanate is catalyzed by moisture in air; if the indices of the two polymers are matched, the coatings turn transparent after healing. 
Interference lithography offers a method of creating multifunctional self-healing coatings that read out when damage has occurred.

  11. Classification of Clouds in Satellite Imagery Using Adaptive Fuzzy Sparse Representation.

    PubMed

    Jin, Wei; Gong, Fei; Zeng, Xingbin; Fu, Randi

    2016-12-16

    Automatic cloud detection and classification using satellite cloud imagery have various meteorological applications such as weather forecasting and climate monitoring. Cloud pattern analysis has recently become a research hotspot. Since satellites sense clouds remotely from space, and different cloud types often overlap and convert into each other, satellite cloud imagery inevitably contains fuzziness and uncertainty. Satellite observations are also susceptible to noise, and because traditional cloud classification methods are sensitive to noise and outliers, it is hard for them to achieve reliable results. To deal with these problems, a satellite cloud classification method using adaptive fuzzy sparse representation-based classification (AFSRC) is proposed. Firstly, by defining adaptive parameters related to attenuation rate and critical membership, an improved fuzzy membership is introduced to accommodate the fuzziness and uncertainty of satellite cloud imagery; secondly, by effectively combining the improved fuzzy membership function with sparse representation-based classification (SRC), the atoms in the training dictionary are optimized; finally, an adaptive fuzzy sparse representation classifier for cloud classification is proposed. Experimental results on FY-2G satellite cloud images show that the proposed method not only improves the accuracy of cloud classification, but also has strong stability and adaptability with high computational efficiency.

  12. Land use/cover classification in the Brazilian Amazon using satellite images.

    PubMed

    Lu, Dengsheng; Batistella, Mateus; Li, Guiying; Moran, Emilio; Hetrick, Scott; Freitas, Corina da Costa; Dutra, Luciano Vieira; Sant'anna, Sidnei João Siqueira

    2012-09-01

    Land use/cover classification is one of the most important applications in remote sensing. However, mapping accurate land use/cover spatial distribution is a challenge, particularly in moist tropical regions, due to the complex biophysical environment and limitations of remote sensing data per se. This paper reviews experiments related to land use/cover classification in the Brazilian Amazon for a decade. Through comprehensive analysis of the classification results, it is concluded that spatial information inherent in remote sensing data plays an essential role in improving land use/cover classification. Incorporation of suitable textural images into multispectral bands and use of segmentation-based methods are valuable ways to improve land use/cover classification, especially for high spatial resolution images. Data fusion of multi-resolution images within optical sensor data is vital for visual interpretation, but may not improve classification performance. In contrast, integration of optical and radar data did improve classification performance when the proper data fusion method was used. Of the classification algorithms available, the maximum likelihood classifier is still an important method for providing reasonably good accuracy, but nonparametric algorithms, such as classification tree analysis, have the potential to provide better results. However, they often require more time to achieve parametric optimization. Proper use of hierarchical-based methods is fundamental for developing accurate land use/cover classification, mainly from historical remotely sensed data.

  13. Land use/cover classification in the Brazilian Amazon using satellite images

    PubMed Central

    Lu, Dengsheng; Batistella, Mateus; Li, Guiying; Moran, Emilio; Hetrick, Scott; Freitas, Corina da Costa; Dutra, Luciano Vieira; Sant’Anna, Sidnei João Siqueira

    2013-01-01

    Land use/cover classification is one of the most important applications in remote sensing. However, mapping accurate land use/cover spatial distribution is a challenge, particularly in moist tropical regions, due to the complex biophysical environment and limitations of remote sensing data per se. This paper reviews experiments related to land use/cover classification in the Brazilian Amazon for a decade. Through comprehensive analysis of the classification results, it is concluded that spatial information inherent in remote sensing data plays an essential role in improving land use/cover classification. Incorporation of suitable textural images into multispectral bands and use of segmentation-based methods are valuable ways to improve land use/cover classification, especially for high spatial resolution images. Data fusion of multi-resolution images within optical sensor data is vital for visual interpretation, but may not improve classification performance. In contrast, integration of optical and radar data did improve classification performance when the proper data fusion method was used. Of the classification algorithms available, the maximum likelihood classifier is still an important method for providing reasonably good accuracy, but nonparametric algorithms, such as classification tree analysis, have the potential to provide better results. However, they often require more time to achieve parametric optimization. Proper use of hierarchical-based methods is fundamental for developing accurate land use/cover classification, mainly from historical remotely sensed data. PMID:24353353

  14. Young's double-slit interference with two-color biphotons.

    PubMed

    Zhang, De-Jian; Wu, Shuang; Li, Hong-Guo; Wang, Hai-Bo; Xiong, Jun; Wang, Kaige

    2017-12-12

    In classical optics, Young's double-slit experiment with colored coherent light gives rise to individual interference fringes for each light frequency, referring to single-photon interference. However, two-photon double-slit interference has been widely studied only for wavelength-degenerate biphotons, known as subwavelength quantum lithography. In this work, we report double-slit interference experiments with two-color biphotons. Different from the degenerate case, the experimental results depend on the measurement methods. From a two-axis coincidence measurement pattern we can extract complete interference information about the two colors. The conceptual model provides an intuitive picture of the in-phase and out-of-phase photon correlations and a complete quantum understanding of the which-path information of the two colored photons.

  15. Automated interference tools of the All-Russian Research Institute for Optical and Physical Measurements

    NASA Astrophysics Data System (ADS)

    Vishnyakov, G. N.; Levin, G. G.; Minaev, V. L.

    2017-09-01

    A review of advanced equipment for automated interference measurements developed at the All-Russian Research Institute for Optical and Physical Measurements is given. Three types of interference microscopes based on the Linnik, Twyman-Green, and Fizeau interferometers with the use of the phase stepping method are presented.
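The phase stepping method mentioned above recovers the optical phase from several interferogram frames captured at fixed phase increments of the reference arm. A minimal sketch of the standard four-step algorithm with π/2 steps (the institute's instruments may use a different variant or step count):

```python
import math

def four_step_phase(i1, i2, i3, i4):
    # Standard four-step phase-shifting algorithm: the four frames are
    # captured with the reference phase stepped by pi/2 between exposures,
    # so phi = atan2(I4 - I2, I1 - I3).
    return math.atan2(i4 - i2, i1 - i3)

# Synthetic fringes with background A, modulation B, and true phase phi:
A, B, phi = 10.0, 4.0, 0.7
frames = [A + B * math.cos(phi + k * math.pi / 2) for k in range(4)]
recovered = four_step_phase(*frames)
print(round(recovered, 6))  # recovers phi = 0.7 (modulo 2*pi)
```

The background A and modulation B cancel in the differences, which is why phase stepping is robust to uneven illumination across the field.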

  16. Couple Graph Based Label Propagation Method for Hyperspectral Remote Sensing Data Classification

    NASA Astrophysics Data System (ADS)

    Wang, X. P.; Hu, Y.; Chen, J.

    2018-04-01

    Graph-based semi-supervised classification methods are widely used for hyperspectral image classification. We present a couple-graph-based label propagation method that contains both an adjacency graph and a similar graph. We propose to construct the similar graph using a similarity probability, which exploits the label similarity among examples. The adjacency graph is built with a common manifold learning method, which effectively improves the classification accuracy of hyperspectral data. The experiments indicate that the couple graph Laplacian, which unites the adjacency graph and the similar graph, produces superior classification results compared with other manifold-learning-based and sparse-representation-based graph Laplacians in the label propagation framework.
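The label propagation framework referred to here iterates F ← αPF + (1−α)Y on a graph, so that label scores diffuse from labeled to unlabeled nodes while staying anchored to the seeds. A toy sketch on a four-node chain with a single generic graph (not the paper's couple-graph construction; the graph and labels are made up):

```python
# Toy label propagation: two labeled endpoints propagate class scores
# to the two unlabeled middle nodes of a 4-node chain graph.
W = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
# Row-normalize so each node averages its neighbors' scores.
P = [[w / sum(row) for w in row] for row in W]
Y = [[1, 0], [0, 0], [0, 0], [0, 1]]   # one-hot labels for nodes 0 and 3
F = [row[:] for row in Y]
alpha = 0.9                            # propagation vs. anchoring trade-off
for _ in range(200):
    F = [[alpha * sum(P[i][j] * F[j][c] for j in range(4))
          + (1 - alpha) * Y[i][c] for c in range(2)] for i in range(4)]
labels = [max(range(2), key=lambda c: F[i][c]) for i in range(4)]
print(labels)  # each unlabeled node takes the class of its nearer seed
```

A couple-graph variant would combine two such propagation matrices (adjacency-based and similarity-based) before iterating; the fixed point here is the solution of (I − αP)F = (1−α)Y.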

  17. Comparison of Single and Multi-Scale Method for Leaf and Wood Points Classification from Terrestrial Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Wei, Hongqiang; Zhou, Guiyun; Zhou, Junjie

    2018-04-01

    The classification of leaf and wood points is an essential preprocessing step for extracting inventory measurements and canopy characterization of trees from terrestrial laser scanning (TLS) data. The geometry-based approach is one of the most widely used classification methods. In the geometry-based method, it is common practice to extract salient features at one single scale before the features are used for classification. It remains unclear how the scale(s) used affect the classification accuracy and efficiency. To assess the scale effect on classification accuracy and efficiency, we extracted single-scale and multi-scale salient features from the point clouds of two oak trees of different sizes and classified the points into leaf and wood. Our experimental results show that the balanced accuracy of the multi-scale method is higher than the average balanced accuracy of the single-scale method by about 10% for both trees. The average speed-up ratio of the single-scale classifiers over the multi-scale classifier is higher than 30 for each tree.

  18. Simple adaptive sparse representation based classification schemes for EEG based brain-computer interface applications.

    PubMed

    Shin, Younghak; Lee, Seungchan; Ahn, Minkyu; Cho, Hohyun; Jun, Sung Chan; Lee, Heung-No

    2015-11-01

    One of the main problems related to electroencephalogram (EEG) based brain-computer interface (BCI) systems is the non-stationarity of the underlying EEG signals. This results in the deterioration of the classification performance during experimental sessions. Therefore, adaptive classification techniques are required for EEG based BCI applications. In this paper, we propose simple adaptive sparse representation based classification (SRC) schemes. Supervised and unsupervised dictionary update techniques for new test data and a dictionary modification method by using the incoherence measure of the training data are investigated. The proposed methods are very simple and additional computation for the re-training of the classifier is not needed. The proposed adaptive SRC schemes are evaluated using two BCI experimental datasets. The proposed methods are assessed by comparing classification results with the conventional SRC and other adaptive classification methods. On the basis of the results, we find that the proposed adaptive schemes show relatively improved classification accuracy as compared to conventional methods without requiring additional computation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. HEp-2 cell image classification method based on very deep convolutional networks with small datasets

    NASA Astrophysics Data System (ADS)

    Lu, Mengchi; Gao, Long; Guo, Xifeng; Liu, Qiang; Yin, Jianping

    2017-07-01

    Classification of Human Epithelial-2 (HEp-2) cell image staining patterns has been widely used to identify autoimmune diseases via the anti-nuclear antibodies (ANA) test in the Indirect Immunofluorescence (IIF) protocol. Because the manual test is time consuming, subjective, and labor intensive, image-based Computer Aided Diagnosis (CAD) systems for HEp-2 cell classification are being developed. However, recently proposed methods mostly rely on manually extracted features and achieve low accuracy. Besides, the available benchmark datasets are small in scale, which makes them poorly suited to deep learning methods; this directly limits the accuracy of cell classification even after data augmentation. To address these issues, this paper presents a high-accuracy automatic HEp-2 cell classification method for small datasets that utilizes very deep convolutional networks (VGGNet). Specifically, the proposed method consists of three main phases, namely image preprocessing, feature extraction, and classification. Moreover, an improved VGGNet is presented to address the challenges of small-scale datasets. Experimental results on two benchmark datasets demonstrate that the proposed method achieves superior accuracy compared with existing methods.

  20. Objects Classification by Learning-Based Visual Saliency Model and Convolutional Neural Network.

    PubMed

    Li, Na; Zhao, Xinbo; Yang, Yongjia; Zou, Xiaochun

    2016-01-01

    Humans can easily classify different kinds of objects, whereas this is quite difficult for computers. As a challenging open problem, object classification has been receiving extensive interest with broad prospects. Inspired by neuroscience, the concept of deep learning was proposed, and the convolutional neural network (CNN), as one deep learning method, can be used to solve classification problems. However, most deep learning methods, including CNNs, ignore the human visual information-processing mechanism at work when a person classifies objects. Therefore, in this paper, inspired by the complete process by which humans classify different kinds of objects, we put forward a new classification method that combines a visual attention model and a CNN. Firstly, we use the visual attention model to simulate the human visual selection mechanism. Secondly, we use the CNN to simulate how humans select features, extracting the local features of the selected areas. Finally, our classification method not only depends on those local features but also adds human semantic features to classify objects. Our classification method has clear advantages in terms of biological plausibility. Experimental results demonstrate that our method significantly improves classification efficiency.

  1. Spatial-spectral blood cell classification with microscopic hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Ran, Qiong; Chang, Lan; Li, Wei; Xu, Xiaofeng

    2017-10-01

    Microscopic hyperspectral images provide a new way for blood cell examination. The hyperspectral imagery can greatly facilitate the classification of different blood cells. In this paper, microscopic hyperspectral images are acquired by connecting a microscope to a hyperspectral imager and then tested for blood cell classification. For combined use of the spectral and spatial information provided by hyperspectral images, a spatial-spectral classification method is developed that improves on the classical extreme learning machine (ELM) by integrating spatial context into the image classification task with a Markov random field (MRF) model. Comparisons are made among the ELM, ELM-MRF, support vector machine (SVM), and SVM-MRF methods. Results show that the spatial-spectral classification methods (ELM-MRF, SVM-MRF) perform better than the pixel-based methods (ELM, SVM), and the proposed ELM-MRF has higher precision and shows more accurate localization of cells.

  2. Proactive Interference Does Not Meaningfully Distort Visual Working Memory Capacity Estimates in the Canonical Change Detection Task

    PubMed Central

    Lin, Po-Han; Luck, Steven J.

    2012-01-01

    The change detection task has become a standard method for estimating the storage capacity of visual working memory. Most researchers assume that this task isolates the properties of an active short-term storage system that can be dissociated from long-term memory systems. However, long-term memory storage may influence performance on this task. In particular, memory traces from previous trials may create proactive interference that sometimes leads to errors, thereby reducing estimated capacity. Consequently, the capacity of visual working memory may be higher than is usually thought, and correlations between capacity and other measures of cognition may reflect individual differences in proactive interference rather than individual differences in the capacity of the short-term storage system. Indeed, previous research has shown that change detection performance can be influenced by proactive interference under some conditions. The purpose of the present study was to determine whether the canonical version of the change detection task – in which the to-be-remembered information consists of simple, briefly presented features – is influenced by proactive interference. Two experiments were conducted using methods that ordinarily produce substantial evidence of proactive interference, but no proactive interference was observed. Thus, the canonical version of the change detection task can be used to assess visual working memory capacity with no meaningful influence of proactive interference. PMID:22403556

  3. Proactive interference does not meaningfully distort visual working memory capacity estimates in the canonical change detection task.

    PubMed

    Lin, Po-Han; Luck, Steven J

    2012-01-01

    The change detection task has become a standard method for estimating the storage capacity of visual working memory. Most researchers assume that this task isolates the properties of an active short-term storage system that can be dissociated from long-term memory systems. However, long-term memory storage may influence performance on this task. In particular, memory traces from previous trials may create proactive interference that sometimes leads to errors, thereby reducing estimated capacity. Consequently, the capacity of visual working memory may be higher than is usually thought, and correlations between capacity and other measures of cognition may reflect individual differences in proactive interference rather than individual differences in the capacity of the short-term storage system. Indeed, previous research has shown that change detection performance can be influenced by proactive interference under some conditions. The purpose of the present study was to determine whether the canonical version of the change detection task - in which the to-be-remembered information consists of simple, briefly presented features - is influenced by proactive interference. Two experiments were conducted using methods that ordinarily produce substantial evidence of proactive interference, but no proactive interference was observed. Thus, the canonical version of the change detection task can be used to assess visual working memory capacity with no meaningful influence of proactive interference.
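Capacity in the change detection task is conventionally estimated with a formula of the Cowan's K family, K = N(H − FA), where N is the set size, H the hit rate, and FA the false alarm rate (this particular form is appropriate for single-probe designs; whole-display designs use a slightly different correction). A small sketch with hypothetical numbers:

```python
def cowan_k(set_size, hit_rate, false_alarm_rate):
    # Cowan's K for single-probe change detection:
    # K = N * (H - FA), capped at the set size.
    return min(set_size * (hit_rate - false_alarm_rate), set_size)

# Hypothetical subject: set size 6, 80% hits, 15% false alarms.
print(cowan_k(6, 0.80, 0.15))  # approximately 3.9 items
```

The false-alarm correction is what makes K robust to guessing; proactive interference, if present, would depress H or inflate FA and thus lower the estimated K, which is the distortion the study tests for.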

  4. High-chroma visual cryptography using interference color of high-order retarder films

    NASA Astrophysics Data System (ADS)

    Sugawara, Shiori; Harada, Kenji; Sakai, Daisuke

    2015-08-01

    Visual cryptography can be used as a method of sharing a secret image through several encrypted images. Conventional visual cryptography can display only monochrome images. We have developed a high-chroma color visual encryption technique using the interference color of high-order retarder films. The encrypted films are composed of a polarizing film and retarder films. The retarder films exhibit interference color when they are sandwiched between two polarizing films. We propose a stacking technique for displaying high-chroma interference color images. A prototype visual cryptography device using high-chroma interference color is developed.

  5. Public access defibrillation: Suppression of 16.7 Hz interference generated by the power supply of the railway systems

    PubMed Central

    Christov, Ivaylo I; Iliev, Georgi L

    2005-01-01

    Background: A specific problem in using public access defibrillators (PADs) arises at railway stations. Some countries, such as Germany, Austria, Switzerland, Norway, and Sweden, use an AC railroad power-supply system with a rated frequency of 16.7 Hz, modulated from 15.69 Hz to 17.36 Hz. The power supply frequency contaminates the electrocardiogram (ECG) and is difficult to suppress or eliminate because it considerably overlaps the frequency spectrum of the ECG. The interference impedes the automated decision of the PADs whether a patient should (or should not) be shocked. The aim of this study is the suppression of the 16.7 Hz interference generated by the power supply of railway systems. Methods: A software solution using an adaptive filtering method was proposed for 16.7 Hz interference suppression. Optimal performance of the filter is achieved by embedding a reference channel in the PADs to record the interference. The method was tested with ECGs from the AHA database. Results: The method was tested with patients with normal sinus rhythm, symptoms of tachycardia, and ventricular fibrillation. Simulated interference with frequency modulation from 15.69 Hz to 17.36 Hz, changing at a rate of 2% per second, was added to the ECGs, which were then processed by the suggested adaptive filtering. The method totally suppresses the noise with no visible distortion of the original signals. Conclusion: The proposed adaptive filter for suppressing noise generated by the power supply of railway systems has a simple structure requiring a low level of computational resources, but it requires a good reference signal. PMID:15766390
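Reference-channel adaptive filtering of this kind is commonly realized as an LMS noise canceller: an FIR filter learns to map the reference (interference-only) channel onto the interference component of the primary channel, and the prediction error is the cleaned ECG. A simplified sketch, not the authors' exact filter (the sampling rate, signals, and filter parameters are illustrative assumptions):

```python
import math

def lms_cancel(primary, reference, mu=0.005, taps=8):
    # LMS adaptive noise canceller: w is adapted so that the filtered
    # reference y tracks the interference in the primary channel; the
    # error e = primary - y is the cleaned output signal.
    w = [0.0] * taps
    out = []
    for n in range(len(primary)):
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        y = sum(wi * xi for wi, xi in zip(w, x))
        e = primary[n] - y
        w = [wi + 2 * mu * e * xi for wi, xi in zip(w, x)]
        out.append(e)
    return out

fs = 500.0   # sampling rate in Hz, assumed
n = 4000
sig = [0.5 * math.sin(2 * math.pi * 1.2 * t / fs) for t in range(n)]  # stand-in "ECG"
hum = [math.sin(2 * math.pi * 16.7 * t / fs) for t in range(n)]       # 16.7 Hz interference
clean = lms_cancel([s + v for s, v in zip(sig, hum)], hum)
# After convergence the residual interference is small:
tail_err = max(abs(c - s) for c, s in zip(clean[-500:], sig[-500:]))
print(tail_err)
```

Because the filter tracks the reference adaptively, it follows the 15.69-17.36 Hz frequency modulation that a fixed notch filter could not, which is why a good reference signal is the key requirement the conclusion names.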

  6. Direct Evidence of Acetaminophen Interference with Subcutaneous Glucose Sensing in Humans: A Pilot Study

    PubMed Central

    Basu, Ananda; Veettil, Sona; Dyer, Roy; Peyser, Thomas

    2016-01-01

    Background: Recent advances in accuracy and reliability of continuous glucose monitoring (CGM) devices have focused renewed interest on the use of such technology for therapeutic dosing of insulin without the need for independent confirmatory blood glucose meter measurements. An important issue that remains is the susceptibility of CGM devices to erroneous readings in the presence of common pharmacologic interferences. We report on a new method of assessing CGM sensor error to pharmacologic interferences using the example of oral administration of acetaminophen. Materials and Methods: We examined the responses of several different Food and Drug Administration–approved and commercially available CGM systems (Dexcom [San Diego, CA] Seven® Plus™, Medtronic Diabetes [Northridge, CA] Guardian®, and Dexcom G4® Platinum) to oral acetaminophen in 10 healthy volunteers without diabetes. Microdialysis catheters were placed in the abdominal subcutaneous tissue. Blood and microdialysate samples were collected periodically and analyzed for glucose and acetaminophen concentrations before and after oral ingestion of 1 g of acetaminophen. We compared the response of CGM sensors with the measured acetaminophen concentrations in the blood and interstitial fluid. Results: Although plasma glucose concentrations remained constant at approximately 90 mg/dL (approximately 5 mM) throughout the study, CGM glucose measurements varied between approximately 85 to 400 mg/dL (from approximately 5 to 22 mM) due to interference from the acetaminophen. The temporal profile of CGM interference followed acetaminophen concentrations measured in interstitial fluid (ISF). Conclusions: This is the first direct measurement of ISF concentrations of putative CGM interferences with simultaneous measurements of CGM performance in the presence of the interferences.
The observed interference with glucose measurements in the tested CGM devices coincided temporally with appearance of acetaminophen in the ISF. The method applied here can be used to determine the susceptibility of current and future CGM systems to interference from acetaminophen or other exogenous pharmacologic agents. PMID:26784129

  7. Image Classification Using Biomimetic Pattern Recognition with Convolutional Neural Networks Features

    PubMed Central

    Huo, Guanying

    2017-01-01

    As a typical deep-learning model, Convolutional Neural Networks (CNNs) can automatically extract features from images using a hierarchical structure inspired by the mammalian visual system. For image classification tasks, traditional CNN models employ the softmax function for classification. However, owing to the limited capacity of the softmax function, traditional CNN models have some shortcomings in image classification. To deal with this problem, a new method combining Biomimetic Pattern Recognition (BPR) with CNNs is proposed for image classification. BPR performs class recognition by a union of geometrical cover sets in a high-dimensional feature space and can therefore overcome some disadvantages of traditional pattern recognition. The proposed method is evaluated on three well-known image classification benchmarks: MNIST, AR, and CIFAR-10. The classification accuracies of the proposed method on the three datasets are 99.01%, 98.40%, and 87.11%, respectively, which are much higher than those of the four compared methods in most cases. PMID:28316614
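The cover-set idea behind BPR can be illustrated with a minimal sketch (not the authors' exact construction): each class is covered by a union of hyper-spheres around its training samples, and a test point is assigned to the class whose cover it penetrates most deeply. Random Gaussian clusters stand in for CNN features here, and the nearest-neighbour radius rule is a simplifying assumption.

```python
import numpy as np

def fit_cover(X, radius_scale=1.0):
    """Cover a class's training samples with hyper-spheres.

    Each training sample becomes a sphere centre; the radius is the
    distance to its nearest same-class neighbour (scaled), so the union
    of spheres approximates the class region (the cover-set idea)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    radii = radius_scale * d.min(axis=1)
    return X, radii

def predict(covers, x):
    """Assign x to the class whose cover it penetrates most deeply,
    i.e. smallest (distance-to-centre minus radius)."""
    best_label, best_margin = None, np.inf
    for label, (centres, radii) in covers.items():
        margin = (np.linalg.norm(centres - x, axis=1) - radii).min()
        if margin < best_margin:
            best_label, best_margin = label, margin
    return best_label

rng = np.random.default_rng(0)
# two synthetic clusters standing in for CNN feature vectors
X0 = rng.normal(loc=0.0, scale=0.3, size=(20, 8))
X1 = rng.normal(loc=2.0, scale=0.3, size=(20, 8))
covers = {0: fit_cover(X0), 1: fit_cover(X1)}
print(predict(covers, np.full(8, 0.1)))   # query near class-0 cluster
print(predict(covers, np.full(8, 1.9)))   # query near class-1 cluster
```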

  8. Toward optimal feature and time segment selection by divergence method for EEG signals classification.

    PubMed

    Wang, Jie; Feng, Zuren; Lu, Na; Luo, Jing

    2018-06-01

    Feature selection plays an important role in EEG-signal-based motor imagery pattern classification. It is a process that aims to select an optimal feature subset from the original set. Two significant advantages are involved: lowering the computational burden so as to speed up the learning procedure, and removing redundant and irrelevant features so as to improve classification performance. Therefore, feature selection is widely employed in the classification of EEG signals in practical brain-computer interface systems. In this paper, we present a novel statistical model that selects the optimal feature subset based on the Kullback-Leibler divergence measure and automatically selects the optimal subject-specific time segment. The proposed method comprises four successive stages: broad frequency band filtering and common spatial pattern enhancement as preprocessing; feature extraction by autoregressive modeling and log-variance; Kullback-Leibler divergence based optimal feature and time segment selection; and linear discriminant analysis classification. More importantly, this paper provides a potential framework for combining other feature extraction models and classification algorithms with the proposed method for EEG signal classification. Experiments on single-trial EEG signals from two public competition datasets not only demonstrate that the proposed method is effective in selecting discriminative features and time segments, but also show that it yields relatively better classification results in comparison with other competitive methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
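As one hedged illustration of divergence-based feature selection (not the paper's exact model), features can be ranked by the symmetric Kullback-Leibler divergence between per-feature Gaussian fits of the two classes; larger divergence means a more discriminative feature.

```python
import numpy as np

def gauss_kl(m0, s0, m1, s1):
    """KL divergence KL(N(m0, s0^2) || N(m1, s1^2)) between 1-D Gaussians."""
    return np.log(s1 / s0) + (s0**2 + (m0 - m1)**2) / (2 * s1**2) - 0.5

def kl_rank_features(Xa, Xb):
    """Rank features by symmetric KL divergence between the two classes'
    per-feature Gaussian fits; most discriminative feature first."""
    ma, sa = Xa.mean(0), Xa.std(0) + 1e-12
    mb, sb = Xb.mean(0), Xb.std(0) + 1e-12
    d = gauss_kl(ma, sa, mb, sb) + gauss_kl(mb, sb, ma, sa)
    return np.argsort(d)[::-1]

rng = np.random.default_rng(1)
Xa = rng.normal(0, 1, size=(100, 5))   # class A: 100 trials, 5 features
Xb = rng.normal(0, 1, size=(100, 5))   # class B
Xb[:, 2] += 3.0                        # make feature 2 discriminative
order = kl_rank_features(Xa, Xb)
print(order[0])                        # feature 2 ranks first
```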

  9. Remote sensing imagery classification using multi-objective gravitational search algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Aizhu; Sun, Genyun; Wang, Zhenjie

    2016-10-01

    Simultaneous optimization of different validity measures can capture different data characteristics of remote sensing imagery (RSI) and thereby achieve high-quality classification results. In this paper, two conflicting cluster validity indices, the Xie-Beni (XB) index and the fuzzy C-means (FCM) Jm measure, are integrated with a diversity-enhanced and memory-based multi-objective gravitational search algorithm (DMMOGSA) to present a novel multi-objective optimization based RSI classification method. In this method, the Gabor filter method is first applied to extract texture features of the RSI. The texture features are then combined with the spectral features to construct the spatial-spectral feature set of the RSI. Afterwards, clustering of the spatial-spectral feature set is carried out: cluster centers are randomly generated initially, and then updated and optimized adaptively by the DMMOGSA. Accordingly, a set of non-dominated cluster centers is obtained, producing a number of classification results from which users can pick the most promising one according to their problem requirements. To validate the effectiveness of the proposed method quantitatively and qualitatively, it was applied to classify two aerial high-resolution remote sensing images. The obtained classification results are compared with those produced by two single-validity-index-based methods and two state-of-the-art multi-objective optimization based classification methods. The comparison shows that the proposed method achieves more accurate RSI classification.
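The two conflicting validity indices can be computed as follows. This is a generic sketch of the XB and Jm measures evaluated on a toy fuzzy clustering, not the DMMOGSA optimizer itself; the membership construction below is an illustrative assumption.

```python
import numpy as np

def jm_index(X, V, U, m=2.0):
    """Fuzzy C-means objective Jm: membership-weighted within-cluster
    scatter (smaller = more compact clusters)."""
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)   # n x c squared dists
    return ((U ** m) * d2).sum()

def xb_index(X, V, U, m=2.0):
    """Xie-Beni index: compactness divided by separation
    (smaller = compact AND well-separated clusters)."""
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)
    sep = ((V[:, None, :] - V[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(sep, np.inf)                         # ignore self-distance
    return ((U ** m) * d2).sum() / (len(X) * sep.min())

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.2, (30, 2)), rng.normal(3, 0.2, (30, 2))])
V = np.array([[0.0, 0.0], [3.0, 3.0]])                    # two cluster centers
# soft memberships derived from distances (illustrative, not FCM updates)
d = np.linalg.norm(X[:, None] - V[None], axis=-1)
U = np.exp(-5 * d) / np.exp(-5 * d).sum(1, keepdims=True)
print(jm_index(X, V, U), xb_index(X, V, U))
```

A multi-objective optimizer would treat (Jm, XB) as the two objectives and return the non-dominated set of center configurations.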

  10. Molecular cancer classification using a meta-sample-based regularized robust coding method.

    PubMed

    Wang, Shu-Lin; Sun, Liuchao; Fang, Jianwen

    2014-01-01

    Previous studies have demonstrated that machine learning based molecular cancer classification using gene expression profiling (GEP) data is promising for the clinical diagnosis and treatment of cancer. Novel classification methods with high efficiency and prediction accuracy are still needed to deal with the high dimensionality and small sample size of typical GEP data. Recently the sparse representation (SR) method has been successfully applied to cancer classification. Nevertheless, its efficiency needs to be improved when analyzing large-scale GEP data. In this paper we present meta-sample-based regularized robust coding classification (MRRCC), a novel and effective cancer classification technique that combines the idea of the meta-sample-based clustering method with the regularized robust coding (RRC) method. It assumes that the coding residual and the coding coefficient are respectively independent and identically distributed. Similar to meta-sample-based SR classification (MSRC), MRRCC extracts a set of meta-samples from the training samples and then encodes a testing sample as a sparse linear combination of these meta-samples. The representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Extensive experiments on publicly available GEP datasets demonstrate that the proposed method is more efficient, while its prediction accuracy is equivalent to existing MSRC-based methods and better than other state-of-the-art dimension reduction based methods.
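A minimal sketch of the meta-sample coding step: meta-samples are taken as the top left singular vectors of each class's training matrix, and a test sample is encoded over each class's meta-samples with a plain ridge-regularised l2 coding (a simplification of RRC, not the authors' exact estimator); the class with the smallest reconstruction residual wins.

```python
import numpy as np

def meta_samples(X, k=3):
    """Extract k meta-samples (left singular vectors) from a class's
    training matrix X (genes x samples)."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :k]

def classify(metas, y, lam=0.1):
    """Ridge-regularised coding: encode y over each class's meta-samples
    and pick the class with the smallest l2 reconstruction residual."""
    best, best_res = None, np.inf
    for label, M in metas.items():
        a = np.linalg.solve(M.T @ M + lam * np.eye(M.shape[1]), M.T @ y)
        res = np.linalg.norm(y - M @ a)
        if res < best_res:
            best, best_res = label, res
    return best

rng = np.random.default_rng(3)
basis0 = rng.normal(size=(50, 3)); basis1 = rng.normal(size=(50, 3))
X0 = basis0 @ rng.normal(size=(3, 20))   # class 0 lives in basis0's span
X1 = basis1 @ rng.normal(size=(3, 20))   # class 1 lives in basis1's span
metas = {0: meta_samples(X0), 1: meta_samples(X1)}
y = basis0 @ rng.normal(size=3)          # a new class-0 expression profile
print(classify(metas, y))
```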

  11. A spectrum fractal feature classification algorithm for agriculture crops with hyper spectrum image

    NASA Astrophysics Data System (ADS)

    Su, Junying

    2011-11-01

    A fractal dimension feature analysis method in the spectral domain is proposed for agricultural crop classification with hyperspectral images. Firstly, a fractal dimension calculation algorithm in the spectral domain is presented, together with a fast fractal dimension calculation algorithm using the step measurement method. Secondly, the hyperspectral image classification algorithm and flowchart based on fractal dimension feature analysis in the spectral domain are presented. Finally, experimental results for agricultural crop classification on the FCL1 hyperspectral image set are given for the proposed method and for SAM (spectral angle mapper). The experiments show that the proposed method obtains better classification results than traditional SAM feature analysis, because it makes fuller use of the spectral information in the hyperspectral image to realize precise agricultural crop classification.
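The step ("divider") measurement of fractal dimension on a sampled spectral curve can be sketched as follows: measure the polyline length of the curve at several step sizes, fit log(length) against log(step), and take D = 1 - slope. The step sizes and the two test curves below are illustrative assumptions, not the paper's data.

```python
import numpy as np

def step_fractal_dimension(y, steps=(1, 2, 4, 8, 16)):
    """Divider ('step measurement') estimate of the fractal dimension of
    a sampled curve y: polyline length at several step sizes, then
    D = 1 - slope of log(length) vs log(step)."""
    lengths = []
    for k in steps:
        idx = np.arange(0, len(y), k)
        seg = np.sqrt(np.diff(idx).astype(float) ** 2 + np.diff(y[idx]) ** 2)
        lengths.append(seg.sum())
    slope = np.polyfit(np.log(steps), np.log(lengths), 1)[0]
    return 1.0 - slope

t = np.linspace(0, 1, 1025)
smooth = 100 * t                               # a smooth "spectrum": D ~ 1
rough = smooth + np.cumsum(np.random.default_rng(4).normal(0, 5, t.size))
print(step_fractal_dimension(smooth))          # close to 1.0
print(step_fractal_dimension(rough))           # larger than the smooth curve's
```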

  12. Engineering calculations for communications satellite systems planning

    NASA Technical Reports Server (NTRS)

    Walton, E.; Aebker, E.; Mata, F.; Reilly, C.

    1991-01-01

    The final phase of a satellite synthesis project is described. Several methods for generating satellite positionings with improved aggregate carrier-to-interference (C/I) characteristics were studied. Two general methods for modifying required separation values are presented, along with two methods for improving the aggregate C/I performance of given satellite synthesis solutions. A perturbation of the World Administrative Radio Conference (WARC) synthesis is presented.

  13. Rapid method for protein quantitation by Bradford assay after elimination of the interference of polysorbate 80.

    PubMed

    Cheng, Yongfeng; Wei, Haiming; Sun, Rui; Tian, Zhigang; Zheng, Xiaodong

    2016-02-01

    The Bradford assay is one of the most common methods for measuring protein concentrations. However, some pharmaceutical excipients, such as detergents, interfere with the Bradford assay even at low concentrations. Protein precipitation can be used to overcome sample incompatibility with protein quantitation, but the rate of protein recovery after acetone precipitation is only about 70%. In this study, we found that sucrose not only increased the rate of protein recovery after 1 h acetone precipitation, but also did not interfere with the Bradford assay. We therefore developed a method for rapid protein quantitation in protein drugs even when they contain interfering substances. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Peculiarities of studying an isolated neuron by the method of laser interference microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yusipovich, Alexander I; Kazakova, Tatiana A; Erokhova, Liudmila A

    2006-09-30

    Actual aspects of using a new method of laser interference microscopy (LIM) for studying nerve cells are discussed. The peculiarities of the LIM display of neurons are demonstrated using the example of isolated neurons of the pond snail Lymnaea stagnalis. A comparative analysis of the images of the cell and subcellular structures of a neuron obtained by the methods of interference microscopy, optical transmission microscopy, and confocal microscopy is performed. Various aspects of the application of LIM for studying the lateral dimensions and internal structure of the cytoplasm and organelles of a neuron in cytology and cell physiology are discussed.

  15. Nonlinear transonic Wall-Interference Assessment/Correction (WIAC) procedures and application to cast-10 airfoil results from the NASA 0.3-m TCT 8- by 24-inch Slotted Wall Test Section (SWTS)

    NASA Technical Reports Server (NTRS)

    Gumbert, Clyde R.; Green, Lawrence L.; Newman, Perry A.

    1989-01-01

    From the time that wind tunnel wall interference was recognized to be significant, researchers have been developing methods to alleviate or account for it. Despite the best efforts so far, no available method completely eliminates the effects of the wind tunnel walls. This report discusses procedures developed for the slotted wall and adaptive wall test sections of the Langley 0.3-m Transonic Cryogenic Tunnel (TCT) to assess and correct for residual interference by methods consistent with the transonic nature of the tests.

  16. Abstracting of suspected illegal land use in urban areas using case-based classification of remote sensing images

    NASA Astrophysics Data System (ADS)

    Chen, Fulong; Wang, Chao; Yang, Chengyun; Zhang, Hong; Wu, Fan; Lin, Wenjuan; Zhang, Bo

    2008-11-01

    This paper proposes a method that uses case-based classification of remote sensing images and applies it to abstract information on suspected illegal land use in urban areas. By using discrete cases for imagery classification, the proposed method deals with the oscillation of spectrum or backscatter within the same land-use category; it not only overcomes a deficiency of maximum likelihood classification (the prior probability of land use cannot be obtained) but also inherits the advantages of knowledge-based classification systems, such as artificial intelligence and automatic operation, and consequently classifies better. The researchers then used an object-oriented technique for shadow removal in highly dense city zones. With multi-temporal SPOT 5 images at 2.5×2.5 m resolution, the researchers found that the method can abstract suspected illegal land use information in urban areas using a post-classification comparison technique.

  17. Ensemble of sparse classifiers for high-dimensional biological data.

    PubMed

    Kim, Sunghan; Scalzo, Fabien; Telesca, Donatello; Hu, Xiao

    2015-01-01

    Biological data are often high in dimension while the number of samples is small. In such cases, classification performance can be improved by reducing the dimension of the data, which is referred to as feature selection. Recently, a novel feature selection method was proposed that utilises the sparsity of high-dimensional biological data, where a small subset of features accounts for most of the variance of the dataset. In this study we propose a new classification method for high-dimensional biological data, which performs both feature selection and classification within a single framework. Our proposed method utilises a sparse linear solution technique and the bootstrap aggregating algorithm. We tested its performance on four public mass spectrometry cancer datasets, along with two conventional classification techniques, Support Vector Machines and Adaptive Boosting. The results demonstrate that our proposed method performs more accurate classification across the cancer datasets than those conventional classification techniques.

  18. Objected-oriented remote sensing image classification method based on geographic ontology model

    NASA Astrophysics Data System (ADS)

    Chu, Z.; Liu, Z. J.; Gu, H. Y.

    2016-11-01

    Nowadays, with the development of high-resolution remote sensing imagery and the wide application of laser point cloud data, object-oriented remote sensing classification based on the characteristic knowledge of multi-source spatial data has become an important trend in the field of remote sensing image classification, gradually replacing the traditional approach of optimizing classification results through improved algorithms. To this end, the paper puts forward a remote sensing image classification method that uses the characteristic knowledge of multi-source spatial data to build a geographic ontology semantic network model and carries out an object-oriented classification experiment on urban features. The experiment uses the Protégé software developed by Stanford University and the intelligent image analysis software eCognition as the platform, and uses hyperspectral imagery and LiDAR data acquired by flight over Dafeng City, Jiangsu, as the main data source. First, the hyperspectral image is used to obtain feature knowledge of the remote sensing image and related spectral indices; second, the LiDAR data are used to generate an nDSM (normalized Digital Surface Model) to obtain elevation information; last, the image feature knowledge, spectral indices, and elevation information are combined to build the geographic ontology semantic network model that implements urban feature classification. The experimental results show that this method achieves significantly higher classification accuracy than traditional classification algorithms, especially for building classification. 
    The method not only exploits the advantages of multi-source spatial data, for example remote sensing imagery and LiDAR data, but also integrates knowledge from multi-source spatial data and applies that knowledge to remote sensing image classification, which provides an effective way for object-oriented remote sensing image classification in the future.

  19. Public access defibrillation: suppression of 16.7 Hz interference generated by the power supply of the railway systems.

    PubMed

    Christov, Ivaylo I; Iliev, Georgi L

    2005-03-15

    A specific problem in using public access defibrillators (PADs) arises at railway stations. Some countries, such as Germany, Austria, Switzerland, Norway, and Sweden, use an AC railroad power-supply system with a rated frequency of 16.7 Hz, frequency modulated from 15.69 Hz to 17.36 Hz. The power supply frequency contaminates the electrocardiogram (ECG) and is difficult to suppress or eliminate because it considerably overlaps the frequency spectrum of the ECG. The interference impedes the PADs' automated decision of whether a patient should (or should not) be shocked. The aim of this study is the suppression of the 16.7 Hz interference generated by the power supply of railway systems. A software solution using an adaptive filtering method was proposed for 16.7 Hz interference suppression. Optimal performance of the filter is achieved by embedding a reference channel in the PADs to record the interference. The method was tested on ECGs from the AHA database, from patients with normal sinus rhythm, symptoms of tachycardia, and ventricular fibrillation. Simulated interference with frequency modulation from 15.69 Hz to 17.36 Hz, changing at a rate of 2% per second, was added to the ECGs, which were then processed by the suggested adaptive filtering. The method totally suppresses the noise with no visible distortion of the original signals. The proposed adaptive filter for suppressing noise generated by the power supply of railway systems has a simple structure requiring a low level of computational resources, but it does require a good reference signal.
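The reference-channel adaptive filtering described above can be sketched with a standard LMS noise canceller: the filter shapes the reference (pure interference) channel to match the interference embedded in the primary channel, and the error output is the cleaned ECG. The sampling rate, tap count, step size, and signal models below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def lms_cancel(primary, reference, taps=8, mu=0.01):
    """Adaptive noise cancellation with an LMS filter: the filtered
    reference estimates the interference; the error is the clean signal."""
    w = np.zeros(taps)
    out = np.zeros_like(primary)
    for n in range(taps, len(primary)):
        x = reference[n - taps:n][::-1]   # most recent reference samples
        y = w @ x                         # interference estimate
        e = primary[n] - y                # cleaned sample
        w += 2 * mu * e * x               # LMS weight update
        out[n] = e
    return out

fs = 500.0                                # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t)         # crude stand-in for an ECG
f_int = 16.7 + 0.8 * np.sin(2 * np.pi * 0.1 * t)   # FM'd 16.7 Hz interference
phase = 2 * np.pi * np.cumsum(f_int) / fs
primary = ecg + 0.5 * np.sin(phase)       # contaminated ECG channel
reference = np.sin(phase + 0.3)           # reference channel pickup
clean = lms_cancel(primary, reference)
# after convergence, residual interference is far below the raw contamination
```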

  20. Excitonic quantum interference in a quantum dot chain with rings.

    PubMed

    Hong, Suc-Kyoung; Nam, Seog Woo; Yeon, Kyu-Hwang

    2008-04-16

    We demonstrate excitonic quantum interference in a closely spaced quantum dot chain with nanorings. In the resonant dipole-dipole interaction model with direct diagonalization method, we have found a peculiar feature that the excitation of specified quantum dots in the chain is completely inhibited, depending on the orientational configuration of the transition dipole moments and specified initial preparation of the excitation. In practice, these excited states facilitating quantum interference can provide a conceptual basis for quantum interference devices of excitonic hopping.

  1. Interference detection and correction applied to incoherent-scatter radar power spectrum measurement

    NASA Technical Reports Server (NTRS)

    Ying, W. P.; Mathews, J. D.; Rastogi, P. K.

    1986-01-01

    A median filter based interference detection and correction technique is evaluated, and its application to Arecibo incoherent-scatter radar D-region ionospheric power spectra is discussed. The method can be extended to other kinds of data as long as the underlying statistical assumptions remain valid.
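A minimal sketch of median-filter based interference detection and correction on a power spectrum: samples that deviate from the local running median by much more than the typical deviation are flagged as interference and replaced by the median. The window size and threshold are illustrative assumptions.

```python
import numpy as np

def median_despike(spec, win=5, thresh=4.0):
    """Flag samples deviating from the local running median by more than
    `thresh` times the median absolute deviation, and replace them with
    the local median."""
    pad = win // 2
    padded = np.pad(spec, pad, mode='edge')
    med = np.array([np.median(padded[i:i + win]) for i in range(len(spec))])
    mad = np.median(np.abs(spec - med)) + 1e-12   # robust deviation scale
    bad = np.abs(spec - med) > thresh * mad
    out = spec.copy()
    out[bad] = med[bad]
    return out, bad

rng = np.random.default_rng(6)
spectrum = 10 + rng.normal(0, 0.5, 256)   # smooth spectrum plus noise
spectrum[[40, 41, 180]] += 25             # narrow interference spikes
clean, flagged = median_despike(spectrum)
print(flagged.sum())
```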

  2. A chemometric method for correcting FTIR spectra of biomaterials for interference from water in KBr discs

    USDA-ARS?s Scientific Manuscript database

    FTIR analysis of solid biomaterials by the familiar KBr disc technique is very often frustrated by water interference in the important protein (amide I) and carbohydrate (hydroxyl) regions of their spectra. A method was therefore devised that overcomes the difficulty and measures FTIR spectra of so...

  3. Multi-label literature classification based on the Gene Ontology graph.

    PubMed

    Jin, Bo; Muller, Brian; Zhai, Chengxiang; Lu, Xinghua

    2008-12-08

    The Gene Ontology is a controlled vocabulary for representing knowledge related to genes and proteins in a computable form. The current effort of manually annotating proteins with the Gene Ontology is outpaced by the rate of accumulation of biomedical knowledge in literature, which urges the development of text mining approaches to facilitate the process by automatically extracting the Gene Ontology annotation from literature. The task is usually cast as a text classification problem, and contemporary methods are confronted with unbalanced training data and the difficulties associated with multi-label classification. In this research, we investigated the methods of enhancing automatic multi-label classification of biomedical literature by utilizing the structure of the Gene Ontology graph. We have studied three graph-based multi-label classification algorithms, including a novel stochastic algorithm and two top-down hierarchical classification methods for multi-label literature classification. We systematically evaluated and compared these graph-based classification algorithms to a conventional flat multi-label algorithm. The results indicate that, through utilizing the information from the structure of the Gene Ontology graph, the graph-based multi-label classification methods can significantly improve predictions of the Gene Ontology terms implied by the analyzed text. Furthermore, the graph-based multi-label classifiers are capable of suggesting Gene Ontology annotations (to curators) that are closely related to the true annotations even if they fail to predict the true ones directly. A software package implementing the studied algorithms is available for the research community. Through utilizing the information from the structure of the Gene Ontology graph, the graph-based multi-label classification methods have better potential than the conventional flat multi-label classification approach to facilitate protein annotation based on the literature.

  4. Classification of Parkinson's disease utilizing multi-edit nearest-neighbor and ensemble learning algorithms with speech samples.

    PubMed

    Zhang, He-Hua; Yang, Liuyang; Liu, Yuchuan; Wang, Pin; Yin, Jun; Li, Yongming; Qiu, Mingguo; Zhu, Xueru; Yan, Fang

    2016-11-16

    The use of speech-based data in the classification of Parkinson's disease (PD) has been shown in recent years to provide an effective, non-invasive mode of classification. Thus, there has been increased interest in speech pattern analysis methods applicable to Parkinsonism for building predictive tele-diagnosis and tele-monitoring models. One of the obstacles in optimizing classification is reducing noise within the collected speech samples, thus ensuring better classification accuracy and stability. While the currently used methods are effective, the ability to invoke instance selection has seldom been examined. In this study, a PD classification algorithm was proposed and examined that combines a multi-edit nearest-neighbor (MENN) algorithm and an ensemble learning algorithm. First, the MENN algorithm is applied iteratively to select optimal training speech samples, thereby obtaining samples with high separability. Next, an ensemble learning algorithm, random forest (RF) or decorrelated neural network ensembles (DNNE), is trained on the selected training samples. Lastly, the trained ensemble learning algorithms are applied to the test samples for PD classification. The proposed method was examined using a recently deposited public dataset and compared against other currently used algorithms for validation. Experimental results showed that the proposed algorithm obtained the largest improvement in classification accuracy (29.44%) compared with the other algorithms examined. Furthermore, the MENN algorithm alone was found to improve classification accuracy by as much as 45.72%. Moreover, the proposed algorithm exhibited higher stability, particularly when combining the MENN and RF algorithms. This study showed that the proposed method can improve PD classification when using speech data and can be applied to future studies seeking to improve PD classification methods.
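A simplified version of the multi-edit nearest-neighbor editing step might look like the sketch below: repeatedly split the data into subsets, 1-NN-classify each subset using the next subset as reference, and discard misclassified samples until an iteration removes nothing. The fold count, stopping rule, and synthetic data are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def multi_edit_nn(X, y, n_splits=3, max_iter=10, seed=0):
    """Simplified multi-edit NN: returns a boolean mask of retained
    ('high separability') samples after iterative editing."""
    rng = np.random.default_rng(seed)
    keep = np.ones(len(X), bool)
    for _ in range(max_iter):
        idx = np.flatnonzero(keep)
        if len(idx) < 2 * n_splits:
            break
        folds = np.array_split(rng.permutation(idx), n_splits)
        removed = False
        for i, fold in enumerate(folds):
            ref = folds[(i + 1) % n_splits]     # classify with the next fold
            d = np.linalg.norm(X[fold][:, None] - X[ref][None], axis=-1)
            pred = y[ref][d.argmin(1)]          # 1-NN prediction
            bad = fold[pred != y[fold]]
            if bad.size:
                keep[bad] = False               # edit out misclassified samples
                removed = True
        if not removed:
            break
    return keep

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (60, 2)), rng.normal(3, 1, (60, 2))])
y = np.array([0] * 60 + [1] * 60)
y[0] = 1                                        # inject one label-noise sample
keep = multi_edit_nn(X, y)
print(keep.sum())                               # size of the edited training set
```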

  5. Classification of Clouds in Satellite Imagery Using Adaptive Fuzzy Sparse Representation

    PubMed Central

    Jin, Wei; Gong, Fei; Zeng, Xingbin; Fu, Randi

    2016-01-01

    Automatic cloud detection and classification using satellite cloud imagery have various meteorological applications such as weather forecasting and climate monitoring, and cloud pattern analysis has recently become a research hotspot. Since satellites sense clouds remotely from space, and different cloud types often overlap and convert into each other, there is inherent fuzziness and uncertainty in satellite cloud imagery. Satellite observation is also susceptible to noise, while traditional cloud classification methods are sensitive to noise and outliers, so it is hard for them to achieve reliable results. To deal with these problems, a satellite cloud classification method using adaptive fuzzy sparse representation-based classification (AFSRC) is proposed. Firstly, by defining adaptive parameters related to attenuation rate and critical membership, an improved fuzzy membership is introduced to accommodate the fuzziness and uncertainty of satellite cloud imagery; secondly, by effectively combining the improved fuzzy membership function and sparse representation-based classification (SRC), atoms in the training dictionary are optimized; finally, an adaptive fuzzy sparse representation classifier for cloud classification is proposed. Experimental results on FY-2G satellite cloud images show that the proposed method not only improves the accuracy of cloud classification, but also has strong stability and adaptability with high computational efficiency. PMID:27999261

  6. High bandwidth all-optical 3×3 switch based on multimode interference structures

    NASA Astrophysics Data System (ADS)

    Le, Duy-Tien; Truong, Cao-Dung; Le, Trung-Thanh

    2017-03-01

    A high bandwidth all-optical 3×3 switch based on general interference multimode interference (GI-MMI) structure is proposed in this study. Two 3×3 multimode interference couplers are cascaded to realize an all-optical switch operating at both wavelengths of 1550 nm and 1310 nm. Two nonlinear directional couplers at two outer-arms of the structure are used as all-optical phase shifters to achieve all switching states and to control the switching states. Analytical expressions for switching operation using the transfer matrix method are presented. The beam propagation method (BPM) is used to design and optimize the whole structure. The optimal design of the all-optical phase shifters and 3×3 MMI couplers are carried out to reduce the switching power and loss.

  7. An experimental study of wall adaptation and interference assessment using Cauchy integral formula

    NASA Technical Reports Server (NTRS)

    Murthy, A. V.

    1991-01-01

    This paper summarizes the results of an experimental study of combined wall adaptation and residual interference assessment using the Cauchy integral formula. The experiments were conducted on a supercritical airfoil model in the Langley 0.3-m Transonic Cryogenic Tunnel solid flexible wall test section. The ratio of model chord to test section height was about 0.7. The method worked satisfactorily in reducing the blockage interference and demonstrated the primary requirement of correcting for blockage effects at high model incidences in order to determine high-lift characteristics correctly. The studies show that the method has the potential to reduce the residual interference to considerably low levels. However, corrections to blockage and upwash velocity gradients may still be required for the final adapted wall shapes.

  8. Evaluation of different classification methods for the diagnosis of schizophrenia based on functional near-infrared spectroscopy.

    PubMed

    Li, Zhaohua; Wang, Yuduo; Quan, Wenxiang; Wu, Tongning; Lv, Bin

    2015-02-15

    Based on near-infrared spectroscopy (NIRS), converging evidence has recently been observed that patients with schizophrenia exhibit abnormal functional activity in the prefrontal cortex during a verbal fluency task (VFT). Therefore, some studies have attempted to employ NIRS measurements to differentiate schizophrenia patients from healthy controls using different classification methods. However, no systematic evaluation has been conducted to compare their respective classification performances on the same study population. In this study, we evaluated the performance of four classification methods (linear discriminant analysis, k-nearest neighbors, Gaussian process classifier, and support vector machines) for NIRS-aided schizophrenia diagnosis. We recruited a large sample of 120 schizophrenia patients and 120 healthy controls and measured the hemoglobin response in the prefrontal cortex during the VFT using a multichannel NIRS system. Features for classification were extracted from three types of NIRS data in each channel. We subsequently performed a principal component analysis (PCA) for feature selection prior to comparing the different classification methods. We achieved a maximum accuracy of 85.83% and an overall mean accuracy of 83.37% using PCA-based feature selection on oxygenated hemoglobin signals with a support vector machine classifier. This is the first comprehensive evaluation of different classification methods for the diagnosis of schizophrenia based on different types of NIRS signals. Our results suggest that, using an appropriate classification method, NIRS has the potential capacity to be an effective objective biomarker for the diagnosis of schizophrenia. Copyright © 2014 Elsevier B.V. All rights reserved.
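The PCA-then-classify pipeline can be sketched generically as below. A nearest-centroid classifier stands in for the linear classifiers evaluated in the paper, and the synthetic "NIRS features" (52 channels with a class difference in a few of them) are an assumption for illustration only.

```python
import numpy as np

def pca_fit(X, k):
    """PCA by SVD: returns the training mean and the top-k principal axes."""
    mu = X.mean(0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def project(X, mu, comps):
    """Project data onto the k retained principal components."""
    return (X - mu) @ comps.T

def nearest_centroid(Xtr, ytr, Xte):
    """A simple stand-in linear classifier: nearest class centroid."""
    classes = np.unique(ytr)
    cents = np.array([Xtr[ytr == c].mean(0) for c in classes])
    d = np.linalg.norm(Xte[:, None] - cents[None], axis=-1)
    return classes[d.argmin(1)]

rng = np.random.default_rng(7)
X0 = rng.normal(0, 1, (60, 52)); X1 = rng.normal(0, 1, (60, 52))
X1[:, :5] += 1.5                            # class effect in a few channels
X = np.vstack([X0, X1]); y = np.array([0] * 60 + [1] * 60)
tr = rng.permutation(120)[:90]
te = np.setdiff1d(np.arange(120), tr)       # held-out test subjects
mu, comps = pca_fit(X[tr], k=10)
pred = nearest_centroid(project(X[tr], mu, comps), y[tr],
                        project(X[te], mu, comps))
acc = (pred == y[te]).mean()
print(acc)
```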

  9. Resource Isolation Method for Program’S Performance on CMP

    NASA Astrophysics Data System (ADS)

    Guan, Ti; Liu, Chunxiu; Xu, Zheng; Li, Huicong; Ma, Qiang

    2017-10-01

    Data centers and cloud computing are increasingly popular, bringing benefits to both customers and providers. However, in a data center or cluster there is commonly more than one program running on each server, and programs may interfere with each other. The interference sometimes has little effect, but it can also cause a serious drop in performance. To avoid this performance interference problem, isolating resources for different programs is a better choice. In this paper we propose a low-cost resource isolation method to improve program performance. This method uses Cgroups to set dedicated CPU and memory resources for a program, aiming to guarantee the program's performance. Three engines realize this method: the Program Monitor Engine tracks the program's CPU and memory usage and transfers the information to the Resource Assignment Engine; the Resource Assignment Engine calculates the amount of CPU and memory resources that should be assigned to the program; and the Cgroups Control Engine partitions resources with the Linux tool Cgroups and places the program in a control group for execution. Experimental results show that the proposed resource isolation method improves program performance.

  10. Adaptive antenna arrays for weak interfering signals

    NASA Technical Reports Server (NTRS)

    Gupta, I. J.

    1985-01-01

    The interference protection provided by adaptive antenna arrays to an Earth station or satellite receive antenna system is studied. The case considered is where the interference is caused by transmissions from adjacent satellites or Earth stations whose signals inadvertently enter the receiving system and interfere with the communication link; the interfering signals are thus very weak. To increase the interference suppression, one can either decrease the thermal noise in the feedback loops or increase the gain of the auxiliary antennas in the interfering signal direction. Both methods are examined. It is shown that one may have to reduce the noise correlation to impractically low values, and if directive auxiliary antennas are used, the auxiliary antenna size may have to be too large. One can, however, combine the two methods to achieve the specified interference suppression with reasonable requirements on noise decorrelation and auxiliary antenna size. Effects of errors in the steering vector on the adaptive array performance are also studied.

  11. Interference and deception detection technology of satellite navigation based on deep learning

    NASA Astrophysics Data System (ADS)

    Chen, Weiyi; Deng, Pingke; Qu, Yi; Zhang, Xiaoguang; Li, Yaping

    2017-10-01

    Satellite navigation systems play an important role in daily life and in war. Given the prominent strategic position of satellite navigation, it is very important to ensure that the system is not disturbed or destroyed, and detecting jamming signals is a critical means of avoiding accidents in a navigation system. At present, the technology for detecting jamming signals in satellite navigation systems is not intelligent, relying mainly on human judgment and experience. To address this issue, the paper proposes a deep-learning-based method to monitor interference sources in satellite navigation. By training on interference signal data and extracting the features of the interference signals, a detection system model is constructed. The simulation results show that the detection accuracy of the system can reach nearly 70%. The method provides a new idea for research on intelligent detection of interference and deception signals in satellite navigation systems.

  12. Predicting Pharmacodynamic Drug-Drug Interactions through Signaling Propagation Interference on Protein-Protein Interaction Networks.

    PubMed

    Park, Kyunghyun; Kim, Docyong; Ha, Suhyun; Lee, Doheon

    2015-01-01

    As pharmacodynamic drug-drug interactions (PD DDIs) could lead to severe adverse effects in patients, it is important to identify potential PD DDIs in drug development. The signaling starting from drug targets is propagated through protein-protein interaction (PPI) networks. PD DDIs could occur by close interference on the same targets or within the same pathways as well as distant interference through cross-talking pathways. However, most of the previous approaches have considered only close interference by measuring distances between drug targets or comparing target neighbors. We have applied a random walk with restart algorithm to simulate signaling propagation from drug targets in order to capture the possibility of their distant interference. Cross validation with DrugBank and Kyoto Encyclopedia of Genes and Genomes DRUG shows that the proposed method outperforms the previous methods significantly. We also provide a web service with which PD DDIs for drug pairs can be analyzed at http://biosoft.kaist.ac.kr/targetrw.
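
    The propagation step can be sketched with a toy random walk with restart on a small PPI graph; the graph, restart probability and iteration count below are illustrative assumptions, not the paper's data:

```python
# A minimal random-walk-with-restart (RWR) sketch on a toy PPI network,
# illustrating how signaling influence propagates from a drug's targets.
# The toy graph and restart probability are assumptions for illustration.

def rwr(adj, seeds, restart=0.3, iters=200):
    """adj: dict node -> list of neighbours; seeds: drug target nodes."""
    nodes = list(adj)
    p0 = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    p = dict(p0)
    for _ in range(iters):
        nxt = {n: restart * p0[n] for n in nodes}   # restart at the targets
        for n in nodes:
            deg = len(adj[n])
            for m in adj[n]:                        # spread the rest evenly
                nxt[m] += (1 - restart) * p[n] / deg
        p = nxt
    return p

# A chain T1 - A - B - C, with T1 as the drug target.
ppi = {"T1": ["A"], "A": ["T1", "B"], "B": ["A", "C"], "C": ["B"]}
scores = rwr(ppi, seeds={"T1"})
# Influence decays with network distance from the target T1:
assert scores["A"] > scores["B"] > scores["C"]
```

    Two drugs whose propagated score vectors overlap strongly would then be flagged as candidates for distant interference, even when their direct targets are far apart in the network.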

  13. Removal of power line interference of space bearing vibration signal based on the morphological filter and blind source separation

    NASA Astrophysics Data System (ADS)

    Dong, Shaojiang; Sun, Dihua; Xu, Xiangyang; Tang, Baoping

    2017-06-01

    It is difficult to extract feature information from the vibration signal of a space bearing because of various kinds of noise, for example running-trend information, high-frequency noise, and especially strong power line interference (50 Hz) and its octave components in ground-based equipment simulating the space running environment. This article proposes a combined method to eliminate them. First, EMD is used to remove the running-trend item of the signal, eliminating the trend that degrades signal-processing accuracy. Then a morphological filter is used to eliminate high-frequency noise. Finally, the components and characteristics of the power line interference are investigated and, based on these characteristics, a revised blind source separation model is used to remove the power line interference. Analysis of simulations and a practical application suggests that the proposed method can effectively eliminate these types of noise.
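
    The morphological-filtering step can be sketched as follows (an illustration with a flat structuring element, not the authors' implementation):

```python
# Sketch of a flat-structuring-element morphological filter for suppressing
# high-frequency spikes in a 1-D vibration signal. The window size and test
# signal are illustrative assumptions.

def erode(x, k):
    h = k // 2
    return [min(x[max(0, i - h):i + h + 1]) for i in range(len(x))]

def dilate(x, k):
    h = k // 2
    return [max(x[max(0, i - h):i + h + 1]) for i in range(len(x))]

def morph_filter(x, k=3):
    """Average of opening (erode->dilate) and closing (dilate->erode)."""
    opening = dilate(erode(x, k), k)
    closing = erode(dilate(x, k), k)
    return [(o + c) / 2 for o, c in zip(opening, closing)]

sig = [0.0] * 10
sig[5] = 5.0                      # an isolated high-frequency spike
out = morph_filter(sig)
assert max(out) < max(sig)        # the spike is attenuated
```

    Averaging the opening and closing suppresses both positive and negative impulses while leaving slowly varying structure largely intact.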

  14. A Real-Time Infrared Ultra-Spectral Signature Classification Method via Spatial Pyramid Matching

    PubMed Central

    Mei, Xiaoguang; Ma, Yong; Li, Chang; Fan, Fan; Huang, Jun; Ma, Jiayi

    2015-01-01

    The state-of-the-art ultra-spectral sensor technology brings new hope for high precision applications due to its high spectral resolution. However, it also comes with new challenges, such as the high data dimension and noise problems. In this paper, we propose a real-time method for infrared ultra-spectral signature classification via spatial pyramid matching (SPM), which includes two aspects. First, we introduce an infrared ultra-spectral signature similarity measure method via SPM, which is the foundation of the matching-based classification method. Second, we propose the classification method with reference spectral libraries, which utilizes the SPM-based similarity for real-time infrared ultra-spectral signature classification with robust performance. Specifically, instead of matching with each spectrum in the spectral library, our method is based on feature matching, which includes a feature library-generating phase. We calculate the SPM-based similarity between the feature of the spectrum and that of each spectrum of the reference feature library, then take the class index of the corresponding spectrum having the maximum similarity as the final result. Experimental comparisons on two publicly-available datasets demonstrate that the proposed method effectively improves the real-time classification performance and robustness to noise. PMID:26205263
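
    A much-simplified sketch of pyramid-style matching for 1-D spectra is shown below; the segment-mean features, weights and mineral names are illustrative assumptions, not the paper's feature design:

```python
# Simplified sketch of spatial-pyramid-style matching for 1-D spectra: the
# spectrum is split into 1, 2 and 4 segments, each summarized by its mean,
# and feature vectors are compared by histogram intersection. The features
# and the toy spectral library are assumptions for illustration.

def pyramid_feature(spectrum, levels=3):
    feat = []
    for lv in range(levels):
        parts = 2 ** lv
        n = len(spectrum) // parts
        feat += [sum(spectrum[i * n:(i + 1) * n]) / n for i in range(parts)]
    return feat

def intersection(f, g):
    return sum(min(a, b) for a, b in zip(f, g))

def classify(spectrum, library):
    """library: dict class_name -> reference spectrum."""
    feats = {c: pyramid_feature(s) for c, s in library.items()}
    q = pyramid_feature(spectrum)
    return max(feats, key=lambda c: intersection(q, feats[c]))

lib = {"quartz": [1, 2, 3, 4, 4, 3, 2, 1], "calcite": [4, 3, 2, 1, 1, 2, 3, 4]}
print(classify([1, 2, 3, 4, 4, 3, 2, 2], lib))  # -> quartz
```

    Precomputing the reference features once (the "feature library-generating phase") is what makes matching a query spectrum cheap at run time.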

  15. Ground-based cloud classification by learning stable local binary patterns

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Shi, Cunzhao; Wang, Chunheng; Xiao, Baihua

    2018-07-01

    Feature selection and extraction is the first step in implementing pattern classification. The same is true for ground-based cloud classification. Histogram features based on local binary patterns (LBPs) are widely used to classify texture images. However, the conventional uniform LBP approach cannot capture all the dominant patterns in cloud texture images, thereby resulting in low classification performance. In this study, a robust feature extraction method by learning stable LBPs is proposed based on the averaged ranks of the occurrence frequencies of all rotation invariant patterns defined in the LBPs of cloud images. The proposed method is validated with a ground-based cloud classification database comprising five cloud types. Experimental results demonstrate that the proposed method achieves significantly higher classification accuracy than the uniform LBP, local texture patterns (LTP), dominant LBP (DLBP), completed LBP (CLBP) and salient LBP (SaLBP) methods in this cloud image database and under different noise conditions. The performance of the proposed method is comparable with that of the popular deep convolutional neural network (DCNN) method, but with lower computational complexity. Furthermore, the proposed method also achieves superior performance on an independent test data set.
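
    For reference, the uniform LBP baseline mentioned above can be sketched as follows (a standard riu2-style illustration with 8 neighbours, not the authors' code):

```python
# Sketch: rotation-invariant uniform LBP codes, the baseline texture feature
# discussed above. A pattern is "uniform" if its circular 0/1 sequence has at
# most two transitions; uniform patterns keep their bit count as the code,
# all others share one miscellaneous bin (here P = 8 neighbours).

def transitions(bits):
    return sum(bits[i] != bits[(i + 1) % len(bits)] for i in range(len(bits)))

def riu2_code(center, neighbours):
    bits = [1 if n >= center else 0 for n in neighbours]
    if transitions(bits) <= 2:           # uniform pattern
        return sum(bits)                 # rotation-invariant code: 0..8
    return len(bits) + 1                 # non-uniform bin: 9

# A bright edge across the neighbourhood is uniform...
print(riu2_code(5, [9, 9, 9, 9, 1, 1, 1, 1]))   # -> 4
# ...while an alternating pattern is not:
print(riu2_code(5, [9, 1, 9, 1, 9, 1, 9, 1]))   # -> 9
```

    The proposed method differs in which rotation-invariant patterns it keeps: instead of the fixed uniform set, it learns the stable patterns from the averaged frequency ranks over the cloud images.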

  16. Land Cover Classification in a Complex Urban-Rural Landscape with Quickbird Imagery

    PubMed Central

    Moran, Emilio Federico

    2010-01-01

    High spatial resolution images have been increasingly used for urban land use/cover classification, but the high spectral variation within the same land cover, the spectral confusion among different land covers, and the shadow problem often lead to poor classification performance based on the traditional per-pixel spectral-based classification methods. This paper explores approaches to improve urban land cover classification with Quickbird imagery. Traditional per-pixel spectral-based supervised classification, incorporation of textural images and multispectral images, spectral-spatial classifier, and segmentation-based classification are examined in a relatively new developing urban landscape, Lucas do Rio Verde in Mato Grosso State, Brazil. This research shows that use of spatial information during the image classification procedure, either through the integrated use of textural and spectral images or through the use of segmentation-based classification method, can significantly improve land cover classification performance. PMID:21643433

  17. Amplification of interference color by using liquid crystal for protein detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Qingdi; Yang, Kun-Lin, E-mail: cheyk@nus.edu.sg

    Micrometer-sized, periodic protein lines printed on a solid surface cause an interference color that is invisible to the naked eye. However, the interference color can be amplified by coating the surface with a thin layer of liquid crystal (LC) to form a phase diffraction grating. Strong interference color can thus be observed under ambient light. By using the LC-amplified interference color, we demonstrate naked-eye detection of a model protein—immunoglobulin G (IgG). The limit of detection can reach 20 μg/ml of IgG without using any instrumentation. This detection method is potentially useful for the development of low-cost and portable biosensors.

  18. Aerodynamic interference effects on tilting proprotor aircraft. [using the Green function method

    NASA Technical Reports Server (NTRS)

    Soohoo, P.; Morino, L.; Noll, R. B.; Ham, N. D.

    1977-01-01

    The Green's function method was used to study tilting proprotor aircraft aerodynamics with particular application to the problem of the mutual interference of the wing-fuselage-tail-rotor wake configuration. While the formulation is valid for fully unsteady rotor aerodynamics, attention was directed to steady state aerodynamics, which was achieved by replacing the rotor with the actuator disk approximation. The use of an actuator disk analysis introduced a mathematical singularity into the formulation; this problem was studied and resolved. The pressure distribution, lift, and pitching moment were obtained for an XV-15 wing-fuselage-tail rotor configuration at various flight conditions. For the flight configurations explored, the effects of the rotor wake interference on the XV-15 tilt rotor aircraft yielded a reduction in the total lift and an increase in the nose-down pitching moment. This method provides an analytical capability that is simple to apply and can be used to investigate fuselage-tail rotor wake interference as well as to explore other rotor design problem areas.

  19. Methods for assessing wall interference in the 2- by 2-foot adaptive-wall wind tunnel

    NASA Technical Reports Server (NTRS)

    Schairer, E. T.

    1986-01-01

    Discussed are two methods for assessing two-dimensional wall interference in the adaptive-wall test section of the NASA Ames 2 x 2-Foot Transonic Wind Tunnel: (1) a method for predicting free-air conditions near the walls of the test section (adaptive-wall methods); and (2) a method for estimating wall-induced velocities near the model (correction methods), both of which are based on measurements of either one or two components of flow velocity near the walls of the test section. Each method is demonstrated using simulated wind tunnel data and is compared with other methods of the same type. The two-component adaptive-wall and correction methods were found to be preferable to the corresponding one-component methods because: (1) they are more sensitive to, and give a more complete description of, wall interference; (2) they require measurements at fewer locations; (3) they can be used to establish free-stream conditions; and (4) they are independent of a description of the model and constants of integration.

  20. Suppression of AC railway power-line interference in ECG signals recorded by public access defibrillators.

    PubMed

    Dotsinsky, Ivan

    2005-11-26

    Public access defibrillators (PADs) are now available for more efficient and rapid treatment of out-of-hospital sudden cardiac arrest. PADs are used normally by untrained people on the streets and in sports centers, airports, and other public areas. Therefore, automated detection of ventricular fibrillation, or its exclusion, is of high importance. A special case exists at railway stations, where electric power-line frequency interference is significant. Many countries, especially in Europe, use 16.7 Hz AC power, which introduces high-level frequency-varying interference that may compromise fibrillation detection. Moving signal averaging is often used for 50/60 Hz interference suppression if its effect on the ECG spectrum has little importance (no morphological analysis is performed). This approach may also be applied to the railway situation, if the interference frequency is continuously detected so as to synchronize the analog-to-digital conversion (ADC) for introducing variable inter-sample intervals. A better solution consists of a constant-rate ADC, software frequency measuring, internal irregular re-sampling according to the interference frequency, and a moving average over a constant sample number, followed by regular back re-sampling. The proposed method leads to total railway interference cancellation, together with suppression of inherent noise, while the peak amplitudes of some sharp complexes are reduced. This reduction has negligible effect on accurate fibrillation detection. The method is developed in the MATLAB environment and represents a useful tool for real-time railway interference suppression.
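
    The core moving-average idea can be sketched as follows: after re-sampling so that one interference period spans a whole number of samples, averaging over exactly that many samples cancels the sinusoid. The sample rate and amplitudes below are illustrative assumptions:

```python
# Sketch of the moving-average suppression of 16.7 Hz railway interference.
# At a (re-sampled) rate of 1002 Hz, one 16.7 Hz period is exactly 60
# samples, so a 60-sample moving average nulls the interference while only
# slightly smoothing the slow ECG-like component. Values are illustrative.

import math

fs = 1002.0                     # re-sampled rate: 60 samples per period
f_int = 16.7
n = round(fs / f_int)           # samples per interference period (60)

t = [i / fs for i in range(1000)]
ecg_like = [0.1 * math.sin(2 * math.pi * 1.0 * ti) for ti in t]   # slow wave
interference = [1.0 * math.sin(2 * math.pi * f_int * ti) for ti in t]
x = [a + b for a, b in zip(ecg_like, interference)]

# Moving average over one full interference period.
y = [sum(x[i:i + n]) / n for i in range(len(x) - n)]

residual = max(abs(yi - ei) for yi, ei in zip(y, ecg_like))
assert residual < 0.05   # interference removed; only slight smoothing remains
```

    In the paper's scheme the re-sampling step is what keeps the window aligned to the measured, drifting interference frequency; a regular back re-sampling then restores the original time base.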

  1. Research on Remote Sensing Image Classification Based on Feature Level Fusion

    NASA Astrophysics Data System (ADS)

    Yuan, L.; Zhu, G.

    2018-04-01

    Remote sensing image classification, an important direction in remote sensing image processing and application, has been widely studied. However, existing classification algorithms still suffer from misclassification and omission, so the final classification accuracy is not high. In this paper, we select Sentinel-1A and Landsat8 OLI images as data sources and propose a classification method based on feature-level fusion. We compare three feature-level fusion algorithms (Gram-Schmidt spectral sharpening, Principal Component Analysis transform and Brovey transform) and select the best fused image for the classification experiments. In the classification process, we choose four image classification algorithms (Minimum distance, Mahalanobis distance, Support Vector Machine and ISODATA) for a contrast experiment. Using overall classification precision and the Kappa coefficient as accuracy evaluation criteria, the four classification results of the fused image are analysed. The experimental results show that the fusion effect of Gram-Schmidt spectral sharpening is better than that of the other methods. Among the four classification algorithms, the fused image is best suited to Support Vector Machine classification, with an overall classification precision of 94.01 % and a Kappa coefficient of 0.91. The image fused from Sentinel-1A and Landsat8 OLI not only contains more spatial information and spectral texture characteristics, but also enhances the distinguishing features of the images. The proposed method is beneficial for improving the accuracy and stability of remote sensing image classification.
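
    The two accuracy criteria used above can be computed from a confusion matrix as sketched below; the 3x3 matrix values are made up for illustration:

```python
# Sketch: overall accuracy and Cohen's Kappa coefficient from a confusion
# matrix (rows = reference classes, columns = predicted classes). The matrix
# values are illustrative, not the paper's results.

def accuracy_and_kappa(cm):
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(len(cm)))
    po = correct / total                               # observed agreement
    pe = sum(sum(cm[i]) * sum(r[i] for r in cm)        # chance agreement
             for i in range(len(cm))) / total ** 2
    return po, (po - pe) / (1 - pe)

cm = [[50, 2, 3],
      [4, 40, 1],
      [2, 3, 45]]
po, kappa = accuracy_and_kappa(cm)
print(round(po, 3), round(kappa, 3))  # -> 0.9 0.849
```

    Kappa discounts the agreement expected by chance, which is why it is reported alongside overall precision.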

  2. Methods for automatic detection of artifacts in microelectrode recordings.

    PubMed

    Bakštein, Eduard; Sieger, Tomáš; Wild, Jiří; Novák, Daniel; Schneider, Jakub; Vostatek, Pavel; Urgošík, Dušan; Jech, Robert

    2017-10-01

    Extracellular microelectrode recording (MER) is a prominent technique for studies of extracellular single-unit neuronal activity. In order to achieve robust results in more complex analysis pipelines, it is necessary to have high quality input data with a low amount of artifacts. We show that noise (mainly electromagnetic interference and motion artifacts) may affect more than 25% of the recording length in a clinical MER database. We present several methods for automatic detection of noise in MER signals, based on (i) unsupervised detection of stationary segments, (ii) large peaks in the power spectral density, and (iii) a classifier based on multiple time- and frequency-domain features. We evaluate the proposed methods on a manually annotated database of 5735 ten-second MER signals from 58 Parkinson's disease patients. The existing methods for artifact detection in single-channel MER that have been rigorously tested, are based on unsupervised change-point detection. We show on an extensive real MER database that the presented techniques are better suited for the task of artifact identification and achieve much better results. The best-performing classifiers (bagging and decision tree) achieved artifact classification accuracy of up to 89% on an unseen test set and outperformed the unsupervised techniques by 5-10%. This was close to the level of agreement among raters using manual annotation (93.5%). We conclude that the proposed methods are suitable for automatic MER denoising and may help in the efficient elimination of undesirable signal artifacts. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. A multiple-point spatially weighted k-NN method for object-based classification

    NASA Astrophysics Data System (ADS)

    Tang, Yunwei; Jing, Linhai; Li, Hui; Atkinson, Peter M.

    2016-10-01

    Object-based classification, commonly referred to as object-based image analysis (OBIA), is now commonly regarded as able to produce more appealing classification maps, often of greater accuracy, than pixel-based classification and its application is now widespread. Therefore, improvement of OBIA using spatial techniques is of great interest. In this paper, multiple-point statistics (MPS) is proposed for object-based classification enhancement in the form of a new multiple-point k-nearest neighbour (k-NN) classification method (MPk-NN). The proposed method first utilises a training image derived from a pre-classified map to characterise the spatial correlation between multiple points of land cover classes. The MPS borrows spatial structures from other parts of the training image, and then incorporates this spatial information, in the form of multiple-point probabilities, into the k-NN classifier. Two satellite sensor images with a fine spatial resolution were selected to evaluate the new method. One is an IKONOS image of the Beijing urban area and the other is a WorldView-2 image of the Wolong mountainous area, in China. The images were object-based classified using the MPk-NN method and several alternatives, including the k-NN, the geostatistically weighted k-NN, the Bayesian method, the decision tree classifier (DTC), and the support vector machine classifier (SVM). It was demonstrated that the new spatial weighting based on MPS can achieve greater classification accuracy relative to the alternatives and it is, thus, recommended as appropriate for object-based classification.

  4. Feature extraction based on extended multi-attribute profiles and sparse autoencoder for remote sensing image classification

    NASA Astrophysics Data System (ADS)

    Teffahi, Hanane; Yao, Hongxun; Belabid, Nasreddine; Chaib, Souleyman

    2018-02-01

    Satellite images with very high spatial resolution have recently been widely used in image classification, which has become a challenging task in the remote sensing field. Due to limitations such as feature redundancy and the high dimensionality of the data, different classification methods have been proposed for remote sensing images, particularly methods using feature extraction techniques. This paper proposes a simple, efficient method exploiting the capability of extended multi-attribute profiles (EMAP) with a sparse autoencoder (SAE) for remote sensing image classification. The proposed method classifies various remote sensing datasets, including hyperspectral and multispectral images, by extracting spatial and spectral features from the combination of EMAP and SAE and feeding them to a kernel support vector machine (SVM) for classification. Experiments on the hyperspectral "Houston data" and the multispectral "Washington DC data" show that this new scheme achieves better feature learning performance than primitive features, traditional classifiers and an ordinary autoencoder, and has great potential to achieve higher classification accuracy in a short running time.

  5. Mechanics of the tapered interference fit in dental implants.

    PubMed

    Bozkaya, Dinçer; Müftü, Sinan

    2003-11-01

    In evaluation of the long-term success of a dental implant, the reliability and the stability of the implant-abutment interface plays a great role. Tapered interference fits provide a reliable connection method between the abutment and the implant. In this work, the mechanics of the tapered interference fits were analyzed using a closed-form formula and the finite element (FE) method. An analytical solution, which is used to predict the contact pressure in a straight interference, was modified to predict the contact pressure in the tapered implant-abutment interface. Elastic-plastic FE analysis was used to simulate the implant and abutment material behavior. The validity and the applicability of the analytical solution were investigated by comparisons with the FE model for a range of problem parameters. It was shown that the analytical solution could be used to determine the pull-out force and loosening-torque with 5-10% error. Detailed analysis of the stress distribution due to tapered interference fit, in a commercially available, abutment-implant system was carried out. This analysis shows that plastic deformation in the implant limits the increase in the pull-out force that would have been otherwise predicted by higher interference values.

  6. Constrained Bayesian Active Learning of Interference Channels in Cognitive Radio Networks

    NASA Astrophysics Data System (ADS)

    Tsakmalis, Anestis; Chatzinotas, Symeon; Ottersten, Bjorn

    2018-02-01

    In this paper, a sequential probing method for interference constraint learning is proposed to allow a centralized Cognitive Radio Network (CRN) accessing the frequency band of a Primary User (PU) in an underlay cognitive scenario with a designed PU protection specification. The main idea is that the CRN probes the PU and subsequently eavesdrops the reverse PU link to acquire the binary ACK/NACK packet. This feedback indicates whether the probing-induced interference is harmful or not and can be used to learn the PU interference constraint. The cognitive part of this sequential probing process is the selection of the power levels of the Secondary Users (SUs) which aims to learn the PU interference constraint with a minimum number of probing attempts while setting a limit on the number of harmful probing-induced interference events or equivalently of NACK packet observations over a time window. This constrained design problem is studied within the Active Learning (AL) framework and an optimal solution is derived and implemented with a sophisticated, accurate and fast Bayesian Learning method, the Expectation Propagation (EP). The performance of this solution is also demonstrated through numerical simulations and compared with modified versions of AL techniques we developed in earlier work.

  7. Voting strategy for artifact reduction in digital breast tomosynthesis.

    PubMed

    Wu, Tao; Moore, Richard H; Kopans, Daniel B

    2006-07-01

    Artifacts are observed in digital breast tomosynthesis (DBT) reconstructions due to the small number of projections and the narrow angular range that are typically employed in tomosynthesis imaging. In this work, we investigate the reconstruction artifacts that are caused by high-attenuation features in breast and develop several artifact reduction methods based on a "voting strategy." The voting strategy identifies the projection(s) that would introduce artifacts to a voxel and rejects the projection(s) when reconstructing the voxel. Four approaches to the voting strategy were compared, including projection segmentation, maximum contribution deduction, one-step classification, and iterative classification. The projection segmentation method, based on segmentation of high-attenuation features from the projections, effectively reduces artifacts caused by metal and large calcifications that can be reliably detected and segmented from projections. The other three methods are based on the observation that contributions from artifact-inducing projections have higher value than those from normal projections. These methods attempt to identify the projection(s) that would cause artifacts by comparing contributions from different projections. Among the three methods, the iterative classification method provides the best artifact reduction; however, it can generate many false positive classifications that degrade the image quality. The maximum contribution deduction method and one-step classification method both reduce artifacts well from small calcifications, although the performance of artifact reduction is slightly better with the one-step classification. The combination of one-step classification and projection segmentation removes artifacts from both large and small calcifications.
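
    The "maximum contribution deduction" variant described above can be sketched in a few lines; the contribution values below are illustrative, not DBT data:

```python
# Sketch of the "maximum contribution deduction" voting idea: when
# back-projecting a voxel, the largest single projection contribution is
# assumed to come from the artifact-inducing projection and is excluded
# from the average. The contribution values are made up for illustration.

def vote_reconstruct(contributions):
    """Average of a voxel's projection contributions, dropping the maximum."""
    kept = sorted(contributions)[:-1]   # reject the single largest value
    return sum(kept) / len(kept)

# Eleven projections; one passes through a metal clip and is inflated.
voxel = [0.21, 0.20, 0.22, 0.19, 0.20, 0.95, 0.21, 0.20, 0.22, 0.19, 0.21]
plain = sum(voxel) / len(voxel)
voted = vote_reconstruct(voxel)
assert voted < plain                    # artifact contribution suppressed
```

    The other voting approaches differ mainly in how the rejected projection is identified: by segmenting high-attenuation features in the projections, or by classifying contributions, rather than by always dropping the maximum.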

  8. Hybrid Optimization of Object-Based Classification in High-Resolution Images Using Continuous Ant Colony Algorithm with Emphasis on Building Detection

    NASA Astrophysics Data System (ADS)

    Tamimi, E.; Ebadi, H.; Kiani, A.

    2017-09-01

    Automatic building detection from High Spatial Resolution (HSR) images is one of the most important issues in Remote Sensing (RS). Due to the limited number of spectral bands in HSR images, using other features can improve accuracy. However, adding features increases the probability of including dependent features, which reduces accuracy. In addition, some parameters must be determined in Support Vector Machine (SVM) classification. Therefore, it is necessary to simultaneously determine the classification parameters and select independent features according to the image type. Optimization algorithms are an efficient way to solve this problem. On the other hand, pixel-based classification faces several challenges, such as producing salt-and-pepper results and high computational time on high-dimensional data. Hence, in this paper, a novel method is proposed to optimize object-based SVM classification by applying a continuous Ant Colony Optimization (ACO) algorithm. The advantages of the proposed method are a relatively high automation level, independence from image scene and type, reduced post-processing for building edge reconstruction, and accuracy improvement. The proposed method was evaluated against pixel-based SVM and Random Forest (RF) classification in terms of accuracy. In comparison with optimized pixel-based SVM classification, the results showed that the proposed method improved the quality factor and overall accuracy by 17% and 10%, respectively. Also, the proposed method improved the Kappa coefficient by 6% relative to RF classification. The processing time of the proposed method was relatively low because of the unit of image analysis (the image object). These results show the superiority of the proposed method in terms of time and accuracy.

  9. Recursive heuristic classification

    NASA Technical Reports Server (NTRS)

    Wilkins, David C.

    1994-01-01

    The author will describe a new problem-solving approach called recursive heuristic classification, whereby a subproblem of heuristic classification is itself formulated and solved by heuristic classification. This allows the construction of more knowledge-intensive classification programs in a way that yields a clean organization. Further, standard knowledge acquisition and learning techniques for heuristic classification can be used to create, refine, and maintain the knowledge base associated with the recursively called classification expert system. The method of recursive heuristic classification was used in the Minerva blackboard shell for heuristic classification. Minerva recursively calls itself every problem-solving cycle to solve the important blackboard scheduler task, which involves assigning a desirability rating to alternative problem-solving actions. Knowing these ratings is critical to the use of an expert system as a component of a critiquing or apprenticeship tutoring system. One innovation of this research is a method called dynamic heuristic classification, which allows selection among dynamically generated classification categories instead of requiring them to be pre-enumerated.

  10. Semi-Supervised Marginal Fisher Analysis for Hyperspectral Image Classification

    NASA Astrophysics Data System (ADS)

    Huang, H.; Liu, J.; Pan, Y.

    2012-07-01

    The problem of learning with both labeled and unlabeled examples arises frequently in hyperspectral image (HSI) classification. Marginal Fisher analysis is a supervised method and cannot be directly applied to semi-supervised classification. In this paper, we propose a novel method, called semi-supervised marginal Fisher analysis (SSMFA), to process HSI of natural scenes, which uses a combination of semi-supervised learning and manifold learning. In SSMFA, a new difference-based optimization objective function with unlabeled samples has been designed. SSMFA preserves the manifold structure of labeled and unlabeled samples in addition to separating labeled samples of different classes from each other. The semi-supervised method has an analytic form of the globally optimal solution, which can be computed by eigendecomposition. Classification experiments on a challenging HSI task demonstrate that this method outperforms current state-of-the-art HSI classification methods.

  11. An Ensemble Multilabel Classification for Disease Risk Prediction

    PubMed Central

    Liu, Wei; Zhao, Hongling; Zhang, Chaoyang

    2017-01-01

    It is important to identify and prevent disease risk as early as possible through regular physical examinations. We formulate disease risk prediction as a multilabel classification problem. A novel Ensemble Label Power-set Pruned datasets Joint Decomposition (ELPPJD) method is proposed in this work. First, we transform the multilabel classification into a multiclass classification. Then, we propose the pruned datasets and joint decomposition methods to deal with the imbalanced learning problem. Two strategies, size balanced (SB) and label similarity (LS), are designed to decompose the training dataset. In the experiments, the dataset comes from real physical examination records. We contrast the performance of the ELPPJD method with the two different decomposition strategies. Moreover, a comparison between ELPPJD and the classic multilabel classification methods RAkEL and HOMER is carried out. The experimental results show that the ELPPJD method with the label similarity strategy has outstanding performance. PMID:29065647
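
    The first transformation step (label power-set) can be sketched as follows; the example disease labels are made up, and the paper's pruning and decomposition steps are not shown:

```python
# Sketch of the label power-set step: each distinct set of disease labels
# becomes one class of an ordinary multiclass problem. The example labels
# are illustrative; ELPPJD additionally prunes rare label sets and jointly
# decomposes the resulting dataset.

def label_powerset(y_multilabel):
    """Map each label set to a multiclass id; return ids and the mapping."""
    mapping = {}
    y_multiclass = []
    for labels in y_multilabel:
        key = frozenset(labels)
        if key not in mapping:
            mapping[key] = len(mapping)
        y_multiclass.append(mapping[key])
    return y_multiclass, mapping

records = [{"hypertension"}, {"diabetes", "hypertension"},
           {"hypertension"}, {"diabetes", "hypertension"}, {"gout"}]
ids, mapping = label_powerset(records)
print(ids)            # -> [0, 1, 0, 1, 2]
assert len(mapping) == 3
```

    Because rare label combinations produce tiny classes, the pruning and SB/LS decomposition strategies are then needed to keep the multiclass problem balanced.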

  12. The Research on Tunnel Surrounding Rock Classification Based on Geological Radar and Probability Theory

    NASA Astrophysics Data System (ADS)

    Xiao Yong, Zhao; Xin, Ji Yong; Shuang Ying, Zuo

    2018-03-01

    In order to effectively classify the surrounding rock of tunnels, a multi-factor surrounding rock classification method based on GPR and probability theory is proposed. Geological radar was used to identify the geology of the surrounding rock ahead of the tunnel face and to evaluate the quality of the rock mass. According to previous survey data, rock uniaxial compressive strength, integrity index, fissures and groundwater were selected as classification factors and combined, using probability theory, into a multi-factor classification method that assigns the surrounding rock to the class with the greatest probability. Applying this method to the surrounding rock of the Ma'anshan tunnel yields rock classes that are basically the same as the actual ones, which shows that this is a simple, efficient and practical rock classification method that can be used in tunnel construction.

  13. Workshop on Algorithms for Time-Series Analysis

    NASA Astrophysics Data System (ADS)

    Protopapas, Pavlos

    2012-04-01

    This Workshop covered the four major subjects listed below in two 90-minute sessions. Each talk or tutorial allowed questions, and concluded with a discussion. Classification: Automatic classification using machine-learning methods is becoming a standard in surveys that generate large datasets. Ashish Mahabal (Caltech) reviewed various methods, and presented examples of several applications. Time-Series Modelling: Suzanne Aigrain (Oxford University) discussed autoregressive models and multivariate approaches such as Gaussian Processes. Meta-classification/mixture of expert models: Karim Pichara (Pontificia Universidad Católica, Chile) described the substantial promise which machine-learning classification methods are now showing in automatic classification, and discussed how the various methods can be combined together. Event Detection: Pavlos Protopapas (Harvard) addressed methods of fast identification of events with low signal-to-noise ratios, enlarging on the characterization and statistical issues of low signal-to-noise ratios and rare events.

  14. Interference-free ultrasound imaging during HIFU therapy, using software tools

    NASA Technical Reports Server (NTRS)

    Vaezy, Shahram (Inventor); Held, Robert (Inventor); Sikdar, Siddhartha (Inventor); Managuli, Ravi (Inventor); Zderic, Vesna (Inventor)

    2010-01-01

    Disclosed herein is a method for obtaining a composite interference-free ultrasound image when non-imaging ultrasound waves would otherwise interfere with ultrasound imaging. A conventional ultrasound imaging system is used to collect frames of ultrasound image data in the presence of non-imaging ultrasound waves, such as high-intensity focused ultrasound (HIFU). The frames are directed to a processor that analyzes the frames to identify portions of the frame that are interference-free. Interference-free portions of a plurality of different ultrasound image frames are combined to generate a single composite interference-free ultrasound image that is displayed to a user. In this approach, a frequency of the non-imaging ultrasound waves is offset relative to a frequency of the ultrasound imaging waves, such that the interference introduced by the non-imaging ultrasound waves appears in a different portion of the frames.
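
    The compositing idea can be sketched minimally, assuming per-row interference flags have already been derived from the known frequency offset (this is a toy illustration, not the patented detection logic):

    ```python
    # Build a composite interference-free image: for each image row, take that
    # row from the first frame in which it is flagged interference-free.
    # Frames and masks below are invented stand-ins for real image data.

    def composite(frames, clean_masks):
        """frames: list of images (lists of rows); clean_masks[i][r] is True
        if row r of frame i is free of HIFU interference."""
        rows = len(frames[0])
        out = []
        for r in range(rows):
            for i, frame in enumerate(frames):
                if clean_masks[i][r]:
                    out.append(frame[r])
                    break
            else:
                out.append(frames[0][r])  # fall back if no clean copy exists
        return out

    # Two frames of two rows each; interference hits different rows in each.
    frames = [["a0", "a1"], ["b0", "b1"]]
    clean = [[True, False], [False, True]]
    image = composite(frames, clean)
    ```

    Because the interference band moves between frames (due to the frequency offset), every row is clean in at least one frame, which is what makes the composite complete.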

  15. Intraindividual Coupling of Daily Stressors and Cognitive Interference in Old Age

    PubMed Central

    Mogle, Jacqueline; Sliwinski, Martin J.

    2011-01-01

    Objectives. The current study examined emotional and cognitive reactions to daily stress. We examined the psychometric properties of a short cognitive interference measure and how cognitive interference was associated with measures of daily stress and negative affect (NA) between persons and within persons over time. Methods. A sample of 87 older adults (Mage = 83, range = 70–97, 28% male) completed measures of daily stress, cognitive interference, and NA on 6 days within a 14-day period. Results. The measure yielded a single-factor solution with good reliability both between and within persons. At the between-person level, NA accounted for the effects of daily stress on individual differences in cognitive interference. At the within-person level, NA and daily stress were unique predictors of cognitive interference. Furthermore, the within-person effect of daily stress on cognitive interference decreased significantly with age. Discussion. These results support theoretical work regarding associations among stress, NA, and cognitive interference, both across persons and within persons over time. PMID:21743045

  16. Mediterranean Land Use and Land Cover Classification Assessment Using High Spatial Resolution Data

    NASA Astrophysics Data System (ADS)

    Elhag, Mohamed; Boteva, Silvena

    2016-10-01

    Landscape fragmentation is noticeably practiced in Mediterranean regions and imposes substantial complications in several satellite image classification methods. To some extent, high spatial resolution data were able to overcome such complications. For better classification performances in Land Use Land Cover (LULC) mapping, the current research adopts different classification methods comparison for LULC mapping using Sentinel-2 satellite as a source of high spatial resolution. Both of pixel-based and an object-based classification algorithms were assessed; the pixel-based approach employs Maximum Likelihood (ML), Artificial Neural Network (ANN) algorithms, Support Vector Machine (SVM), and, the object-based classification uses the Nearest Neighbour (NN) classifier. Stratified Masking Process (SMP) that integrates a ranking process within the classes based on spectral fluctuation of the sum of the training and testing sites was implemented. An analysis of the overall and individual accuracy of the classification results of all four methods reveals that the SVM classifier was the most efficient overall by distinguishing most of the classes with the highest accuracy. NN succeeded to deal with artificial surface classes in general while agriculture area classes, and forest and semi-natural area classes were segregated successfully with SVM. Furthermore, a comparative analysis indicates that the conventional classification method yielded better accuracy results than the SMP method overall with both classifiers used, ML and SVM.

  17. Coherent gradient sensing method and system for measuring surface curvature

    NASA Technical Reports Server (NTRS)

    Rosakis, Ares J. (Inventor); Moore, Jr., Nicholas R. (Inventor); Singh, Ramen P. (Inventor); Kolawa, Elizabeth (Inventor)

    2000-01-01

    A system and method for determining a curvature of a specularly reflective surface based on optical interference. Two optical gratings are used to produce a spatial displacement in an interference field of two different diffraction components produced by one grating from different diffraction components produced by another grating. Thus, the curvature of the surface can be determined.

  18. Development and evaluation of a culture-independent method for source determination of fecal wastes in surface and storm waters using reverse transcriptase-PCR detection of FRNA coliphage genogroup gene sequences.

    EPA Science Inventory

    A complete method, incorporating recently improved reverse transcriptase-PCR primer/probe assays and including controls for determining interferences to phage recoveries from water sample concentrates and for detecting interferences to their analysis, was developed for the direct...

  19. Influences of sample interference and interference controls on quantification of enterococci fecal indicator bacteria in surface water samples by the qPCR method

    EPA Science Inventory

    A quantitative polymerase chain reaction (qPCR) method for the detection of entercocci fecal indicator bacteria has been shown to be generally applicable for the analysis of temperate fresh (Great Lakes) and marine coastal waters and for providing risk-based determinations of wat...

  20. OPTOELECTRONICS, FIBER OPTICS, AND OTHER ASPECTS OF QUANTUM ELECTRONICS: Interference-threshold storage of optical data

    NASA Astrophysics Data System (ADS)

    Efimkov, V. F.; Zubarev, I. G.; Kolobrodov, V. V.; Sobolev, V. B.

    1989-08-01

    A method for the determination of the spatial characteristics of a laser beam is proposed and implemented. This method is based on the interaction of an interference field of two laser beams, which are spatially similar to the one being investigated, with a light-sensitive material characterized by a sensitivity threshold.

  1. Development and evaluation of a culture-independent method for source determination of fecal wastes in surface and storm waters using reverse transcriptase-PCR detection of FRNA coliphage genogroup gene sequences

    EPA Science Inventory

    A complete method, incorporating recently improved reverse transcriptase-PCR primer/probe assays and including controls for determining interferences to phage recoveries from water sample concentrates and for detecting interferences to their analysis, was developed for the direct...

  2. Micro-Doppler Ambiguity Resolution for Wideband Terahertz Radar Using Intra-Pulse Interference

    PubMed Central

    Yang, Qi; Qin, Yuliang; Deng, Bin; Wang, Hongqiang; You, Peng

    2017-01-01

    Micro-Doppler, induced by the micro-motion of targets, is an important characteristic for target recognition once extracted via parameter estimation methods. However, micro-Doppler is usually significant enough to cause ambiguity in the terahertz band because of the relatively high carrier frequency. Thus, a micro-Doppler ambiguity resolution method for wideband terahertz radar using intra-pulse interference is proposed in this paper. Through intra-pulse interference processing, the micro-Doppler can be reduced by a factor of several dozen relative to its true value to avoid ambiguity. The effectiveness of this method is demonstrated by experiments based on a 0.22 THz wideband radar system, and its high estimation precision and excellent noise immunity are verified by Monte Carlo simulation. PMID:28468257

  3. Micro-Doppler Ambiguity Resolution for Wideband Terahertz Radar Using Intra-Pulse Interference.

    PubMed

    Yang, Qi; Qin, Yuliang; Deng, Bin; Wang, Hongqiang; You, Peng

    2017-04-29

    Micro-Doppler, induced by the micro-motion of targets, is an important characteristic for target recognition once extracted via parameter estimation methods. However, micro-Doppler is usually significant enough to cause ambiguity in the terahertz band because of the relatively high carrier frequency. Thus, a micro-Doppler ambiguity resolution method for wideband terahertz radar using intra-pulse interference is proposed in this paper. Through intra-pulse interference processing, the micro-Doppler can be reduced by a factor of several dozen relative to its true value to avoid ambiguity. The effectiveness of this method is demonstrated by experiments based on a 0.22 THz wideband radar system, and its high estimation precision and excellent noise immunity are verified by Monte Carlo simulation.
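
    Why the ambiguity arises at terahertz carriers can be shown directly: the micro-Doppler shift scales linearly with carrier frequency, so even slow micro-motion can exceed the unambiguous band of ±PRF/2. The velocity and PRF values below are illustrative, not from the paper:

    ```python
    # Micro-Doppler shift f_mD = 2*v*f_c/c and a simple ambiguity check
    # against the unambiguous band of +/- PRF/2. Numbers are illustrative.

    C = 3.0e8  # speed of light, m/s

    def micro_doppler_hz(v_mps, carrier_hz):
        return 2.0 * v_mps * carrier_hz / C

    def is_ambiguous(f_md_hz, prf_hz):
        return abs(f_md_hz) > prf_hz / 2.0

    f_md = micro_doppler_hz(1.0, 0.22e12)   # 1 m/s radial micro-motion at 0.22 THz
    ```

    At an assumed PRF of 2 kHz this shift of roughly 1.47 kHz already exceeds the ±1 kHz unambiguous band, which is the situation the intra-pulse interference step is designed to relieve.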

  4. Rapid assessment of urban wetlands: Do hydrogeomorphic classification and reference criteria work?

    EPA Science Inventory

    The Hydrogeomorphic (HGM) functional assessment method is predicated on the ability of a wetland classification method based on hydrology (HGM classification) and a visual assessment of disturbance and alteration to provide reference standards against which functions in individua...

  5. LiFi: transforming fibre into wireless

    NASA Astrophysics Data System (ADS)

    Yin, Liang; Islim, Mohamed Sufyan; Haas, Harald

    2017-01-01

    Light-fidelity (LiFi) uses energy-efficient light-emitting diodes (LEDs) for high-speed wireless communication, and it has great potential to be integrated with fibre communication for future gigabit networks. However, by making fibre communication wireless, multiuser interference arises. Traditional methods use orthogonal multiple access (OMA) for interference avoidance. In this paper, multiuser interference is exploited with the use of non-orthogonal multiple access (NOMA) relying on successive interference cancellation (SIC). The residual interference due to imperfect SIC in practical scenarios is characterized with a proportional model. Results show that NOMA offers a 5-10 dB gain in the equivalent signal-to-interference-plus-noise ratio (SINR) over OMA. The bit error rate (BER) performance of direct current optical orthogonal frequency division multiplexing (DCO-OFDM) is shown to be significantly improved when SIC is used.
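
    The SIC step can be illustrated with a noiseless two-user toy: the receiver decodes the higher-power signal first, subtracts it, then decodes the remaining signal from the residual. Amplitudes and BPSK symbols below are invented, not from the paper:

    ```python
    # Toy successive interference cancellation (SIC) for power-domain NOMA
    # with two superposed BPSK symbols. The receiver knows the power split.

    def sic_decode(received, high_amp):
        s_high = 1 if received >= 0 else -1        # decode the dominant symbol
        residual = received - high_amp * s_high    # cancel its contribution
        s_low = 1 if residual >= 0 else -1         # decode the remaining symbol
        return s_high, s_low

    # Superposed signal: high-power user at amplitude 2.0, low-power at 0.5.
    rx = 2.0 * 1 + 0.5 * (-1)                      # noiseless for clarity
    decoded = sic_decode(rx, 2.0)
    ```

    The paper's proportional model would leave a fraction of `high_amp * s_high` behind after the subtraction step; here the cancellation is perfect for simplicity.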

  6. An improved method for determination of refractive index of absorbing films: A simulation study

    NASA Astrophysics Data System (ADS)

    Özcan, Seçkin; Coşkun, Emre; Kocahan, Özlem; Özder, Serhat

    2017-02-01

    In this work an improved version of the method introduced by Gandhi is presented for the determination of the refractive index of absorbing films. The method uses the local maxima of consecutive interference orders in the transmittance spectrum. It is based on a minimization procedure that determines the interference order accurately using reasonable Cauchy parameters. It was tested on a theoretically generated transmittance spectrum of an absorbing film, and the details of the minimization procedure are discussed.
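
    The quantities involved can be illustrated directly: at a transmittance maximum of a film of thickness d the interference condition is 2·n(λ)·d = m·λ, with n(λ) modelled by a two-term Cauchy relation. The parameter values below are invented, not taken from the paper:

    ```python
    # Interference order at a transmittance maximum, with the refractive index
    # given by a two-term Cauchy relation n(lambda) = A + B/lambda^2.
    # Wavelengths and thickness in nanometres; A and B are illustrative.

    def cauchy_n(lam_nm, A, B):
        return A + B / lam_nm**2

    def interference_order(lam_nm, d_nm, A, B):
        # 2 * n(lambda) * d = m * lambda  =>  m = 2 * n * d / lambda
        return 2.0 * cauchy_n(lam_nm, A, B) * d_nm / lam_nm

    m = interference_order(500.0, 1000.0, 1.5, 1.0e4)
    ```

    The minimization the authors describe would, roughly, adjust the Cauchy parameters so that the orders computed at consecutive maxima come out as near-integers differing by one.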

  7. Assessment of land use and land cover change using spatiotemporal analysis of landscape: case study in south of Tehran.

    PubMed

    Sabr, Abutaleb; Moeinaddini, Mazaher; Azarnivand, Hossein; Guinot, Benjamin

    2016-12-01

    In recent years, dust storms originating from local abandoned agricultural lands have increasingly impacted Tehran and Karaj air quality. Studying land use/land cover change (LUCC) is necessary for designing and implementing mitigation plans. Land use/cover classification is particularly relevant in arid areas. This study aimed to map land use/cover by pixel- and object-based image classification methods, analyse landscape fragmentation and determine the effects of the two different classification methods on landscape metrics. The same sets of ground data were used for both classification methods. Because the accuracy of classification plays a key role in better understanding LUCC, both methods were employed. Land use/cover maps of the southwest area of Tehran city for the years 1985, 2000 and 2014 were obtained from Landsat digital images and classified into three categories: built-up, agricultural and barren lands. The results of our LUCC analysis showed that the most important changes in the built-up and agricultural land categories were observed in zone B (Shahriar, Robat Karim and Eslamshahr) between 1985 and 2014. The landscape metrics obtained for all categories indicated high landscape fragmentation in the study area. Although no significant difference was evidenced between the two classification methods, the object-based classification led to a higher overall accuracy than the pixel-based classification. In particular, the accuracy of the built-up category showed a marked increase. In addition, both methods showed similar trends in the fragmentation metrics. One reason is that the object-based classification is able to identify buildings, impervious surfaces and roads in dense urban areas, which produced more accurate maps.

  8. Efficiency of the spectral-spatial classification of hyperspectral imaging data

    NASA Astrophysics Data System (ADS)

    Borzov, S. M.; Potaturkin, O. I.

    2017-01-01

    The efficiency of methods of the spectral-spatial classification of similarly looking types of vegetation on the basis of hyperspectral data of remote sensing of the Earth, which take into account local neighborhoods of analyzed image pixels, is experimentally studied. Algorithms that involve spatial pre-processing of the raw data and post-processing of pixel-based spectral classification maps are considered. Results obtained both for a large-size hyperspectral image and for its test fragment with different methods of training set construction are reported. The classification accuracy in all cases is estimated through comparisons of ground-truth data and classification maps formed by using the compared methods. The reasons for the differences in these estimates are discussed.
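
    A common post-processing step of the kind compared in such studies is a majority (mode) filter over each pixel's neighbourhood of the per-pixel spectral classification map. This is a generic sketch, not the authors' exact algorithm; the class map is invented:

    ```python
    # Majority filter: each pixel of the classification map is replaced by the
    # most frequent class label in its (2*radius+1)^2 neighbourhood.
    from collections import Counter

    def majority_filter(class_map, radius=1):
        h, w = len(class_map), len(class_map[0])
        out = [row[:] for row in class_map]
        for r in range(h):
            for c in range(w):
                votes = Counter()
                for dr in range(-radius, radius + 1):
                    for dc in range(-radius, radius + 1):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < h and 0 <= cc < w:
                            votes[class_map[rr][cc]] += 1
                out[r][c] = votes.most_common(1)[0][0]
        return out

    # A lone misclassified pixel (class 2) surrounded by class 1 is smoothed out.
    smoothed = majority_filter([[1, 1, 1], [1, 2, 1], [1, 1, 1]])
    ```

    Spatial post-processing of this kind is what removes the salt-and-pepper noise typical of purely spectral per-pixel classification.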

  9. Driver behavior profiling: An investigation with different smartphone sensors and machine learning

    PubMed Central

    Ferreira, Jair; Carvalho, Eduardo; Ferreira, Bruno V.; de Souza, Cleidson; Suhara, Yoshihiko; Pentland, Alex

    2017-01-01

    Driver behavior impacts traffic safety, fuel/energy consumption and gas emissions. Driver behavior profiling tries to understand and positively impact driver behavior. Usually driver behavior profiling tasks involve the automated collection of driving data and the application of computer models to generate a classification that characterizes the driver aggressiveness profile. Different sensors and classification methods have been employed in this task; however, low-cost solutions and high performance are still research targets. This paper presents an investigation with different Android smartphone sensors and classification algorithms in order to assess which sensor/method assembly enables classification with higher performance. The results show that specific combinations of sensors and intelligent methods allow classification performance improvement. PMID:28394925

  10. Method for detection of dental caries and periodontal disease using optical imaging

    DOEpatents

    Nathel, Howard; Kinney, John H.; Otis, Linda L.

    1996-01-01

    A method for detecting the presence of active and inactive caries in teeth and diagnosing periodontal disease uses non-ionizing radiation with techniques for reducing interference from scattered light. A beam of non-ionizing radiation is divided into sample and reference beams. The region to be examined is illuminated by the sample beam, and reflected or transmitted radiation from the sample is recombined with the reference beam to form an interference pattern on a detector. The length of the reference beam path is adjustable, allowing the operator to select the reflected or transmitted sample photons that recombine with the reference photons. Thus radiation scattered by the dental or periodontal tissue can be prevented from obscuring the interference pattern. A series of interference patterns may be generated and interpreted to locate dental caries and periodontal tissue interfaces.

  11. Iris Image Classification Based on Hierarchical Visual Codebook.

    PubMed

    Zhenan Sun; Hui Zhang; Tieniu Tan; Jianyu Wang

    2014-06-01

    Iris recognition as a reliable method for personal identification has been well-studied with the objective to assign the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image to an application specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), coarse-to-fine iris identification (classification of all iris images in the central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely Vocabulary Tree (VT), and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantages of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as the benchmark for research of iris liveness detection.

  12. Study of the atmospheric effects on the radiation detected by the sensor aboard orbiting platforms (ERTS/LANDSAT). M.S. Thesis - October 1978; [Ribeirao Preto and Brasilia, Brazil

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Morimoto, T.

    1980-01-01

    The author has identified the following significant results. Multispectral scanner data for Brasilia was corrected for atmospheric interference using the LOWTRAN-3 computer program and the analytical solution of the radiative transfer equation. This improved the contrast between two natural targets and the corrected images of two different dates were more similar than the original ones. Corrected images of MSS data for Ribeirao Preto gave a classification accuracy for sugar cane about 10% higher as compared to the original images.

  13. Iterative cross section sequence graph for handwritten character segmentation.

    PubMed

    Dawoud, Amer

    2007-08-01

    The iterative cross section sequence graph (ICSSG) is an algorithm for handwritten character segmentation. It expands the cross section sequence graph concept by applying it iteratively at equally spaced thresholds. The iterative thresholding reduces the effect of information loss associated with image binarization. ICSSG preserves the characters' skeletal structure by preventing the interference of pixels that causes flooding of adjacent characters' segments. Improving the structural quality of the characters' skeleton facilitates better feature extraction and classification, which improves the overall performance of optical character recognition (OCR). Experimental results showed significant improvements in OCR recognition rates compared to other well-established segmentation algorithms.

  14. A comparison of reliability of soil Cd determination by standard spectrometric methods

    PubMed Central

    McBride, M.B.

    2015-01-01

    Inductively coupled plasma emission spectrometry (ICP-OES) is the most common method for determination of soil Cd, yet spectral and matrix interferences affect measurements at the available analytical wavelengths for this metal. This study evaluated the severity of the interference over a range of total soil Cd by comparing ICP-OES and ICP-MS measurements of Cd in acid digests. ICP-OES using the emission at 226.5 nm was generally unable to quantify soil Cd at low (near-background) levels, and gave unreliable values compared to ICP-MS. Using the line near 228 nm, a marked positive bias in Cd measurement (relative to the 226.5 nm measurement) was attributable to As interference even at soil As concentrations below 10 mg/kg. This spectral interference in ICP-OES was severe in As-contaminated orchard soils, giving a false value for soil total Cd near 2 mg kg−1 when soil As was 100–150 mg kg−1. In attempting to avoid these ICP emission-specific interferences, we evaluated a method to estimate total soil Cd using 1 M HNO3 extraction followed by determination of Cd by flame atomic absorption (FAA), either with or without pre-concentration of Cd using an Aliquat-heptanone extractant. The 1 M HNO3 extracted an average of 82% of total soil Cd. The FAA method had no significant interferences, and estimated the total Cd concentrations in all soils tested with acceptable accuracy. For Cd-contaminated soils, the Aliquat-heptanone pre-concentration step was not necessary, as FAA sensitivity was adequate for quantification of extractable soil Cd and reliable estimation of total soil Cd. PMID:22031569

  15. Improving Classification of Protein Interaction Articles Using Context Similarity-Based Feature Selection.

    PubMed

    Chen, Yifei; Sun, Yuxing; Han, Bing-Qing

    2015-01-01

    Protein interaction article classification is a text classification task in the biological domain to determine which articles describe protein-protein interactions. Since the feature space in text classification is high-dimensional, feature selection is widely used for reducing the dimensionality of features to speed up computation without sacrificing classification performance. Many existing feature selection methods are based on the statistical measures of document frequency and term frequency. One potential drawback of these methods is that they treat features separately. Hence, we first design a similarity measure between the context information to take word co-occurrences and phrase chunks around the features into account. We then introduce the similarity of context information into the importance measure of the features to substitute for document and term frequency. On this basis we propose new context similarity-based feature selection methods. Their performance is evaluated on two protein interaction article collections and compared against the frequency-based methods. The experimental results reveal that the context similarity-based methods perform better in terms of the F1 measure and the dimension reduction rate. Benefiting from the context information surrounding the features, the proposed methods can select distinctive features effectively for protein interaction article classification.
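
    The core idea can be reduced to comparing sparse context co-occurrence vectors by cosine similarity. The words and counts below are invented, and the paper's full measure also incorporates phrase chunks; this is only a sketch of the similarity primitive:

    ```python
    # Cosine similarity between sparse context-count vectors (dicts), the kind
    # of primitive a context similarity-based feature score could build on.
    import math

    def cosine(u, v):
        """Cosine similarity of two sparse context-count vectors."""
        dot = sum(u[k] * v.get(k, 0) for k in u)
        nu = math.sqrt(sum(x * x for x in u.values()))
        nv = math.sqrt(sum(x * x for x in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    # Context counts of a hypothetical candidate feature "binds" in two
    # article sets (positive vs. negative for protein-protein interactions).
    ctx_pos = {"protein": 4, "interacts": 3}
    ctx_neg = {"protein": 1, "cell": 2}
    sim = cosine(ctx_pos, ctx_neg)
    ```

    A feature whose contexts differ sharply between the two classes (low `sim`) is a better discriminator than one whose contexts look alike in both.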

  16. Project implementation : classification of organic soils and classification of marls - training of INDOT personnel.

    DOT National Transportation Integrated Search

    2012-09-01

    This is an implementation project for the research completed as part of the following projects: SPR3005 Classification of Organic Soils : and SPR3227 Classification of Marl Soils. The methods developed for the classification of both soi...

  17. Prostate segmentation by sparse representation based classification

    PubMed Central

    Gao, Yaozong; Liao, Shu; Shen, Dinggang

    2012-01-01

    Purpose: The segmentation of prostate in CT images is of essential importance to external beam radiotherapy, which is one of the major treatments for prostate cancer nowadays. During the radiotherapy, the prostate is radiated by high-energy x rays from different directions. In order to maximize the dose to the cancer and minimize the dose to the surrounding healthy tissues (e.g., bladder and rectum), the prostate in the new treatment image needs to be accurately localized. Therefore, the effectiveness and efficiency of external beam radiotherapy highly depend on the accurate localization of the prostate. However, due to the low contrast of the prostate with its surrounding tissues (e.g., bladder), the unpredicted prostate motion, and the large appearance variations across different treatment days, it is challenging to segment the prostate in CT images. In this paper, the authors present a novel classification based segmentation method to address these problems. Methods: To segment the prostate, the proposed method first uses sparse representation based classification (SRC) to enhance the prostate in CT images by pixel-wise classification, in order to overcome the limitation of poor contrast of the prostate images. Then, based on the classification results, previous segmented prostates of the same patient are used as patient-specific atlases to align onto the current treatment image and the majority voting strategy is finally adopted to segment the prostate. In order to address the limitations of the traditional SRC in pixel-wise classification, especially for the purpose of segmentation, the authors extend SRC from the following four aspects: (1) A discriminant subdictionary learning method is proposed to learn a discriminant and compact representation of training samples for each class so that the discriminant power of SRC can be increased and also SRC can be applied to the large-scale pixel-wise classification. 
(2) The L1 regularized sparse coding is replaced by the elastic net in order to obtain a smooth and clear prostate boundary in the classification result. (3) Residue-based linear regression is incorporated to improve the classification performance and to extend SRC from hard classification to soft classification. (4) Iterative SRC is proposed by using context information to iteratively refine the classification results. Results: The proposed method has been comprehensively evaluated on a dataset consisting of 330 CT images from 24 patients. The effectiveness of the extended SRC has been validated by comparing it with the traditional SRC based on the proposed four extensions. The experimental results show that our extended SRC can obtain not only more accurate classification results but also smoother and clearer prostate boundary than the traditional SRC. Besides, the comparison with other five state-of-the-art prostate segmentation methods indicates that our method can achieve better performance than other methods under comparison. Conclusions: The authors have proposed a novel prostate segmentation method based on the sparse representation based classification, which can achieve considerably accurate segmentation results in CT prostate segmentation. PMID:23039673
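
    The residual rule at the heart of SRC can be illustrated in stripped-down form: assign the sample to the class whose dictionary reconstructs it with the smallest residual. Real SRC solves a sparse coding problem over many atoms per class; this toy keeps a single atom per class, and the feature vectors are invented:

    ```python
    # Single-atom-per-class illustration of the SRC residual rule.

    def residual(sample, atom):
        """Least-squares residual of fitting one atom: a = <s,d>/<d,d>."""
        a = sum(s * d for s, d in zip(sample, atom)) / sum(d * d for d in atom)
        return sum((s - a * d) ** 2 for s, d in zip(sample, atom))

    def classify(sample, class_atoms):
        """Pick the class whose atom reconstructs the sample best."""
        return min(class_atoms, key=lambda c: residual(sample, class_atoms[c]))

    atoms = {"prostate": [1.0, 1.0, 0.0], "background": [0.0, 0.0, 1.0]}
    label = classify([0.9, 1.1, 0.1], atoms)
    ```

    The paper's extensions (discriminant subdictionaries, elastic net, residue-based regression, iterative refinement) all enrich how the coding and residual steps above are computed.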

  18. Cupping artifact correction and automated classification for high-resolution dedicated breast CT images

    PubMed Central

    Yang, Xiaofeng; Wu, Shengyong; Sechopoulos, Ioannis; Fei, Baowei

    2012-01-01

    Purpose: To develop and test an automated algorithm to classify the different tissues present in dedicated breast CT images. Methods: The original CT images are first corrected to overcome cupping artifacts, and then a multiscale bilateral filter is used to reduce noise while keeping edge information on the images. As skin and glandular tissues have similar CT values on breast CT images, morphologic processing is used to identify the skin mask based on its position information. A modified fuzzy C-means (FCM) classification method is then used to classify breast tissue as fat and glandular tissue. By combining the results of the skin mask with the FCM, the breast tissue is classified as skin, fat, and glandular tissue. To evaluate the authors’ classification method, the authors use Dice overlap ratios to compare the results of the automated classification to those obtained by manual segmentation on eight patient images. Results: The correction method was able to correct the cupping artifacts and improve the quality of the breast CT images. For glandular tissue, the overlap ratios between the authors’ automatic classification and manual segmentation were 91.6% ± 2.0%. Conclusions: A cupping artifact correction method and an automatic classification method were applied and evaluated for high-resolution dedicated breast CT images. Breast tissue classification can provide quantitative measurements regarding breast composition, density, and tissue distribution. PMID:23039675
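
    The evaluation relies on the Dice overlap ratio between automatic and manual segmentations, 2·|A∩B| / (|A| + |B|); a minimal sketch over sets of pixel coordinates (the coordinates are invented for the example):

    ```python
    # Dice overlap ratio between two segmentations given as coordinate sets.

    def dice(a, b):
        a, b = set(a), set(b)
        if not a and not b:
            return 1.0  # two empty segmentations agree perfectly
        return 2.0 * len(a & b) / (len(a) + len(b))

    auto = {(0, 0), (0, 1), (1, 0)}      # automatic classification result
    manual = {(0, 0), (0, 1), (1, 1)}    # manual segmentation
    overlap = dice(auto, manual)
    ```

    Here two of three pixels agree in each set, giving an overlap of 2/3; the paper reports ratios above 90% for glandular tissue.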

  19. Developmental trends in the interaction between auditory and linguistic processing.

    PubMed

    Jerger, S; Pirozzolo, F; Jerger, J; Elizondo, R; Desai, S; Wright, E; Reynosa, R

    1993-09-01

    The developmental course of multidimensional speech processing was examined in 80 children between 3 and 6 years of age and in 60 adults between 20 and 86 years of age. Processing interactions were assessed with a speeded classification task (Garner, 1974a), which required the subjects to attend selectively to the voice dimension while ignoring the linguistic dimension, and vice versa. The children and adults exhibited both similarities and differences in the patterns of processing dependencies. For all ages, performance for each dimension was slower in the presence of variation in the irrelevant dimension; irrelevant variation in the voice dimension disrupted performance more than irrelevant variation in the linguistic dimension. Trends in the degree of interference, on the other hand, showed significant differences between dimensions as a function of age. Whereas the degree of interference for the voice-dimension-relevant condition did not show significant age-related change, the degree of interference for the word-dimension-relevant condition declined significantly with age in a linear as well as a quadratic manner. A major age-related change in the relation between dimensions was that word processing, relative to voice-gender processing, required significantly more time in the children than in the adults. Overall, the developmental course characterizing multidimensional speech processing evidenced more pronounced change when the linguistic dimension, rather than the voice dimension, was relevant.

  20. Does (131)I Radioactivity Interfere with Thyroglobulin Measurement in Patients Undergoing Radioactive Iodine Therapy with Recombinant Human TSH?

    PubMed

    Park, Sohyun; Bang, Ji-In; Lee, Ho-Young; Kim, Sang-Eun

    2015-06-01

    Recombinant human thyroid-stimulating hormone (rhTSH) is widely used in radioactive iodine therapy (RIT) to avoid side effects caused by hypothyroidism during the therapy. Owing to RIT with rhTSH, serum thyroglobulin (Tg) is measured with high (131)I concentrations. It is of concern that the relatively high energy of (131)I could interfere with Tg measurement using the immunoradiometric assay (IRMA). We investigated the effect of (131)I administration on Tg measurement with IRMA after RIT. A total of 67 patients with thyroid cancer were analysed retrospectively. All patients had undergone rhTSH stimulation for RIT. The patients' sera were sampled 2 days after (131)I administration and divided into two portions: for Tg measurements on days 2 and 32 after (131)I administration. The count per minute (CPM) of whole serum (200 μl) was also measured at each time point. Student's paired t-test and Pearson's correlation analyses were performed for statistical analysis. Serum Tg levels were significantly concordant between days 2 and 32, irrespective of the serum CPM. Subgroup analysis was performed by classification based on the (131)I dose. No difference was noted between the results of the two groups. IRMA using (125)I did not show interference from (131)I in the serum of patients stimulated by rhTSH.

  1. Multi-Temporal Classification and Change Detection Using Uav Images

    NASA Astrophysics Data System (ADS)

    Makuti, S.; Nex, F.; Yang, M. Y.

    2018-05-01

    In this paper, different methodologies for the classification and change detection of UAV image blocks are explored. The UAV is not only the cheapest platform for image acquisition but also the easiest platform to operate in repeated data collections over a changing area like a building construction site. Two change detection techniques have been evaluated in this study: the pre-classification and the post-classification algorithms. These methods are based on three main steps: feature extraction, classification and change detection. A set of state-of-the-art features has been used in the tests: colour features (HSV), textural features (GLCM) and 3D geometric features. For classification purposes a Conditional Random Field (CRF) has been used: the unary potential was determined using the Random Forest algorithm while the pairwise potential was defined by the fully connected CRF. In the performed tests, different feature configurations and settings have been considered to assess the performance of these methods in such a challenging task. Experimental results showed that the post-classification approach outperforms the pre-classification change detection method: in terms of overall accuracy, post-classification reached up to 62.6 % while pre-classification change detection reached 46.5 %. These results represent a first useful indication for future works and developments.

  2. Lenke and King classification systems for adolescent idiopathic scoliosis: interobserver agreement and postoperative results

    PubMed Central

    Hosseinpour-Feizi, Hojjat; Soleimanpour, Jafar; Sales, Jafar Ganjpour; Arzroumchilar, Ali

    2011-01-01

    Purpose The aim of this study was to investigate the interobserver agreement of the Lenke and King classifications for adolescent idiopathic scoliosis, and to compare the results of surgery performed based on classification of the scoliosis according to each of these classification systems. Methods The study was conducted in Shohada Hospital in Tabriz, Iran, between 2009 and 2010. First, a reliability assessment was undertaken to assess interobserver agreement of the Lenke and King classifications for adolescent idiopathic scoliosis. Second, postoperative efficacy and safety of surgery performed based on the Lenke and King classifications were compared. Kappa coefficients of agreement were calculated to assess the agreement. Outcomes were compared using bivariate tests and repeated measures analysis of variance. Results A low to moderate interobserver agreement was observed for the King classification; the Lenke classification yielded mostly high agreement coefficients. The outcome of surgery was not found to be substantially different between the two systems. Conclusion Based on the results, the Lenke classification method seems advantageous. This takes into consideration the Lenke classification’s priority in providing details of curvatures in different anatomical surfaces to explain precise intensity of scoliosis, that it has higher interobserver agreement scores, and also that it leads to noninferior postoperative results compared with the King classification method. PMID:22267934
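    Interobserver agreement of this kind is quantified with Cohen's kappa, as the study reports. A minimal computation is shown below; the curve-type labels from two hypothetical observers are invented for illustration:

```python
def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical labels: observed agreement
    corrected for the agreement expected by chance."""
    n = len(rater_a)
    cats = sorted(set(rater_a) | set(rater_b))
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n        # observed
    pe = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)      # chance
             for c in cats)
    return (po - pe) / (1 - pe)

# hypothetical Lenke-style curve classifications from two observers
a = ["1A", "1A", "2B", "3C", "1A", "2B"]
b = ["1A", "2B", "2B", "3C", "1A", "2B"]
kappa = cohen_kappa(a, b)
```

    Here the two observers agree on 5 of 6 cases, giving kappa = 17/23 (about 0.74), which would conventionally be read as substantial agreement.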

  3. Hyperspectral Image Classification via Multitask Joint Sparse Representation and Stepwise MRF Optimization.

    PubMed

    Yuan, Yuan; Lin, Jianzhe; Wang, Qi

    2016-12-01

    Hyperspectral image (HSI) classification is a crucial issue in remote sensing. Accurate classification benefits a large number of applications such as land use analysis and marine resource utilization. But high data correlation brings difficulty to reliable classification, especially for HSI with abundant spectral information. Furthermore, traditional methods often fail to properly consider the spatial coherency of HSI, which also limits classification performance. To address these inherent obstacles, a novel spectral-spatial classification scheme is proposed in this paper. The proposed method mainly builds on multitask joint sparse representation (MJSR) and a stepwise Markov random field (MRF) framework, which constitute the two main contributions of this work. First, the MJSR not only reduces spectral redundancy, but also retains the necessary correlation in the spectral domain during classification. Second, the stepwise optimization further exploits the spatial correlation, which significantly enhances classification accuracy and robustness. As far as several universal quality evaluation indexes are concerned, the experimental results on the Indian Pines and Pavia University datasets demonstrate the superiority of our method compared with state-of-the-art competitors.

  4. Classification of weld defect based on information fusion technology for radiographic testing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Hongquan; Liang, Zeming, E-mail: heavenlzm@126.com; Gao, Jianmin

    Improving the efficiency and accuracy of weld defect classification is an important technical problem in developing the radiographic testing system. This paper proposes a novel weld defect classification method based on information fusion technology, Dempster-Shafer evidence theory. First, to characterize weld defects and improve the accuracy of their classification, 11 weld defect features were defined based on the sub-pixel level edges of radiographic images, four of which are presented for the first time in this paper. Second, we applied information fusion technology to combine different features for weld defect classification, including a mass function defined based on the weld defect feature information and the quartile-method-based calculation of standard weld defect classes, which addresses the problem of a limited number of training samples. A steam turbine weld defect classification case study is also presented herein to illustrate our technique. The results show that the proposed method can increase the correct classification rate with limited training samples and address the uncertainties associated with weld defect classification.

  5. Classification of weld defect based on information fusion technology for radiographic testing system.

    PubMed

    Jiang, Hongquan; Liang, Zeming; Gao, Jianmin; Dang, Changying

    2016-03-01

    Improving the efficiency and accuracy of weld defect classification is an important technical problem in developing the radiographic testing system. This paper proposes a novel weld defect classification method based on information fusion technology, Dempster-Shafer evidence theory. First, to characterize weld defects and improve the accuracy of their classification, 11 weld defect features were defined based on the sub-pixel level edges of radiographic images, four of which are presented for the first time in this paper. Second, we applied information fusion technology to combine different features for weld defect classification, including a mass function defined based on the weld defect feature information and the quartile-method-based calculation of standard weld defect classes, which addresses the problem of a limited number of training samples. A steam turbine weld defect classification case study is also presented herein to illustrate our technique. The results show that the proposed method can increase the correct classification rate with limited training samples and address the uncertainties associated with weld defect classification.
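    The fusion step rests on Dempster's rule of combination, which the abstract names. A generic sketch follows; the defect classes and mass values are invented for illustration and are not the paper's actual mass functions:

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions over frozenset focal elements:
    multiply masses of intersecting sets, discard conflicting pairs, and
    renormalize by 1 minus the total conflict."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    k = 1.0 - conflict  # normalization factor
    return {s: w / k for s, w in combined.items()}

CRACK, PORE = frozenset({"crack"}), frozenset({"pore"})
EITHER = CRACK | PORE  # ignorance: the evidence cannot decide between them
m_shape = {CRACK: 0.6, EITHER: 0.4}              # evidence from a shape feature
m_gray = {CRACK: 0.5, PORE: 0.3, EITHER: 0.2}    # evidence from a gray-level feature
fused = dempster_combine(m_shape, m_gray)
```

    The fused belief concentrates on the class both features support: here about 76% of the mass lands on "crack" after the conflicting 0.18 of mass is normalized away.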

  6. Self-limiting filters for band-selective interferer rejection or cognitive receiver protection

    DOEpatents

    Nordquist, Christopher; Scott, Sean Michael; Custer, Joyce Olsen; Leonhardt, Darin; Jordan, Tyler Scott; Rodenbeck, Christopher T.; Clem, Paul G.; Hunker, Jeff; Wolfley, Steven L.

    2017-03-07

    The present invention relates to self-limiting filters, arrays of such filters, and methods thereof. In particular embodiments, the filters include a metal transition film (e.g., a VO.sub.2 film) capable of undergoing a phase transition that modifies the film's resistivity. Arrays of such filters could allow for band-selective interferer rejection, while permitting transmission of non-interferer signals.

  7. Prevalence of rheumatoid arthritis in persons 60 years of age and older in the United States: effect of different methods of case classification.

    PubMed

    Rasch, Elizabeth K; Hirsch, Rosemarie; Paulose-Ram, Ryne; Hochberg, Marc C

    2003-04-01

    To determine prevalence estimates for rheumatoid arthritis (RA) in noninstitutionalized older adults in the US. Prevalence estimates were compared using 3 different classification methods based on current classification criteria for RA. Data from the Third National Health and Nutrition Examination Survey (NHANES-III) were used to generate prevalence estimates by 3 classification methods in persons 60 years of age and older (n = 5,302). Method 1 applied the "n of k" rule, such that subjects who met 3 of 6 of the American College of Rheumatology (ACR) 1987 criteria were classified as having RA (data from hand radiographs were not available). In method 2, the ACR classification tree algorithm was applied. For method 3, medication data were used to augment case identification via method 2. Population prevalence estimates and 95% confidence intervals (95% CIs) were determined using the 3 methods on data stratified by sex, race/ethnicity, age, and education. Overall prevalence estimates using the 3 classification methods were 2.03% (95% CI 1.30-2.76), 2.15% (95% CI 1.43-2.87), and 2.34% (95% CI 1.66-3.02), respectively. The prevalence of RA was generally greater in the following groups: women, Mexican Americans, respondents with less education, and respondents who were 70 years of age and older. The prevalence of RA in persons 60 years of age and older is approximately 2%, representing the proportion of the US elderly population who will most likely require medical intervention because of disease activity. Different classification methods yielded similar prevalence estimates, although detection of RA was enhanced by incorporation of data on use of prescription medications, an important consideration in large population surveys.
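    Method 1's "n of k" rule is straightforward to state in code. The criterion list below is a hypothetical subject record, not NHANES data, and only the counting rule itself comes from the abstract:

```python
def n_of_k(criteria_met, n=3):
    """Method 1's 'n of k' rule: classify as RA when at least n of the
    recorded ACR criteria are satisfied (here 3 of the 6 available,
    since hand-radiograph data were not collected)."""
    return sum(criteria_met) >= n

# hypothetical subject: True where an ACR 1987 criterion is satisfied
subject = [True, True, False, True, False, False]
result = n_of_k(subject)  # 3 of 6 criteria met, so classified as RA
```

    The tree-based methods 2 and 3 replace this flat count with the ACR classification tree and, in method 3, an additional medication-based augmentation.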

  8. The Real-Time Wall Interference Correction System of the NASA Ames 12-Foot Pressure Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert

    1998-01-01

    An improved version of the Wall Signature Method was developed to compute wall interference effects in three-dimensional subsonic wind tunnel testing of aircraft models in real-time. The method may be applied to a full-span or a semispan model. A simplified singularity representation of the aircraft model is used. Fuselage, support system, propulsion simulator, and separation wake volume blockage effects are represented by point sources and sinks. Lifting effects are represented by semi-infinite line doublets. The singularity representation of the test article is combined with the measurement of wind tunnel test reference conditions, wall pressure, lift force, thrust force, pitching moment, rolling moment, and pre-computed solutions of the subsonic potential equation to determine first order wall interference corrections. Second order wall interference corrections for pitching and rolling moment coefficient are also determined. A new procedure is presented that estimates a rolling moment coefficient correction for wings with non-symmetric lift distribution. Experimental data obtained during the calibration of the Ames Bipod model support system and during tests of two semispan models mounted on an image plane in the NASA Ames 12 ft. Pressure Wind Tunnel are used to demonstrate the application of the wall interference correction method.

  9. Characterization of Escherichia coli isolates from different fecal sources by means of classification tree analysis of fatty acid methyl ester (FAME) profiles.

    PubMed

    Seurinck, Sylvie; Deschepper, Ellen; Deboch, Bishaw; Verstraete, Willy; Siciliano, Steven

    2006-03-01

    Microbial source tracking (MST) methods need to be rapid, inexpensive and accurate. Unfortunately, many MST methods provide a wealth of information that is difficult to interpret by the regulators who use this information to make decisions. This paper describes the use of classification tree analysis to interpret the results of an MST method based on fatty acid methyl ester (FAME) profiles of Escherichia coli isolates, and to present results in a format readily interpretable by water quality managers. Raw sewage E. coli isolates and animal E. coli isolates from cow, dog, gull, and horse were isolated and their FAME profiles collected. Correct classification rates determined with leave-one-out cross-validation were low overall at 61%. A higher overall correct classification rate of 85% was obtained when the animal isolates were pooled together and compared to the raw sewage isolates. Bootstrap aggregation, or adaptive resampling and combining, of the FAME profile data increased correct classification rates substantially. Other MST methods may be better suited to differentiating between fecal sources, but classification tree analysis has enabled us to distinguish raw sewage from animal E. coli isolates, which previously had not been possible with other multivariate methods such as principal component analysis and cluster analysis.
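    The bootstrap-aggregation step that raised the classification rates can be sketched generically: resample the training set with replacement, fit a base learner per replicate, and combine predictions by majority vote. The base learner below is a nearest-centroid rule on a single made-up feature, a deliberate simplification of classification trees on full FAME profiles:

```python
import random
from collections import Counter

def nearest_centroid_fit(X, y):
    """Tiny base learner: per-class mean of a 1-D feature (a stand-in
    for the paper's classification trees on FAME profiles)."""
    cents = {}
    for c in set(y):
        pts = [x for x, lab in zip(X, y) if lab == c]
        cents[c] = sum(pts) / len(pts)
    return cents

def bagged_predict(X, y, x_new, n_boot=25, seed=1):
    """Bootstrap aggregation: fit the base learner on each bootstrap
    replicate and take a majority vote over the replicates."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_boot):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        cents = nearest_centroid_fit([X[i] for i in idx], [y[i] for i in idx])
        votes.append(min(cents, key=lambda c: abs(cents[c] - x_new)))
    return Counter(votes).most_common(1)[0][0]

X = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]   # toy 1-D "FAME" feature
y = ["sewage", "sewage", "sewage", "animal", "animal", "animal"]
```

    Averaging over bootstrap replicates stabilizes a high-variance base learner, which is why adaptive resampling and combining helped the tree-based analysis here.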

  10. Cascade classification of endocytoscopic images of colorectal lesions for automated pathological diagnosis

    NASA Astrophysics Data System (ADS)

    Itoh, Hayato; Mori, Yuichi; Misawa, Masashi; Oda, Masahiro; Kudo, Shin-ei; Mori, Kensaku

    2018-02-01

    This paper presents a new classification method for endocytoscopic images. Endocytoscopy is a new endoscopic technique that enables both conventional endoscopic observation and ultramagnified observation at the cellular level. These ultramagnified views (endocytoscopic images) make it possible to perform pathological diagnosis solely from endoscopic views of polyps during colonoscopy. However, endocytoscopic image diagnosis demands extensive experience from physicians. An automated pathological diagnosis system is required to prevent the overlooking of neoplastic lesions in endocytoscopy. For this purpose, we propose a new automated endocytoscopic image classification method that distinguishes neoplastic from non-neoplastic endocytoscopic images. The method consists of two classification steps. In the first step, we classify an input image with a support vector machine. We forward the image to the second step if the confidence of the first classification is low. In the second step, we classify the forwarded image with a convolutional neural network. We reject the input image if the confidence of the second classification is also low. We experimentally evaluate the classification performance of the proposed method. In this experiment, we use about 16,000 and 4,000 colorectal endocytoscopic images as training and test data, respectively. The results show that the proposed method achieves a high sensitivity of 93.4% with a small rejection rate of 9.3%, even on difficult test data.
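    The two-step cascade with rejection reduces to a simple control flow. The two stages below are hypothetical stand-ins returning (label, confidence) pairs (the paper uses an SVM and a CNN), and the thresholds are illustrative, not the authors' values:

```python
def cascade_classify(x, first, second, thr1=0.8, thr2=0.8):
    """Two-step cascade with rejection: accept the first classifier's label
    when confident, otherwise forward to the second classifier, and reject
    when both are unsure. `first` and `second` are any callables that
    return a (label, confidence) pair."""
    label, conf = first(x)
    if conf >= thr1:
        return label
    label, conf = second(x)
    return label if conf >= thr2 else "reject"

# hypothetical stand-ins for the SVM and CNN stages, keyed on a toy score x
svm_stage = lambda x: ("neoplastic", 0.95) if x > 0.7 else ("non-neoplastic", 0.5)
cnn_stage = lambda x: ("non-neoplastic", 0.9) if x < 0.3 else ("neoplastic", 0.6)
```

    Easy cases are settled cheaply by the first stage; only ambiguous images pay the cost of the second stage, and images both stages are unsure about are deferred to a physician.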

  11. A Two-Layer Method for Sedentary Behaviors Classification Using Smartphone and Bluetooth Beacons.

    PubMed

    Cerón, Jesús D; López, Diego M; Hofmann, Christian

    2017-01-01

    Among the factors that shape the health of populations, a person's lifestyle is the most important. This work focuses on the characterization and prevention of sedentary lifestyles. A sedentary behavior is defined as "any waking behavior characterized by an energy expenditure of 1.5 METs (Metabolic Equivalent) or less while in a sitting or reclining posture". The objective is to propose a method for sedentary behavior classification using a smartphone and Bluetooth beacons, considering different types of classification models: personal, hybrid, or impersonal. Following the CRISP-DM methodology, a method based on a two-layer approach for the classification of sedentary behaviors is proposed. Using data collected from a smartphone's accelerometer, gyroscope, and barometer, the first layer classifies whether or not a sedentary behavior is being performed. The second layer classifies the specific sedentary activity performed using only the smartphone's accelerometer and barometer data, augmented with indoor location data from Bluetooth Low Energy (BLE) beacons. To improve classification precision, both layers implemented the Random Forest algorithm and the personal model. This study presents the first available method for the automatic classification of specific sedentary behaviors. The layered classification approach has the potential to reduce the processing, memory, and energy consumption of the mobile devices and wearables used.

  12. A method of assigning socio-economic status classification to British Armed Forces personnel.

    PubMed

    Yoong, S Y; Miles, D; McKinney, P A; Smith, I J; Spencer, N J

    1999-10-01

    The objective of this paper was to develop and evaluate a socio-economic status classification method for British Armed Forces personnel. Two study groups comprising civilian and Armed Forces families were identified from livebirths delivered between 1 January and 30 June 1996 within the Northallerton Health district, which includes Catterick Garrison and RAF Leeming. The participants were the parents of babies delivered at a District General Hospital, comprising 436 civilian and 162 Armed Forces families. A new classification method was successfully used to assign the Registrar General's social classification to Armed Forces personnel. Comparison of the two study groups showed a significant difference in social class distribution (p = 0.0001). This study has devised a new method for classifying occupations within the Armed Forces into categories of social class, thus permitting comparison with the Registrar General's classification.

  13. Landcover Classification Using Deep Fully Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Wang, J.; Li, X.; Zhou, S.; Tang, J.

    2017-12-01

    Land cover classification has always been an essential application in remote sensing. Certain image features are needed for land cover classification, whether it is based on pixel- or object-based methods. Unlike other machine learning methods, a deep learning model not only extracts useful information from multiple bands/attributes but also learns spatial characteristics. In recent years, deep learning methods have developed rapidly and been widely applied in image recognition, semantic understanding, and other application domains. However, there are limited studies applying deep learning methods to land cover classification. In this research, we used fully convolutional networks (FCN) as the deep learning model to classify land covers. The National Land Cover Database (NLCD) within the state of Kansas was used as the training dataset, and Landsat images were classified using the trained FCN model. We also applied an image segmentation method to improve the original results from the FCN model. In addition, the pros and cons of deep learning versus several machine learning methods were compared and explored. Our research indicates: (1) FCN is an effective classification model with an overall accuracy of 75%; (2) image segmentation improves the classification results with a better match of spatial patterns; (3) FCN has an excellent learning ability, attaining higher accuracy and better spatial patterns than several machine learning methods.

  14. A method for classification of multisource data using interval-valued probabilities and its application to HIRIS data

    NASA Technical Reports Server (NTRS)

    Kim, H.; Swain, P. H.

    1991-01-01

    A method of classifying multisource data in remote sensing is presented. The proposed method considers each data source as an information source providing a body of evidence, represents statistical evidence by interval-valued probabilities, and uses Dempster's rule to integrate information based on multiple data sources. The method is applied to the problems of ground-cover classification of multispectral data combined with digital terrain data such as elevation, slope, and aspect. The method is then applied to simulated 201-band High Resolution Imaging Spectrometer (HIRIS) data by dividing the dimensionally huge data source into smaller and more manageable pieces based on global statistical correlation information. It produces higher classification accuracy than the Maximum Likelihood (ML) classification method when the Hughes phenomenon is apparent.

  15. Advances in variable selection methods II: Effect of variable selection method on classification of hydrologically similar watersheds in three Mid-Atlantic ecoregions

    EPA Science Inventory

    Hydrological flow predictions in ungauged and sparsely gauged watersheds use regionalization or classification of hydrologically similar watersheds to develop empirical relationships between hydrologic, climatic, and watershed variables. The watershed classifications may be based...

  16. A New Direction of Cancer Classification: Positive Effect of Low-Ranking MicroRNAs.

    PubMed

    Li, Feifei; Piao, Minghao; Piao, Yongjun; Li, Meijing; Ryu, Keun Ho

    2014-10-01

    Many studies based on microRNA (miRNA) expression profiles have shown a new aspect of cancer classification. Because one characteristic of miRNA expression data is high dimensionality, feature selection methods have been used to facilitate dimensionality reduction. These feature selection methods share one shortcoming: they consider only the case where the feature-to-class relation is 1:1 or n:1. However, because one miRNA may influence more than one type of cancer, such miRNAs tend to be ranked low by traditional feature selection methods and are usually removed. Given the limited number of miRNAs, low-ranking miRNAs are also important to cancer classification. We considered both high- and low-ranking features to cover all cases (1:1, n:1, 1:n, and m:n) in cancer classification. First, we used the correlation-based feature selection method to select the high-ranking miRNAs, and chose support vector machine, Bayes network, decision tree, k-nearest-neighbor, and logistic classifiers to construct cancer classification models. Then, we chose the Chi-square test, information gain, gain ratio, and Pearson's correlation feature selection methods to build the m:n feature subset, and used the selected miRNAs to perform cancer classification. The low-ranking miRNA expression profiles achieved higher classification accuracy compared with using only the high-ranking miRNAs from traditional feature selection methods. Our results demonstrate that the m:n feature subset reveals the positive effect of low-ranking miRNAs in cancer classification.
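    Filter-style ranking such as the Chi-square test named above scores each feature's association with the class labels. A small stand-alone version on made-up binarized expression values (the miRNA values and tumor labels are invented for illustration):

```python
def chi_square_score(feature, labels):
    """Chi-square statistic between a discrete feature (e.g. a miRNA
    binarized as over/under-expressed) and class labels; a higher score
    means a stronger feature-class association."""
    n = len(labels)
    fvals, cvals = sorted(set(feature)), sorted(set(labels))
    score = 0.0
    for f in fvals:
        for c in cvals:
            obs = sum(1 for x, y in zip(feature, labels) if x == f and y == c)
            exp = (feature.count(f) * labels.count(c)) / n  # expected if independent
            score += (obs - exp) ** 2 / exp
    return score

feat_informative = [1, 1, 1, 0, 0, 0]   # perfectly tracks the class
feat_noise = [1, 0, 1, 0, 1, 0]         # independent of the class
classes = ["tumorA"] * 3 + ["tumorB"] * 3
```

    Ranking by this score is exactly where m:n miRNAs lose out: a miRNA whose effect is spread over several cancer types scores low against any single class split, which motivates the paper's separate m:n feature subset.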

  17. Integration of Chinese medicine with Western medicine could lead to future medicine: molecular module medicine.

    PubMed

    Zhang, Chi; Zhang, Ge; Chen, Ke-ji; Lu, Ai-ping

    2016-04-01

    The development of an effective classification method for human health conditions is essential for precise diagnosis and delivery of tailored therapy to individuals. The contemporary disease classification system has properties that limit its information content and usability. Chinese medicine pattern classification has been incorporated into disease classification, and this integrated classification method became more precise because of the increased understanding of the molecular mechanisms. However, we are still facing the complexity of diseases and patterns in the classification of health conditions. With continuing advances in omics methodologies and instrumentation, we propose a new classification approach: molecular module classification, which applies molecular modules to classifying human health status. This initiative would precisely define health status, provide accurate diagnoses, optimize therapeutics, and improve new drug discovery strategies. There would then be no current disease diagnosis and no disease pattern classification; in the future, a new medicine based on this classification, molecular module medicine, could redefine health statuses and reshape clinical practice.

  18. Classification methods to detect sleep apnea in adults based on respiratory and oximetry signals: a systematic review.

    PubMed

    Uddin, M B; Chow, C M; Su, S W

    2018-03-26

    Sleep apnea (SA), a common sleep disorder, can significantly decrease the quality of life, and is closely associated with major health risks such as cardiovascular disease, sudden death, depression, and hypertension. The normal diagnostic process of SA using polysomnography is costly and time consuming. In addition, the accuracy of different classification methods to detect SA varies with the use of different physiological signals. If an effective, reliable, and accurate classification method is developed, then the diagnosis of SA and its associated treatment will be time-efficient and economical. This study aims to systematically review the literature and present an overview of classification methods to detect SA using respiratory and oximetry signals and address the automated detection approach. Sixty-two included studies revealed the application of single and multiple signals (respiratory and oximetry) for the diagnosis of SA. Both airflow and oxygen saturation signals alone were effective in detecting SA in the case of binary decision-making, whereas multiple signals were good for multi-class detection. In addition, some machine learning methods were superior to the other classification methods for SA detection using respiratory and oximetry signals. To deal with the respiratory and oximetry signals, a good choice of classification method as well as the consideration of associated factors would result in high accuracy in the detection of SA. An accurate classification method should provide a high detection rate with an automated (independent of human action) analysis of respiratory and oximetry signals. Future high-quality automated studies using large samples of data from multiple patient groups or record batches are recommended.

  19. A Novel Hybrid Classification Model of Genetic Algorithms, Modified k-Nearest Neighbor and Developed Backpropagation Neural Network

    PubMed Central

    Salari, Nader; Shohaimi, Shamarina; Najafi, Farid; Nallappan, Meenakshii; Karishnarajah, Isthrinayagy

    2014-01-01

    Among numerous artificial intelligence approaches, k-Nearest Neighbor algorithms, genetic algorithms, and artificial neural networks are considered among the most common and effective methods in classification problems in numerous studies. In the present study, the results of the implementation of a novel hybrid feature selection-classification model using the above-mentioned methods are presented. The purpose is to benefit from the synergies obtained from combining these technologies for the development of classification models. Such a combination creates an opportunity to invest in the strengths of each algorithm, and is an approach to make up for their deficiencies. To develop the proposed model, with the aim of obtaining the best array of features, first, feature ranking techniques such as Fisher's discriminant ratio and class separability criteria were used to prioritize features. Second, the obtained results, which included arrays of the top-ranked features, were used as the initial population of a genetic algorithm to produce optimum arrays of features. Third, using a modified k-Nearest Neighbor method as well as an improved method of backpropagation neural networks, the classification process was advanced based on the optimum arrays of features selected by the genetic algorithm. The performance of the proposed model was compared with thirteen well-known classification models based on seven datasets. Furthermore, statistical analysis was performed using the Friedman test followed by post-hoc tests. The experimental findings indicated that the novel proposed hybrid model resulted in significantly better classification performance compared with all 13 classification methods. Finally, the performance results of the proposed model were benchmarked against the best ones reported as the state-of-the-art classifiers in terms of classification accuracy for the same data sets. The substantial findings of the comprehensive comparative study revealed that the performance of the proposed model in terms of classification accuracy is desirable, promising, and competitive with the existing state-of-the-art classification models. PMID:25419659

  20. Engineering calculations for the Delta S method of solving the orbital allotment problem

    NASA Technical Reports Server (NTRS)

    Kohnhorst, P. A.; Levis, C. A.; Walton, E. K.

    1987-01-01

    The method of calculating single-entry separation requirements for pairs of satellites is extended to include the interference on the up link as well as on the down link. Several heuristic models for analyzing the effects of shaped-beam antenna designs on required satellite separations are introduced and demonstrated with gain contour plots. The calculation of aggregate interference is extended to include the effects of up-link interference. The relationship between the single-entry C/I requirements, used in determining satellite separation constraints for various optimization procedures, and the aggregate C/I values of the resulting solutions is discussed.

  1. Speech-Message Extraction from Interference Introduced by External Distributed Sources

    NASA Astrophysics Data System (ADS)

    Kanakov, V. A.; Mironov, N. A.

    2017-08-01

    The problem of this study involves the extraction of a speech signal originating from a certain spatial point and calculation of the intelligibility of the extracted voice message. It is solved by the method of decreasing the influence of interference from the speech-message sources on the extracted signal. This method is based on introducing the time delays, which depend on the spatial coordinates, to the recording channels. Audio records of the voices of eight different people were used as test objects during the studies. It is proved that an increase in the number of microphones improves intelligibility of the speech message which is extracted from interference.
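    The coordinate-dependent time delays described above amount to delay-and-sum beamforming: advance each channel by its extra propagation delay from the chosen spatial point, then average, so a talker at that point adds coherently while interferers do not. A minimal integer-delay sketch follows; the microphone geometry, sampling rate, and signals are illustrative, not the paper's setup:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def delay_and_sum(channels, mic_positions, focus, fs):
    """Delay-and-sum over equal-length recording channels: time-shift each
    channel by its extra propagation delay from `focus` (relative to the
    nearest microphone), then average. Integer-sample delays only."""
    dists = [math.dist(p, focus) for p in mic_positions]
    ref = min(dists)
    out = [0.0] * len(channels[0])
    for ch, d in zip(channels, dists):
        shift = round((d - ref) / SPEED_OF_SOUND * fs)  # extra delay in samples
        for i in range(len(out)):
            j = i + shift  # advance the later-arriving channel
            if 0 <= j < len(ch):
                out[i] += ch[j]
    return [v / len(channels) for v in out]

# example: a click leaves `focus` and reaches the far mic two samples later
fs = 686  # Hz, chosen so 1 m of extra path is exactly 2 samples (illustrative)
mics = [(0.0, 0.0), (0.0, 1.0)]
ch1 = [1.0, 0.0, 0.0, 0.0]   # click at the near microphone
ch2 = [0.0, 0.0, 1.0, 0.0]   # same click, delayed at the far microphone
aligned = delay_and_sum([ch1, ch2], mics, focus=(0.0, 0.0), fs=fs)
```

    After alignment the click from the focus point sums coherently into a single sample, which is the mechanism by which adding microphones improves the intelligibility of the extracted message.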

  2. [Study of Cervical Exfoliated Cell's DNA Quantitative Analysis Based on Multi-Spectral Imaging Technology].

    PubMed

    Wu, Zheng; Zeng, Li-bo; Wu, Qiong-shui

    2016-02-01

Conventional cervical cancer screening methods mainly include the TBS (The Bethesda System) classification method and cellular DNA quantitative analysis. However, no study has yet achieved both methods in cervical cancer screening at the same time by using a multiple staining method on one cell slide, in which the cytoplasm is stained with Papanicolaou reagent and the nucleus with Feulgen reagent. The difficulty of this multiple staining method is that the absorbance of non-DNA material may interfere with the absorbance of DNA. We therefore set up a multi-spectral imaging system and established an absorbance unmixing model, using multiple linear regression based on the linear superposition of absorbances, to strip out the absorbance of DNA for quantitative DNA analysis, thereby combining the two conventional screening methods. A series of experiments showed no statistically significant difference, at the 1% test level, between the DNA absorbance calculated by the absorbance unmixing model and the measured DNA absorbance. In practical application, the 99% confidence interval of the DNA index of tetraploid cells screened by this method did not intersect the DNA-index interval used to judge cancer cells. The accuracy and feasibility of quantitative DNA analysis with the multiple staining method are thus verified, and this analytical method has broad application prospects and considerable market potential in the early diagnosis of cervical cancer and other cancers.
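
    Because absorbances superpose linearly (the Beer-Lambert property the model relies on), the per-pixel unmixing reduces to an ordinary multiple linear regression. A minimal sketch with invented reference spectra; in practice the reference spectra would come from calibration slides:

```python
import numpy as np

# Hypothetical stain reference spectra (absorbance vs. wavelength channel)
# for the Feulgen (DNA) and Papanicolaou (cytoplasm) stains.
wavelengths = np.linspace(400, 700, 31)
dna_ref = np.exp(-((wavelengths - 560) / 40.0) ** 2)
cyto_ref = np.exp(-((wavelengths - 470) / 60.0) ** 2)

# Measured pixel spectrum: a linear mix of the two stains plus small noise.
rng = np.random.default_rng(1)
true_dna, true_cyto = 0.8, 0.3
measured = (true_dna * dna_ref + true_cyto * cyto_ref
            + 0.01 * rng.standard_normal(31))

# Multiple linear regression: least-squares fit of the reference spectra.
A = np.column_stack([dna_ref, cyto_ref])
coef, *_ = np.linalg.lstsq(A, measured, rcond=None)
dna_absorbance = coef[0] * dna_ref   # stripped-out DNA contribution
```

    The fitted coefficient for the DNA reference spectrum is what feeds the DNA-index computation, with the cytoplasm stain's contribution regressed away.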

  3. A Full-Core Resonance Self-Shielding Method Using a Continuous-Energy Quasi–One-Dimensional Slowing-Down Solution that Accounts for Temperature-Dependent Fuel Subregions and Resonance Interference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yuxuan; Martin, William; Williams, Mark

In this paper, a correction-based resonance self-shielding method is developed that allows annular subdivision of the fuel rod. The method performs the conventional iteration of the embedded self-shielding method (ESSM) without subdivision of the fuel to capture the interpin shielding effect. The resultant self-shielded cross sections are modified by correction factors incorporating the intrapin effects of radial variation of the shielded cross section, radial temperature distribution, and resonance interference. A quasi–one-dimensional slowing-down equation is developed to calculate such correction factors. The method is implemented in the DeCART code and compared with the conventional ESSM and the subgroup method against benchmark MCNP results. The new method yields substantially improved results for both spatially dependent reaction rates and eigenvalues for typical pressurized water reactor pin cell cases with uniform and nonuniform fuel temperature profiles. Finally, the new method also proves effective in treating assembly heterogeneity and complex material compositions such as mixed oxide fuel, where resonance interference is much more intense.

  4. The Classification of Tongue Colors with Standardized Acquisition and ICC Profile Correction in Traditional Chinese Medicine

    PubMed Central

    Tu, Li-ping; Chen, Jing-bo; Hu, Xiao-juan; Zhang, Zhi-feng

    2016-01-01

Background and Goal. The application of digital image processing techniques and machine learning methods to tongue image classification in Traditional Chinese Medicine (TCM) has been widely studied. However, the outcomes are difficult to generalize because of a lack of color reproducibility and image standardization. Our study explores tongue color classification with a standardized tongue image acquisition process and color correction. Methods. Three TCM experts identify the selected tongue pictures, which are taken by the TDA-1 tongue imaging device in TIFF format and corrected with an ICC profile. We then compare the mean L*a*b* values of the different tongue colors and evaluate tongue color classification by machine learning methods. Results. The L*a*b* values of the five tongue colors are statistically different. The random forest method performs better than SVM in classification. The SMOTE algorithm increases classification accuracy by correcting the imbalance among the color samples. Conclusions. Given standardized tongue acquisition and color reproduction, preliminary objectification of tongue color classification in TCM is feasible. PMID:28050555

  5. The Classification of Tongue Colors with Standardized Acquisition and ICC Profile Correction in Traditional Chinese Medicine.

    PubMed

    Qi, Zhen; Tu, Li-Ping; Chen, Jing-Bo; Hu, Xiao-Juan; Xu, Jia-Tuo; Zhang, Zhi-Feng

    2016-01-01

Background and Goal. The application of digital image processing techniques and machine learning methods to tongue image classification in Traditional Chinese Medicine (TCM) has been widely studied. However, the outcomes are difficult to generalize because of a lack of color reproducibility and image standardization. Our study explores tongue color classification with a standardized tongue image acquisition process and color correction. Methods. Three TCM experts identify the selected tongue pictures, which are taken by the TDA-1 tongue imaging device in TIFF format and corrected with an ICC profile. We then compare the mean L*a*b* values of the different tongue colors and evaluate tongue color classification by machine learning methods. Results. The L*a*b* values of the five tongue colors are statistically different. The random forest method performs better than SVM in classification. The SMOTE algorithm increases classification accuracy by correcting the imbalance among the color samples. Conclusions. Given standardized tongue acquisition and color reproduction, preliminary objectification of tongue color classification in TCM is feasible.

  6. Obtaining tight bounds on higher-order interferences with a 5-path interferometer

    NASA Astrophysics Data System (ADS)

    Kauten, Thomas; Keil, Robert; Kaufmann, Thomas; Pressl, Benedikt; Brukner, Časlav; Weihs, Gregor

    2017-03-01

Within the established theoretical framework of quantum mechanics, interference always occurs between pairs of paths through an interferometer. Higher-order interferences with multiple constituents are excluded by Born's rule and can only exist in generalized probabilistic theories. Thus, high-precision experiments searching for such higher-order interferences are a powerful method to distinguish between quantum mechanics and more general theories. Here, we perform such a test in an optical multi-path interferometer, which avoids crucial systematic errors, has access to the entire phase space and is more stable than previous experiments. Our results are in accordance with quantum mechanics and rule out the existence of higher-order interference terms in optical interferometry to an extent that is more than four orders of magnitude smaller than the expected pairwise interference, refining previous bounds by two orders of magnitude.
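
    For reference, the higher-order interference bounded by such experiments is conventionally quantified by the Sorkin parameter; for three paths A, B and C it is (standard definition, not taken from this record):

```latex
I_{3} = P_{ABC} - P_{AB} - P_{AC} - P_{BC} + P_{A} + P_{B} + P_{C},
```

    where each P denotes the detection probability with only the indicated paths open. Born's rule implies I_3 = 0 identically, so any statistically significant nonzero value would falsify standard quantum mechanics; the experiment bounds |I_3| relative to the pairwise interference terms.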

  7. Using classification and NDVI differencing methods for monitoring sparse vegetation coverage: a case study of saltcedar in Nevada, USA.

    USDA-ARS?s Scientific Manuscript database

A change detection experiment for an invasive species, saltcedar, near Lovelock, Nevada, was conducted with multi-date Compact Airborne Spectrographic Imager (CASI) hyperspectral datasets. Classification and NDVI differencing change detection methods were tested. In the classification strategy, a p...

  8. Audio visual speech source separation via improved context dependent association model

    NASA Astrophysics Data System (ADS)

    Kazemi, Alireza; Boostani, Reza; Sobhanmanesh, Fariborz

    2014-12-01

    In this paper, we exploit the non-linear relation between a speech source and its associated lip video as a source of extra information to propose an improved audio-visual speech source separation (AVSS) algorithm. The audio-visual association is modeled using a neural associator which estimates the visual lip parameters from a temporal context of acoustic observation frames. We define an objective function based on mean square error (MSE) measure between estimated and target visual parameters. This function is minimized for estimation of the de-mixing vector/filters to separate the relevant source from linear instantaneous or time-domain convolutive mixtures. We have also proposed a hybrid criterion which uses AV coherency together with kurtosis as a non-Gaussianity measure. Experimental results are presented and compared in terms of visually relevant speech detection accuracy and output signal-to-interference ratio (SIR) of source separation. The suggested audio-visual model significantly improves relevant speech classification accuracy compared to existing GMM-based model and the proposed AVSS algorithm improves the speech separation quality compared to reference ICA- and AVSS-based methods.

  9. Integrated Droplet-Based Microextraction with ESI-MS for Removal of Matrix Interference in Single-Cell Analysis.

    PubMed

    Zhang, Xiao-Chao; Wei, Zhen-Wei; Gong, Xiao-Yun; Si, Xing-Yu; Zhao, Yao-Yao; Yang, Cheng-Dui; Zhang, Si-Chun; Zhang, Xin-Rong

    2016-04-29

Integrating droplet-based microfluidics with mass spectrometry is essential to high-throughput, multiplexed analysis of single cells. Nevertheless, matrix effects such as the interference of culture medium and intracellular components influence the sensitivity and the accuracy of results in single-cell analysis. To resolve this problem, we developed a method that integrates droplet-based microextraction with single-cell mass spectrometry. A specific extraction solvent was used to selectively obtain the intracellular components of interest and remove the interference of other components. Using this method, UDP-GlcNAc, GSH, GSSG, AMP, ADP and ATP were successfully detected in single MCF-7 cells. We also applied the method to study the change of unicellular metabolites during dysfunctional oxidative phosphorylation. The method not only realizes matrix-free, selective and sensitive detection of metabolites in single cells, but is also capable of reliable, high-throughput single-cell analysis.

  10. Compressive sensing sectional imaging for single-shot in-line self-interference incoherent holography

    NASA Astrophysics Data System (ADS)

    Weng, Jiawen; Clark, David C.; Kim, Myung K.

    2016-05-01

A numerical reconstruction method based on compressive sensing (CS) for self-interference incoherent digital holography (SIDH) is proposed to achieve sectional imaging from a single-shot in-line self-interference incoherent hologram. The sensing operator is built up from the physical mechanism of SIDH according to CS theory, and a recovery algorithm is employed for image restoration. Numerical simulation and experimental studies employing LEDs as discrete point sources and resolution targets as extended sources are performed to demonstrate the feasibility and validity of the method. The intensity distribution and the axial resolution along the propagation direction of SIDH by the angular spectrum method (ASM) and by CS are discussed. The analysis shows that, compared to the ASM, reconstruction by CS improves the axial resolution of SIDH and achieves sectional imaging. The proposed method may be useful for 3D analysis of dynamic systems.
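
    The record does not name the recovery algorithm; a common choice for CS problems of this form is iterative soft-thresholding (ISTA), sketched here with a random Gaussian matrix standing in for the SIDH sensing operator and an invented sparse scene:

```python
import numpy as np

def ista(A, b, lam=0.05, iters=500):
    """Iterative soft-thresholding for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)              # gradient of the data term
        z = x - g / L                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(0)
n, m, k = 200, 60, 5                       # scene size, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true                             # "hologram" measurements
x_hat = ista(A, b)
```

    With far fewer measurements than unknowns, the sparsity prior lets ISTA recover the scene; in SIDH the rows of A would instead encode the hologram's propagation model.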

  11. Conductive interference in rapid transit signaling systems. volume 2. suggested test procedures

    DOT National Transportation Integrated Search

    1987-05-31

    Methods for detecting and quantifying the levels of conductive electromagnetic interference produced by solid state rapid transit propulsion equipment and for determining the susceptibility of signaling systems to these emissions are presented. These...

  12. Method for detection of dental caries and periodontal disease using optical imaging

    DOEpatents

    Nathel, H.; Kinney, J.H.; Otis, L.L.

    1996-10-29

A method is disclosed for detecting the presence of active and inactive caries in teeth and for diagnosing periodontal disease; it uses non-ionizing radiation together with techniques for reducing interference from scattered light. A beam of non-ionizing radiation is divided into sample and reference beams. The region to be examined is illuminated by the sample beam, and reflected or transmitted radiation from the sample is recombined with the reference beam to form an interference pattern on a detector. The length of the reference beam path is adjustable, allowing the operator to select the reflected or transmitted sample photons that recombine with the reference photons. Thus radiation scattered by the dental or periodontal tissue can be prevented from obscuring the interference pattern. A series of interference patterns may be generated and interpreted to locate dental caries and periodontal tissue interfaces. 7 figs.

  13. Apparatus, system, and method for laser-induced breakdown spectroscopy

    DOEpatents

    Effenberger, Jr., Andrew J; Scott, Jill R; McJunkin, Timothy R

    2014-11-18

    In laser-induced breakdown spectroscopy (LIBS), an apparatus includes a pulsed laser configured to generate a pulsed laser signal toward a sample, a constructive interference object and an optical element, each located in a path of light from the sample. The constructive interference object is configured to generate constructive interference patterns of the light. The optical element is configured to disperse the light. A LIBS system includes a first and a second optical element, and a data acquisition module. The data acquisition module is configured to determine an isotope measurement based, at least in part, on light received by an image sensor from the first and second optical elements. A method for performing LIBS includes generating a pulsed laser on a sample to generate light from a plasma, generating constructive interference patterns of the light, and dispersing the light into a plurality of wavelengths.

  14. Integrating the ECG power-line interference removal methods with rule-based system.

    PubMed

    Kumaravel, N; Senthil, A; Sridhar, K S; Nithiyanandam, N

    1995-01-01

The power-line frequency interference in electrocardiographic signals is eliminated to enhance the signal characteristics for diagnosis. The power-line frequency normally varies +/- 1.5 Hz from its standard value of 50 Hz. In the present work, the performances of the linear FIR filter, the Wave digital filter (WDF) and the adaptive filter are studied for power-line frequency variations from 48.5 to 51.5 Hz in steps of 0.5 Hz. The LMS adaptive filter has a clear advantage over fixed-frequency filters in that it removes the power-line interference even when the interference frequency varies by +/- 1.5 Hz from its nominal value of 50 Hz. Novel methods of integrating a rule-based system approach with the linear FIR filter and with the Wave digital filter are proposed. The performances of the rule-based FIR filter and the rule-based Wave digital filter are compared with the LMS adaptive filter.
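
    The LMS canceller's tolerance to frequency drift can be sketched with a two-weight quadrature-reference canceller: because the weights adapt sample by sample, the filter tracks hum that wanders around the nominal 50 Hz. All signals and parameters below are invented for illustration:

```python
import numpy as np

def lms_canceller(signal, fs, f0=50.0, mu=0.05):
    """Adaptive noise canceller with a quadrature reference at the nominal
    mains frequency f0. The two adapting weights track interference whose
    frequency drifts around f0 (e.g. 50 +/- 1.5 Hz)."""
    n = np.arange(len(signal))
    ref = np.stack([np.sin(2 * np.pi * f0 * n / fs),
                    np.cos(2 * np.pi * f0 * n / fs)])
    w = np.zeros(2)
    cleaned = np.empty_like(signal)
    for i in range(len(signal)):
        y = w @ ref[:, i]               # current interference estimate
        e = signal[i] - y               # error = cleaned sample
        w += 2 * mu * e * ref[:, i]     # LMS weight update
        cleaned[i] = e
    return cleaned

fs = 500
t = np.arange(4 * fs) / fs
ecg = 0.5 * np.sin(2 * np.pi * 1.2 * t)      # crude ECG stand-in
hum = np.sin(2 * np.pi * 50.7 * t + 0.3)     # mains hum, 0.7 Hz off nominal
cleaned = lms_canceller(ecg + hum, fs)
```

    A fixed 50 Hz notch would leave the 50.7 Hz hum largely intact; the adaptive weights instead rotate slowly to follow it, which is the advantage the abstract credits to the LMS filter.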

  15. Examining the Effect of Interference on Short-Term Memory Recall of Arabic Abstract and Concrete Words Using Free, Cued, and Serial Recall Paradigms

    ERIC Educational Resources Information Center

    Alduais, Ahmed Mohammed Saleh; Almukhaizeem, Yasir Saad

    2015-01-01

Purpose: To see whether there is a correlation between interference and short-term memory recall, and to examine interference as a factor affecting memory recall of Arabic abstract and concrete words through free, cued, and serial recall tasks. Method: Four groups of undergraduates at King Saud University, Saudi Arabia participated in this study. The first…

  16. A consensus view of fold space: Combining SCOP, CATH, and the Dali Domain Dictionary

    PubMed Central

    Day, Ryan; Beck, David A.C.; Armen, Roger S.; Daggett, Valerie

    2003-01-01

    We have determined consensus protein-fold classifications on the basis of three classification methods, SCOP, CATH, and Dali. These classifications make use of different methods of defining and categorizing protein folds that lead to different views of protein-fold space. Pairwise comparisons of domains on the basis of their fold classifications show that much of the disagreement between the classification systems is due to differing domain definitions rather than assigning the same domain to different folds. However, there are significant differences in the fold assignments between the three systems. These remaining differences can be explained primarily in terms of the breadth of the fold classifications. Many structures may be defined as having one fold in one system, whereas far fewer are defined as having the analogous fold in another system. By comparing these folds for a nonredundant set of proteins, the consensus method breaks up broad fold classifications and combines restrictive fold classifications into metafolds, creating, in effect, an averaged view of fold space. This averaged view requires that the structural similarities between proteins having the same metafold be recognized by multiple classification systems. Thus, the consensus map is useful for researchers looking for fold similarities that are relatively independent of the method used to compare proteins. The 30 most populated metafolds, representing the folds of about half of a nonredundant subset of the PDB, are presented here. The full list of metafolds is presented on the Web. PMID:14500873

  17. A consensus view of fold space: combining SCOP, CATH, and the Dali Domain Dictionary.

    PubMed

    Day, Ryan; Beck, David A C; Armen, Roger S; Daggett, Valerie

    2003-10-01

    We have determined consensus protein-fold classifications on the basis of three classification methods, SCOP, CATH, and Dali. These classifications make use of different methods of defining and categorizing protein folds that lead to different views of protein-fold space. Pairwise comparisons of domains on the basis of their fold classifications show that much of the disagreement between the classification systems is due to differing domain definitions rather than assigning the same domain to different folds. However, there are significant differences in the fold assignments between the three systems. These remaining differences can be explained primarily in terms of the breadth of the fold classifications. Many structures may be defined as having one fold in one system, whereas far fewer are defined as having the analogous fold in another system. By comparing these folds for a nonredundant set of proteins, the consensus method breaks up broad fold classifications and combines restrictive fold classifications into metafolds, creating, in effect, an averaged view of fold space. This averaged view requires that the structural similarities between proteins having the same metafold be recognized by multiple classification systems. Thus, the consensus map is useful for researchers looking for fold similarities that are relatively independent of the method used to compare proteins. The 30 most populated metafolds, representing the folds of about half of a nonredundant subset of the PDB, are presented here. The full list of metafolds is presented on the Web.
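
    The consensus idea (domains share a metafold only when multiple classification systems recognize the similarity) can be sketched as a grouping rule over per-system fold labels. The domains and labels below are invented, and "agreement in at least two of three systems" is an illustrative criterion, not the paper's exact procedure:

```python
from itertools import combinations

# Toy fold assignments: domain -> (SCOP, CATH, Dali) fold labels (hypothetical).
domains = {
    "d1": ("b.1", "2.60.40", "DD1"),
    "d2": ("b.1", "2.60.40", "DD2"),   # agrees with d1 in 2 of 3 systems
    "d3": ("b.2", "2.60.40", "DD3"),   # agrees with d1/d2 in only 1 system
    "d4": ("c.1", "3.20.20", "DD4"),
}

# Union-find over domains.
parent = {d: d for d in domains}
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

# Merge two domains into one metafold when at least two of the three
# classification systems place them in the same fold.
for a, b in combinations(domains, 2):
    agree = sum(x == y for x, y in zip(domains[a], domains[b]))
    if agree >= 2:
        parent[find(a)] = find(b)

metafolds = {}
for d in domains:
    metafolds.setdefault(find(d), []).append(d)
```

    Requiring multi-system agreement is what breaks up overly broad folds and merges overly narrow ones: d1 and d2 land in one metafold, while d3, which only one system groups with them, stays separate.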

  18. Sedimentation Velocity Analysis of Large Oligomeric Chromatin Complexes Using Interference Detection.

    PubMed

    Rogge, Ryan A; Hansen, Jeffrey C

    2015-01-01

    Sedimentation velocity experiments measure the transport of molecules in solution under centrifugal force. Here, we describe a method for monitoring the sedimentation of very large biological molecular assemblies using the interference optical systems of the analytical ultracentrifuge. The mass, partial-specific volume, and shape of macromolecules in solution affect their sedimentation rates as reflected in the sedimentation coefficient. The sedimentation coefficient is obtained by measuring the solute concentration as a function of radial distance during centrifugation. Monitoring the concentration can be accomplished using interference optics, absorbance optics, or the fluorescence detection system, each with inherent advantages. The interference optical system captures data much faster than these other optical systems, allowing for sedimentation velocity analysis of extremely large macromolecular complexes that sediment rapidly at very low rotor speeds. Supramolecular oligomeric complexes produced by self-association of 12-mer chromatin fibers are used to illustrate the advantages of the interference optics. Using interference optics, we show that chromatin fibers self-associate at physiological divalent salt concentrations to form structures that sediment between 10,000 and 350,000S. The method for characterizing chromatin oligomers described in this chapter will be generally useful for characterization of any biological structures that are too large to be studied by the absorbance optical system. © 2015 Elsevier Inc. All rights reserved.
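
    The sedimentation coefficient mentioned above follows from s = v / (ω²r), so ln r of the boundary grows linearly with ω²t and s is the slope of that line. A small fitting sketch; the rotor speed, radii, and the assumed 10,000 S species are invented for illustration:

```python
import math

SVEDBERG = 1e-13  # seconds

def sedimentation_coefficient(omega, times_s, radii_cm):
    """Least-squares slope of ln(r) against omega^2 * t, returned in Svedbergs."""
    x = [omega ** 2 * t for t in times_s]
    y = [math.log(r) for r in radii_cm]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = (sum((a - xbar) * (b - ybar) for a, b in zip(x, y))
             / sum((a - xbar) ** 2 for a in x))
    return slope / SVEDBERG

# Hypothetical run at 3,000 rpm (a low rotor speed, as used for rapidly
# sedimenting chromatin oligomers); boundary positions generated from an
# assumed species of 10,000 S starting at 6.0 cm.
omega = 3000 * 2 * math.pi / 60            # rad/s
times = [0.0, 120.0, 240.0, 360.0]         # s
radii = [6.0 * math.exp(10000 * SVEDBERG * omega ** 2 * t) for t in times]
s_est = sedimentation_coefficient(omega, times, radii)
```

    The interference optics matter precisely because such large species cross the cell in minutes even at low speed, so the boundary must be sampled quickly.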

  19. Prostate segmentation by sparse representation based classification.

    PubMed

    Gao, Yaozong; Liao, Shu; Shen, Dinggang

    2012-10-01

    The segmentation of prostate in CT images is of essential importance to external beam radiotherapy, which is one of the major treatments for prostate cancer nowadays. During the radiotherapy, the prostate is radiated by high-energy x rays from different directions. In order to maximize the dose to the cancer and minimize the dose to the surrounding healthy tissues (e.g., bladder and rectum), the prostate in the new treatment image needs to be accurately localized. Therefore, the effectiveness and efficiency of external beam radiotherapy highly depend on the accurate localization of the prostate. However, due to the low contrast of the prostate with its surrounding tissues (e.g., bladder), the unpredicted prostate motion, and the large appearance variations across different treatment days, it is challenging to segment the prostate in CT images. In this paper, the authors present a novel classification based segmentation method to address these problems. To segment the prostate, the proposed method first uses sparse representation based classification (SRC) to enhance the prostate in CT images by pixel-wise classification, in order to overcome the limitation of poor contrast of the prostate images. Then, based on the classification results, previous segmented prostates of the same patient are used as patient-specific atlases to align onto the current treatment image and the majority voting strategy is finally adopted to segment the prostate. In order to address the limitations of the traditional SRC in pixel-wise classification, especially for the purpose of segmentation, the authors extend SRC from the following four aspects: (1) A discriminant subdictionary learning method is proposed to learn a discriminant and compact representation of training samples for each class so that the discriminant power of SRC can be increased and also SRC can be applied to the large-scale pixel-wise classification. 
(2) The L1 regularized sparse coding is replaced by the elastic net in order to obtain a smooth and clear prostate boundary in the classification result. (3) Residue-based linear regression is incorporated to improve the classification performance and to extend SRC from hard classification to soft classification. (4) Iterative SRC is proposed by using context information to iteratively refine the classification results. The proposed method has been comprehensively evaluated on a dataset consisting of 330 CT images from 24 patients. The effectiveness of the extended SRC has been validated by comparing it with the traditional SRC based on the proposed four extensions. The experimental results show that our extended SRC can obtain not only more accurate classification results but also smoother and clearer prostate boundary than the traditional SRC. Besides, the comparison with other five state-of-the-art prostate segmentation methods indicates that our method can achieve better performance than other methods under comparison. The authors have proposed a novel prostate segmentation method based on the sparse representation based classification, which can achieve considerably accurate segmentation results in CT prostate segmentation.
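
    The class-wise residual rule at the heart of SRC can be sketched in a few lines. Here ridge-regularized coding stands in for the paper's elastic net, and the dictionary is invented toy data in which each class spans (roughly) its own low-dimensional subspace:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dictionary: columns are training samples (e.g. pixel features);
# class 0 lies near direction u0, class 1 near direction u1.
u0, u1 = rng.standard_normal(10), rng.standard_normal(10)
D0 = np.outer(u0, rng.uniform(0.5, 1.5, 6)) + 0.05 * rng.standard_normal((10, 6))
D1 = np.outer(u1, rng.uniform(0.5, 1.5, 6)) + 0.05 * rng.standard_normal((10, 6))
D = np.hstack([D0, D1])
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms, as in SRC
labels = [0] * 6 + [1] * 6

def src_classify(D, labels, x, lam=0.1):
    """Code x over the whole dictionary (ridge coding as a simple stand-in
    for elastic-net coding), then assign the class whose atoms reconstruct
    x with the smallest residual."""
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ x)
    residuals = {}
    for c in set(labels):
        mask = np.array(labels) == c
        residuals[c] = float(np.linalg.norm(x - D[:, mask] @ alpha[mask]))
    return min(residuals, key=residuals.get)

x = 1.2 * u0 + 0.05 * rng.standard_normal(10)   # test sample from class 0
predicted = src_classify(D, labels, x)
```

    In the paper this per-pixel decision is additionally smoothed by the elastic net, residue-based regression, and iterative context refinement; the sketch shows only the core representation-then-residual step.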

  20. Video based object representation and classification using multiple covariance matrices.

    PubMed

    Zhang, Yurong; Liu, Quan

    2017-01-01

Video based object recognition and classification has been widely studied in computer vision and image processing. One main issue of this task is to develop an effective representation for video, a problem that can generally be formulated as image set representation. In this paper, we present a new method called Multiple Covariance Discriminative Learning (MCDL) for the image set representation and classification problem. The core idea of MCDL is to represent an image set using multiple covariance matrices, with each covariance matrix representing one cluster of images. First, we use the Nonnegative Matrix Factorization (NMF) method to cluster the images within each image set, and then adopt Covariance Discriminative Learning on each cluster (subset) of images. Finally, we adopt KLDA and the nearest neighbor classification method for image set classification. Promising experimental results on several datasets show the effectiveness of our MCDL method.
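
    The representation step (NMF clustering within an image set, then one covariance matrix per cluster) can be sketched on toy data. The NMF below is a plain multiplicative-update implementation and all "frames" are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image set": 40 frames of 5-dim features drawn from two clusters,
# separated along feature 0 and feature 4 respectively.
A = rng.random((20, 5)) + np.array([3.0, 0, 0, 0, 0])
B = rng.random((20, 5)) + np.array([0, 0, 0, 0, 3.0])
V = np.vstack([A, B]).T                  # features x frames, nonnegative

def nmf(V, k, iters=300):
    """Plain multiplicative-update NMF: V ~ W @ H with nonnegative factors."""
    r = np.random.default_rng(1)
    W = r.random((V.shape[0], k)) + 0.1
    H = r.random((k, V.shape[1])) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

W, H = nmf(V, k=2)
labels = H.argmax(axis=0)                # cluster assignment per frame

# One covariance matrix per cluster of frames: the "multiple covariance"
# representation of the image set.
covs = [np.cov(V[:, labels == c]) for c in (0, 1)]
```

    Each image set is thus summarized by a small bag of covariance matrices rather than a single one, which is what the discriminative learning and KLDA stages then operate on.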

  1. A contour-based shape descriptor for biomedical image classification and retrieval

    NASA Astrophysics Data System (ADS)

    You, Daekeun; Antani, Sameer; Demner-Fushman, Dina; Thoma, George R.

    2013-12-01

    Contours, object blobs, and specific feature points are utilized to represent object shapes and extract shape descriptors that can then be used for object detection or image classification. In this research we develop a shape descriptor for biomedical image type (or, modality) classification. We adapt a feature extraction method used in optical character recognition (OCR) for character shape representation, and apply various image preprocessing methods to successfully adapt the method to our application. The proposed shape descriptor is applied to radiology images (e.g., MRI, CT, ultrasound, X-ray, etc.) to assess its usefulness for modality classification. In our experiment we compare our method with other visual descriptors such as CEDD, CLD, Tamura, and PHOG that extract color, texture, or shape information from images. The proposed method achieved the highest classification accuracy of 74.1% among all other individual descriptors in the test, and when combined with CSD (color structure descriptor) showed better performance (78.9%) than using the shape descriptor alone.

  2. [An object-based information extraction technology for dominant tree species group types].

    PubMed

    Tian, Tian; Fan, Wen-yi; Lu, Wei; Xiao, Xiang

    2015-06-01

Information extraction for dominant tree species group types is difficult in remote sensing image classification; however, object-oriented classification using high spatial resolution remote sensing data is a new way to achieve accurate type information extraction. In this paper, taking the Jiangle Forest Farm in Fujian Province as the research area and based on Quickbird image data from 2013, the object-oriented method was adopted to identify farmland, shrub-herbaceous plant, young afforested land, Pinus massoniana, Cunninghamia lanceolata and broad-leaved tree types. Three types of classification factors, including spectral and texture features and different vegetation indices, were used to establish a class hierarchy, and membership functions and decision tree classification rules were adopted at the different levels. The results showed that the object-oriented method using texture, spectrum and the vegetation indices achieved a classification accuracy of 91.3%, which was 5.7% higher than that obtained using only texture and spectrum.

  3. Convolutional neural network with transfer learning for rice type classification

    NASA Astrophysics Data System (ADS)

    Patel, Vaibhav Amit; Joshi, Manjunath V.

    2018-04-01

Presently, rice type is identified manually by humans, which is time consuming and error prone; there is therefore a need to do this by machine, which is faster and more accurate. This paper proposes a deep learning based method for the classification of rice types. We propose two methods to classify the rice types. In the first method, we train a deep convolutional neural network (CNN) using the given segmented rice images. In the second method, we train a combination of a pretrained VGG16 network and the proposed method, using transfer learning, in which the weights of a pretrained network are used to achieve better accuracy. Our approach can also be used for classification of rice grain as broken or fine. We train a 5-class model for classifying rice types using 4000 training images and another 2-class model for the classification of broken and normal rice using 1600 training images. We observe that despite having distinct rice images, our architecture, pretrained on ImageNet data, boosts classification accuracy significantly.

  4. Interferences in the direct quantification of bisphenol S in paper by means of thermochemolysis.

    PubMed

    Becerra, Valentina; Odermatt, Jürgen

    2013-02-01

This article analyses the interferences in the quantification of traces of bisphenol S in paper by applying the direct analytical method "analytical pyrolysis gas chromatography mass spectrometry" (Py-GC/MS) in conjunction with on-line derivatisation with tetramethylammonium hydroxide (TMAH). As the analytes are analysed simultaneously with the matrix, the interferences derive from the matrix. The investigated interferences are found in the analysis of paper samples that include bisphenol S derivative compounds. Because free bisphenol S is the hydrolysis product of the bisphenol S derivative compounds, the detected amount of bisphenol S in a sample may be overestimated. It is found that the formation of free bisphenol S from the bisphenol S derivative compounds is enhanced in the presence of tetramethylammonium hydroxide (TMAH) under pyrolytic conditions. In order to avoid this formation of bisphenol S, trimethylsulphonium hydroxide (TMSH) is introduced instead. Different parameters are optimised in the development of the quantification method with TMSH. The quantification method based on TMSH thermochemolysis has been validated in terms of reproducibility and accuracy. Copyright © 2012 Elsevier B.V. All rights reserved.

  5. A new algorithm for ECG interference removal from single channel EMG recording.

    PubMed

    Yazdani, Shayan; Azghani, Mahmood Reza; Sedaaghi, Mohammad Hossein

    2017-09-01

This paper presents a new method to remove electrocardiogram (ECG) interference from electromyogram (EMG) recordings; this interference occurs during EMG acquisition from trunk muscles. The proposed algorithm employs the progressive image denoising (PID) algorithm and ensemble empirical mode decomposition (EEMD) to remove this type of interference. PID is a very recent method for denoising digital images corrupted by white Gaussian noise; it detects white Gaussian noise by deterministic annealing. To the best of our knowledge, PID has never been used before for EMG and ECG separation or in other 1D signal denoising applications. We use it based on the fact that the amplitude of the EMG signal can be modeled as white Gaussian noise shaped by a filter with time-variant properties. The proposed algorithm has been compared to other well-known methods such as HPF, EEMD-ICA, Wavelet-ICA and PID. The results show that the proposed algorithm outperforms the others on the basis of the three evaluation criteria used in this paper: normalized mean square error, signal-to-noise ratio and Pearson correlation.

  6. 77 FR 22768 - Federal Acquisition Regulation; Information Collection; Freight Classification Description

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-17

    ...; Information Collection; Freight Classification Description AGENCIES: Department of Defense (DOD), General... collection requirement concerning freight classification description. Public comments are particularly... Information Collection 9000- 0055, Freight Classification Description, by any of the following methods...

  7. An efficient ensemble learning method for gene microarray classification.

    PubMed

    Osareh, Alireza; Shadgar, Bita

    2013-01-01

    Gene microarray analysis and classification have proven to be an effective way to diagnose diseases and cancers. However, it has also been revealed that basic classification techniques have intrinsic drawbacks in achieving accurate gene classification and cancer diagnosis. On the other hand, classifier ensembles have received increasing attention in various applications. Here, we address the gene classification issue using the RotBoost ensemble methodology. This method is a combination of the Rotation Forest and AdaBoost techniques, which preserves both desirable features of an ensemble architecture, that is, accuracy and diversity. To select a concise subset of informative genes, 5 different feature selection algorithms are considered. To assess the efficiency of RotBoost, other nonensemble/ensemble techniques including Decision Trees, Support Vector Machines, Rotation Forest, AdaBoost, and Bagging are also deployed. Experimental results reveal that the combination of the fast correlation-based feature selection method with the ICA-based RotBoost ensemble is highly effective for gene classification. In fact, the proposed method can create ensemble classifiers which outperform not only the classifiers produced by conventional machine learning but also the classifiers generated by two widely used conventional ensemble learning methods, that is, Bagging and AdaBoost.
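    As a rough illustration of the RotBoost idea (Rotation-Forest-style feature rotations feeding AdaBoost members), here is a hedged scikit-learn sketch; the member count, subset count, and PCA-based block-diagonal rotation are illustrative choices, not the paper's exact configuration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

def fit_rotboost(X, y, n_members=5, n_subsets=4):
    """Train one AdaBoost member per randomly rotated feature space
    (Rotation-Forest-style block-diagonal PCA rotation)."""
    n_features = X.shape[1]
    members = []
    for _ in range(n_members):
        order = rng.permutation(n_features)
        R = np.zeros((n_features, n_features))
        for idx in np.array_split(order, n_subsets):
            # PCA on each random feature subset supplies one rotation block
            R[np.ix_(idx, idx)] = PCA().fit(X[:, idx]).components_.T
        clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X @ R, y)
        members.append((R, clf))
    return members

def predict_rotboost(members, X):
    """Majority vote across the rotated AdaBoost members."""
    votes = np.stack([clf.predict(X @ R) for R, clf in members])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```

    Each member sees the same samples in a differently rotated feature space, which supplies the diversity that plain AdaBoost lacks.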

  8. Sentiment classification technology based on Markov logic networks

    NASA Astrophysics Data System (ADS)

    He, Hui; Li, Zhigang; Yao, Chongchong; Zhang, Weizhe

    2016-07-01

    With diverse online media emerging, the sentiment classification problem has attracted growing attention. At present, text sentiment classification mainly utilizes supervised machine learning methods, which exhibit a degree of domain dependency. On the basis of Markov logic networks (MLNs), this study proposed a cross-domain, multi-task text sentiment classification method rooted in transfer learning. Through many-to-one knowledge transfer, labeled text sentiment classification knowledge was successfully transferred into other domains, and the precision of sentiment classification analysis in the text tendency domain was improved. The experimental results revealed the following: (1) the model based on an MLN demonstrated higher precision than the single individual learning plan model; (2) multi-task transfer learning based on Markov logic networks could acquire more knowledge than self-domain learning. The cross-domain text sentiment classification model could significantly improve the precision and efficiency of text sentiment classification.

  9. Multiple Spectral-Spatial Classification Approach for Hyperspectral Data

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Benediktsson, Jon Atli; Chanussot, Jocelyn; Tilton, James C.

    2010-01-01

    A new multiple classifier approach for spectral-spatial classification of hyperspectral images is proposed. Several classifiers are used independently to classify an image. For every pixel, if all the classifiers have assigned this pixel to the same class, the pixel is kept as a marker, i.e., a seed of the spatial region, with the corresponding class label. We propose to use spectral-spatial classifiers at the preliminary step of the marker selection procedure, each of them combining the results of a pixel-wise classification and a segmentation map. Different segmentation methods based on dissimilar principles lead to different classification results. Furthermore, a minimum spanning forest is built, where each tree is rooted on a classification-driven marker and forms a region in the spectral-spatial classification map. Experimental results are presented for two hyperspectral airborne images. The proposed method significantly improves classification accuracies, when compared to previously proposed classification techniques.
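    The unanimity-based marker selection described above can be sketched in a few lines of numpy (array names are illustrative):

```python
import numpy as np

def select_markers(class_maps):
    """class_maps: (n_classifiers, H, W) label maps from independent classifiers.
    Returns a marker map where unanimous pixels keep their class label and
    pixels with any disagreement are set to -1 (unmarked)."""
    maps = np.asarray(class_maps)
    agree = np.all(maps == maps[0], axis=0)   # True where all classifiers agree
    return np.where(agree, maps[0], -1)
```

    The -1 pixels are the ones later filled in by growing the minimum spanning forest from the markers.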

  10. Land Cover Analysis by Using Pixel-Based and Object-Based Image Classification Method in Bogor

    NASA Astrophysics Data System (ADS)

    Amalisana, Birohmatin; Rokhmatullah; Hernina, Revi

    2017-12-01

    Image classification provides information about the earth's surface, such as land cover and its time-series changes. Nowadays, pixel-based image classification is commonly performed with a variety of algorithms such as minimum distance, parallelepiped, maximum likelihood, and Mahalanobis distance. On the other hand, land cover classification can also be acquired by using an object-based image classification technique. Object-based classification uses image segmentation based on parameters such as scale, form, colour, smoothness and compactness. This research aims to compare the land cover classification results, and the detected changes, between the parallelepiped pixel-based and the object-based classification method. The location of this research is Bogor, with a 20-year observation period from 1996 until 2016. This region is well known as an urban area that changes continuously due to its rapid development, so time-series land cover information for this region is of particular interest.

  11. Rapid, simultaneous and interference-free determination of three rhodamine dyes illegally added into chilli samples using excitation-emission matrix fluorescence coupled with second-order calibration method.

    PubMed

    Chang, Yue-Yue; Wu, Hai-Long; Fang, Huan; Wang, Tong; Liu, Zhi; Ouyang, Yang-Zi; Ding, Yu-Jie; Yu, Ru-Qin

    2018-06-15

    In this study, a smart and green analytical method based on a second-order calibration algorithm coupled with excitation-emission matrix (EEM) fluorescence was developed for the determination of rhodamine dyes illegally added into chilli samples. The proposed method not only has the advantage of high sensitivity over the traditional fluorescence method but also fully displays the "second-order advantage". Pure signals of the analytes were successfully extracted from severely interfered EEM profiles using the alternating trilinear decomposition (ATLD) algorithm, even in the presence of common fluorescence problems such as scattering, peak overlaps and unknown interferences. It is worth noting that the unknown interferents can represent different kinds of backgrounds, not only a constant background. In addition, the use of an interpolation method could avoid the loss of information on the analytes of interest. The use of a "mathematical separation" instead of a complicated "chemical or physical separation" strategy can be more effective and environmentally friendly. A series of statistical parameters, including figures of merit and RSDs of intra-day (≤1.9%) and inter-day (≤6.6%) precision, were calculated to validate the accuracy of the proposed method. Furthermore, the authoritative HPLC-FLD method was adopted to verify the qualitative and quantitative results of the proposed method. The comparison of the two methods also showed that the ATLD-EEM method has the advantages of accuracy, rapidity, simplicity and greenness, and is expected to be developed as an attractive alternative for the simultaneous and interference-free determination of rhodamine dyes illegally added into complex matrices. Copyright © 2018. Published by Elsevier B.V.
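    ATLD has its own specific update rules; as a hedged stand-in, the sketch below implements generic PARAFAC alternating least squares for a stack of EEMs, which extracts per-component excitation, emission and concentration loadings under the same trilinear model (not the ATLD algorithm itself):

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product; rows indexed by (i, j) -> i*J + j."""
    r = A.shape[1]
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, r)

def parafac_als(X, rank, n_iter=500):
    """Fit x[i,j,k] ~ sum_r A[i,r] B[j,r] C[k,r] by alternating least squares.
    For EEM data: i = excitation channel, j = emission channel, k = sample."""
    I, J, K = X.shape
    rng = np.random.default_rng(0)
    A, B, C = rng.random((I, rank)), rng.random((J, rank)), rng.random((K, rank))
    X1 = X.reshape(I, J * K)                      # columns indexed by j*K + k
    X2 = X.transpose(1, 0, 2).reshape(J, I * K)   # columns indexed by i*K + k
    X3 = X.transpose(2, 0, 1).reshape(K, I * J)   # columns indexed by i*J + j
    for _ in range(n_iter):
        A = X1 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = X2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = X3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C
```

    The C loadings play the role of relative concentrations, which is what enables quantification in the presence of unmodeled interferents (the "second-order advantage").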

  12. Isotope pattern deconvolution for peptide mass spectrometry by non-negative least squares/least absolute deviation template matching

    PubMed Central

    2012-01-01

    Background The robust identification of isotope patterns originating from peptides being analyzed through mass spectrometry (MS) is often significantly hampered by noise artifacts and the interference of overlapping patterns arising e.g. from post-translational modifications. As the classification of the recorded data points into either ‘noise’ or ‘signal’ lies at the very root of essentially every proteomic application, the quality of the automated processing of mass spectra can significantly influence the way the data might be interpreted within a given biological context. Results We propose non-negative least squares/non-negative least absolute deviation regression to fit a raw spectrum by templates imitating isotope patterns. In a carefully designed validation scheme, we show that the method exhibits excellent performance in pattern picking. It is demonstrated that the method is able to disentangle complicated overlaps of patterns. Conclusions We find that regularization is not necessary to prevent overfitting and that thresholding is an effective and user-friendly way to perform feature selection. The proposed method avoids problems inherent in regularization-based approaches, comes with a set of well-interpretable parameters whose default configuration is shown to generalize well without the need for fine-tuning, and is applicable to spectra of different platforms. The R package IPPD implements the method and is available from the Bioconductor platform (http://bioconductor.fhcrc.org/help/bioc-views/devel/bioc/html/IPPD.html). PMID:23137144
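    The template-matching-plus-thresholding step can be sketched with scipy's non-negative least squares solver (the least-absolute-deviation variant is omitted, and the templates here are toy Gaussians, not real isotope patterns):

```python
import numpy as np
from scipy.optimize import nnls

def pick_patterns(spectrum, templates, threshold=0.05):
    """Fit a raw spectrum as a non-negative combination of templates and keep
    the templates whose fitted coefficient exceeds the threshold."""
    A = np.column_stack(templates)   # one column per candidate pattern
    coef, _ = nnls(A, spectrum)
    return np.flatnonzero(coef > threshold), coef

# Toy example: two Gaussian "patterns" on an m/z grid, only the first present.
mz = np.linspace(0, 10, 200)
t1 = np.exp(-(mz - 3) ** 2 / 0.1)
t2 = np.exp(-(mz - 7) ** 2 / 0.1)
spectrum = 2.0 * t1 + 0.01 * np.random.default_rng(0).normal(size=mz.size)
picked, coef = pick_patterns(spectrum, [t1, t2], threshold=0.1)
```

    Thresholding the non-negative coefficients is the user-friendly feature selection step described in the conclusions.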

  13. Hyperspectral laser-induced autofluorescence imaging of dental caries

    NASA Astrophysics Data System (ADS)

    Bürmen, Miran; Fidler, Aleš; Pernuš, Franjo; Likar, Boštjan

    2012-01-01

    Dental caries is a disease characterized by demineralization of enamel crystals leading to the penetration of bacteria into the dentine and pulp. Early detection of enamel demineralization resulting in increased enamel porosity, commonly known as white spots, is a difficult diagnostic task. Laser-induced autofluorescence has been shown to be a useful method for early detection of demineralization. The existing studies involved either single-point spectroscopic measurements or imaging at a single spectral band. In the case of spectroscopic measurements, very little or no spatial information is acquired and the measured autofluorescence signal strongly depends on the position and orientation of the probe. On the other hand, single-band spectral imaging can be substantially affected by local spectral artefacts. Such effects can significantly interfere with automated methods for the detection of early caries lesions. In contrast, hyperspectral imaging effectively combines the spatial information of imaging methods with the spectral information of spectroscopic methods, providing an excellent basis for the development of robust and reliable algorithms for automated classification and analysis of hard dental tissues. In this paper, we employ 405 nm laser excitation of natural caries lesions. The fluorescence signal is acquired by a state-of-the-art hyperspectral imaging system consisting of a high-resolution acousto-optic tunable filter (AOTF) and a highly sensitive scientific CMOS camera in the spectral range from 550 nm to 800 nm. The results are compared to the contrast obtained by the near-infrared hyperspectral imaging technique employed in the existing studies on early detection of dental caries.

  14. Evaluation of machine learning algorithms for classification of primary biological aerosol using a new UV-LIF spectrometer

    NASA Astrophysics Data System (ADS)

    Ruske, Simon; Topping, David O.; Foot, Virginia E.; Kaye, Paul H.; Stanley, Warren R.; Crawford, Ian; Morse, Andrew P.; Gallagher, Martin W.

    2017-03-01

    Characterisation of bioaerosols has important implications within the environment and public health sectors. Recent developments in ultraviolet light-induced fluorescence (UV-LIF) detectors such as the Wideband Integrated Bioaerosol Spectrometer (WIBS) and the newly introduced Multiparameter Bioaerosol Spectrometer (MBS) have allowed for the real-time collection of fluorescence, size and morphology measurements for the purpose of discriminating between bacteria, fungal spores and pollen. This new generation of instruments has enabled ever larger data sets to be compiled with the aim of studying more complex environments. In real-world data sets, particularly those from an urban environment, the population may be dominated by non-biological fluorescent interferents, bringing into question the accuracy of measurements of quantities such as concentrations. It is therefore imperative that we validate the performance of different algorithms which can be used for the task of classification. For unsupervised learning we tested hierarchical agglomerative clustering with various different linkages. For supervised learning, 11 methods were tested, including decision trees, ensemble methods (random forests, gradient boosting and AdaBoost), two implementations of support vector machines (libsvm and liblinear), Gaussian methods (Gaussian naïve Bayes, quadratic and linear discriminant analysis), the k-nearest neighbours algorithm and artificial neural networks. The methods were applied to two different data sets produced using the new MBS, which provides multichannel UV-LIF fluorescence signatures for single airborne biological particles. The first data set contained mixed PSLs and the second contained a variety of laboratory-generated aerosol. Clustering in general performs slightly worse than the supervised learning methods, correctly classifying, at best, only 67.6 and 91.1% for the two data sets respectively. For supervised learning the gradient boosting algorithm was found to be the most effective, on average correctly classifying 82.8 and 98.27% of the testing data, respectively, across the two data sets. A possible alternative to gradient boosting is neural networks. We do however note that this method requires much more user input than the other methods, and we suggest that further research should be conducted using this method, especially using parallelised hardware such as the GPU, which would allow for larger networks to be trained and could possibly yield better results. We also saw that some methods, such as clustering, failed to utilise the additional shape information provided by the instrument, whilst for others, such as the decision trees, ensemble methods and neural networks, improved performance could be attained with the inclusion of such information.
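    A hedged sketch of such a classifier comparison with scikit-learn, using synthetic data as a stand-in for the MBS fluorescence/size/shape features (the real study compared 11 methods on measured data):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical stand-in for per-particle features (fluorescence, size, shape).
X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "gradient boosting": GradientBoostingClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
    "k-nearest neighbours": KNeighborsClassifier(),
    "Gaussian naive Bayes": GaussianNB(),
}
# Held-out accuracy per method, mirroring the paper's evaluation protocol.
scores = {name: accuracy_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
```

    On real UV-LIF data the ranking would of course depend on the feature set, which is exactly why the paper stresses validating each algorithm.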

  15. Automated Terrestrial EMI Emitter Detection, Classification, and Localization

    NASA Astrophysics Data System (ADS)

    Stottler, R.; Ong, J.; Gioia, C.; Bowman, C.; Bhopale, A.

    Clear operating spectrum at ground station antenna locations is critically important for communicating with, commanding, controlling, and maintaining the health of satellites. Electro Magnetic Interference (EMI) can interfere with these communications, so it is extremely important to track down and eliminate sources of EMI. The Terrestrial RFI-locating Automation with CasE based Reasoning (TRACER) system is being implemented to automate terrestrial EMI emitter localization and identification to improve space situational awareness, reduce manpower requirements, dramatically shorten EMI response time, enable the system to evolve without programmer involvement, and support adversarial scenarios such as jamming. The operational version of TRACER is being implemented and applied with real data (power versus frequency over time) for both satellite communication antennas and sweeping Direction Finding (DF) antennas located near them. This paper presents the design and initial implementation of TRACER’s investigation data management, automation, and data visualization capabilities. TRACER monitors DF antenna signals and detects and classifies EMI using neural network technology, trained on past cases of both normal communications and EMI events. When EMI events are detected, an Investigation Object is created automatically. The user interface facilitates the management of multiple investigations simultaneously. Using a variant of the Friis transmission equation, emissions data is used to estimate and plot the emitter’s locations over time for comparison with current flights. The data is also displayed on a set of five linked graphs to aid in the perception of patterns spanning power, time, frequency, and bearing. Based on details of the signal (its classification, direction, and strength, etc.), TRACER retrieves one or more cases of EMI investigation methodologies which are represented as graphical behavior transition networks (BTNs). 
These BTNs can be edited easily, and they naturally represent the flow-chart-like process often followed by experts in time-pressured situations.
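    The range-from-power step can be illustrated by inverting the free-space Friis equation. This sketch assumes the transmit power and antenna gains are known in dB units, which is a simplification of TRACER's actual location estimation:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def friis_rx_dbm(p_tx_dbm, g_tx_dbi, g_rx_dbi, d_m, freq_hz):
    """Forward free-space Friis equation: received power at distance d_m."""
    fspl_db = 20 * math.log10(4 * math.pi * d_m * freq_hz / C)
    return p_tx_dbm + g_tx_dbi + g_rx_dbi - fspl_db

def friis_distance_m(p_tx_dbm, g_tx_dbi, g_rx_dbi, p_rx_dbm, freq_hz):
    """Invert the Friis equation to estimate emitter range from received power."""
    path_loss_db = p_tx_dbm + g_tx_dbi + g_rx_dbi - p_rx_dbm
    return (C / (4 * math.pi * freq_hz)) * 10 ** (path_loss_db / 20)
```

    Combining such range estimates with DF bearings over time is what allows emitter locations to be plotted and compared with current flights.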

  16. Advanced Steel Microstructural Classification by Deep Learning Methods.

    PubMed

    Azimi, Seyed Majid; Britz, Dominik; Engstler, Michael; Fritz, Mario; Mücklich, Frank

    2018-02-01

    The inner structure of a material is called its microstructure. It stores the genesis of a material and determines all its physical and chemical properties. While microstructural characterization is widespread and well known, microstructural classification is mostly done manually by human experts, which gives rise to uncertainties due to subjectivity. Since the microstructure can be a combination of different phases or constituents with complex substructures, its automatic classification is very challenging, and only a few prior studies exist. Prior works focused on features designed and engineered by experts and classified microstructures separately from the feature extraction step. Recently, Deep Learning methods have shown strong performance in vision applications by learning the features from data together with the classification step. In this work, we propose a Deep Learning method for microstructural classification in the examples of certain microstructural constituents of low carbon steel. This novel method employs pixel-wise segmentation via a Fully Convolutional Neural Network (FCNN) accompanied by a max-voting scheme. Our system achieves 93.94% classification accuracy, drastically outperforming the state-of-the-art method's 48.89% accuracy. Beyond the strong performance of our method, this line of research offers a more robust and, above all, objective way to approach the difficult task of steel quality assessment.
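    The max-voting scheme over a pixel-wise segmentation can be sketched as follows (a minimal numpy version; in the paper's pipeline the per-pixel labels and object masks would come from the FCNN segmentation):

```python
import numpy as np

def max_vote(pixel_labels, object_ids):
    """Assign each segmented object the class most frequent among its pixels.
    pixel_labels: (H, W) per-pixel class predictions.
    object_ids:   (H, W) segment/object id for each pixel.
    Returns {object_id: majority class}."""
    out = {}
    for obj in np.unique(object_ids):
        votes = pixel_labels[object_ids == obj]
        out[obj] = np.bincount(votes).argmax()
    return out
```

    Voting over whole objects suppresses isolated pixel-level misclassifications, which is why it raises accuracy over raw per-pixel output.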

  17. Fingerprint extraction from interference destruction terahertz spectrum.

    PubMed

    Xiong, Wei; Shen, Jingling

    2010-10-11

    In this paper, periodic peaks in a terahertz absorption spectrum are confirmed to be induced by interference effects. Theoretically, we explain the periodic peaks and calculate their locations. Accordingly, a technique is suggested with which the interference peaks in a terahertz spectrum can be eliminated, so that a real terahertz absorption spectrum can be obtained. Experimentally, a sample, methamphetamine, was investigated and its terahertz fingerprint was successfully extracted from its interference destruction spectrum. This technique is useful for obtaining terahertz fingerprint spectra of samples and, furthermore, provides a fast nondestructive testing method that uses a large terahertz beam to identify materials.
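    Assuming the periodic peaks arise from etalon-style interference in a plane-parallel sample (consistent with the interference effects described above), their spacing follows the standard relation Δν = c/(2nd); a sketch under that assumption, with illustrative thickness and refractive-index values:

```python
C = 299_792_458.0  # speed of light, m/s

def fringe_spacing_thz(thickness_m, refractive_index):
    """Frequency spacing of etalon interference peaks: delta_nu = c / (2 n d)."""
    return C / (2.0 * refractive_index * thickness_m) / 1e12

def peak_positions_thz(thickness_m, refractive_index, n_peaks=5):
    """First few expected interference peak positions, in THz."""
    spacing = fringe_spacing_thz(thickness_m, refractive_index)
    return [m * spacing for m in range(1, n_peaks + 1)]
```

    Once the predicted peak positions are known, the periodic component can be removed from the measured spectrum, leaving the material's fingerprint features.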

  18. Two-Wavelength Multi-Gigahertz Frequency Comb-Based Interferometry for Full-Field Profilometry

    NASA Astrophysics Data System (ADS)

    Choi, Samuel; Kashiwagi, Ken; Kojima, Shuto; Kasuya, Yosuke; Kurokawa, Takashi

    2013-10-01

    The multi-gigahertz frequency comb-based interferometer exhibits only the interference amplitude peak, without phase fringes, which enables a rapid axial scan for full-field profilometry and tomography. Despite these substantial technical advantages, a problem remains: interference intensity undulations occur depending on the interference phase. To avoid this problem, we propose a compensation technique for the interference signals using two frequency combs with slightly different center wavelengths. Compensated full-field surface profile measurements of a cover glass and onion skin were demonstrated experimentally to verify the advantages of the proposed method.

  19. Method for Estimating the Sonic-Boom Characteristics of Lifting Canard-Wing Aircraft Concepts

    NASA Technical Reports Server (NTRS)

    Mack, Robert J.

    2005-01-01

    A method for estimating the sonic-boom overpressures from a conceptual aircraft where the lift is carried by both a canard and a wing during supersonic cruise is presented and discussed. Computer codes used for the prediction of the aerodynamic performance of the wing, the canard-wing interference, the nacelle-wing interference, and the sonic-boom overpressures are identified and discussed as the procedures in the method are discussed. A canard-wing supersonic-cruise concept was used as an example to demonstrate the application of the method.

  20. Comparison of transect sampling and object-oriented image classification methods of urbanizing catchments

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Tenenbaum, D. E.

    2009-12-01

    The process of urbanization has major effects on both human and natural systems. In order to monitor these changes and better understand how urban ecological systems work, urban spatial structure and its variation first need to be quantified at a fine scale. Because land-use and land-cover (LULC) in urbanizing areas is highly heterogeneous, the classification of urbanizing environments is among the most challenging tasks in remote sensing. Although a pixel-based method is a common way to perform classification, the results are not good enough for many research objectives which require more accurate classification data at fine scales. Transect sampling and object-oriented classification methods are more appropriate for urbanizing areas. Tenenbaum used a transect sampling method, implemented with a computer-based facility within a widely available commercial GIS, in the Glyndon Catchment and the Upper Baismans Run Catchment, Baltimore, Maryland. It was a two-tiered classification system, including a primary level (7 classes) and a secondary level (37 categories), and statistical information on LULC was collected. W. Zhou applied an object-oriented method at the parcel level in Gwynn’s Falls Watershed, which includes the two previously mentioned catchments, and six classes were extracted. The two urbanizing catchments are located in greater Baltimore, Maryland and drain into Chesapeake Bay. In this research, the two different methods are compared for 6 classes (woody, herbaceous, water, ground, pavement and structure). The comparison uses the segments in the transect method to extract LULC information from the results of the object-oriented method. Classification results were compared in order to evaluate the difference between the two methods. The overall proportions of LULC classes from the two studies show that there is an overestimation of structures in the object-oriented method. For the other five classes, the results from the two methods are similar, except for a difference in the proportions of the woody class. The segment-to-segment comparison shows that the resolution of the light detection and ranging (LIDAR) data used in the object-oriented method does affect the accuracy of the classification. Shadows of trees and structures are still a big problem in the object-oriented method. For classes that make up a small proportion of the catchments, such as water, neither method was capable of detecting them.

  1. D-METHIONINE REDUCES TOBRAMYCIN-INDUCED OTOTOXICITY WITHOUT ANTIMICROBIAL INTERFERENCE IN ANIMAL MODELS

    PubMed Central

    Fox, Daniel J.; Cooper, Morris D.; Speil, Cristian A.; Roberts, Melissa H.; Yanik, Susan C.; Meech, Robert P.; Hargrove, Tim L.; Verhulst, Steven J.; Rybak, Leonard P.; Campbell, Kathleen C. M.

    2015-01-01

    Background Tobramycin is a critical cystic fibrosis treatment; however, it causes ototoxicity. This study tested D-methionine protection from tobramycin-induced ototoxicity and potential antimicrobial interference. Methods Auditory brainstem responses (ABR) and outer hair cell (OHC) quantifications measured protection in guinea pigs treated with tobramycin and a range of D-methionine doses. In vitro antimicrobial interference studies tested inhibition and post-antibiotic effect assays. In vivo antimicrobial interference studies tested normal and neutropenic E. coli murine survival and intraperitoneal lavage bacterial counts. Results D-methionine conferred significant ABR threshold shift reductions. OHC protection was less robust but significant at 20 kHz in the 420 mg/kg/day group. In vitro studies did not detect D-methionine-induced antimicrobial interference. In vivo studies did not detect D-methionine-induced interference in normal or neutropenic mice. Conclusions D-methionine protects from tobramycin-induced ototoxicity without antimicrobial interference. The study results suggest D-met as a potential otoprotectant during clinical tobramycin use in cystic fibrosis patients. PMID:26166286

  2. Gold in natural water: A method of determination by solvent extraction and electrothermal atomization

    USGS Publications Warehouse

    McHugh, J.B.

    1984-01-01

    A method has been developed using electrothermal atomization to effectively determine the amount of gold in natural water within the nanogram range. The method has four basic steps: (1) evaporating a 1-L sample; (2) putting it in a hydrobromic acid-bromine solution; (3) extracting the sample with methyl isobutyl ketone; and (4) determining the amount of gold using an atomic absorption spectrophotometer. The limit of detection is 0.001 µg gold per liter. Results from three studies indicate, respectively, that the method is precise, effective, and free of interference. Specifically, a precision study indicates that the method has a relative standard deviation of 16-18%; a recovery study indicates that the method recovers gold at an average of 93%; and an interference study indicates that interference effects are eliminated with solvent extraction and background correction techniques. Application of the method to water samples collected from 41 sites throughout the Western United States and Alaska shows a gold concentration range of <0.001 to 0.036 µg gold per liter, with an average of 0.005 µg/L. © 1984.

  3. F-16 Instructional Sequencing Plan Report.

    DTIC Science & Technology

    1981-03-01

    information). 2. Interference (learning of some tasks interferes with the learning of other tasks when they possess similar but confusing differences ...profound effect on the total training expense. This increases the desirability of systematic, precise methods of syllabus generation. Inherent in a given...the expensive to acquire. resource. Least cost The syllabus must Select sequences which provide a least total make maximum use of cost method of

  4. Broadband Time-Frequency Analysis Using a Multicomputer

    DTIC Science & Technology

    2004-09-30

    FFT 512 pt Waterfall WVD display © 2004 Mercury Computer Systems, Inc. Smoothed Pseudo Wigner-Ville Distribution One of many interference reduction...The Wigner-Ville distribution, the scalogram, and the discrete Gabor transform are among the most well-known of these methods. Due to specific...based upon FFT Accumulation Method • Continuous Wavelet Transform (Scalogram) • Discrete Wigner-Ville Distribution with a selected set of interference

  5. Releasing-addition method for the flame-photometric determination of calcium in thermal waters

    USGS Publications Warehouse

    Rowe, J.J.

    1963-01-01

    Study of the interferences of silica and sulfate in the flame-photometric determination of calcium in thermal waters has led to the development of a method requiring no prior chemical separations. The interference effects of silica, sulfate, potassium, sodium, aluminum, and phosphate are overcome by an addition technique coupled with the use of magnesium as a releasing agent. © 1963.

  6. Verifying entanglement in the Hong-Ou-Mandel dip

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ray, Megan R.; Enk, S. J. van

    2011-04-15

    The Hong-Ou-Mandel interference dip is caused by an entangled state, a delocalized biphoton state. We propose a method of detecting this entanglement by utilizing inverse Hong-Ou-Mandel interference, while taking into account vacuum and multiphoton contaminations, phase noise, and other imperfections. The method uses just linear optics and photodetectors, and for single-mode photodetectors we find a lower bound on the amount of entanglement.

  7. Application of different classification methods for litho-fluid facies prediction: a case study from the offshore Nile Delta

    NASA Astrophysics Data System (ADS)

    Aleardi, Mattia; Ciabarri, Fabio

    2017-10-01

    In this work we test four classification methods for litho-fluid facies identification in a clastic reservoir located in the offshore Nile Delta. The ultimate goal of this study is to find an optimal classification method for the area under examination. The geologic context of the investigated area allows us to consider three different facies in the classification: shales, brine sands and gas sands. The depth at which the reservoir zone is located (2300-2700 m) produces a significant overlap of the P- and S-wave impedances of brine sands and gas sands that makes discrimination between these two litho-fluid classes particularly problematic. The classification is performed on the feature space defined by the elastic properties that are derived from recorded reflection seismic data by means of amplitude versus angle Bayesian inversion. As classification methods we test both deterministic and probabilistic approaches: the quadratic discriminant analysis and the neural network methods belong to the first group, whereas the standard Bayesian approach and the Bayesian approach that includes a 1D Markov chain a priori model to constrain the vertical continuity of litho-fluid facies belong to the second group. The ability of each method to discriminate the different facies is evaluated both on synthetic seismic data (computed on the basis of available borehole information) and on field seismic data. The outcomes of each classification method are compared with the known facies profile derived from well log data and the goodness of the results is quantitatively evaluated using the so-called confusion matrix. The results show that all methods return vertical facies profiles in which the main reservoir zone is correctly identified. However, the consideration of as much prior information as possible in the classification process is the winning choice for deriving a reliable and physically plausible predicted facies profile.
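    The confusion-matrix evaluation mentioned above can be sketched with scikit-learn (the label sequences below are invented for illustration, not the study's data):

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

FACIES = ["shale", "brine sand", "gas sand"]
# Hypothetical true vs. predicted facies codes along a well profile.
true_facies = np.array([0, 0, 1, 1, 2, 2, 2, 1, 0, 2])
pred_facies = np.array([0, 0, 1, 2, 2, 2, 1, 1, 0, 2])

cm = confusion_matrix(true_facies, pred_facies)   # rows: true, cols: predicted
acc = accuracy_score(true_facies, pred_facies)
```

    Off-diagonal entries in rows 1 and 2 expose exactly the brine-sand/gas-sand confusion that the overlapping impedances cause in this reservoir.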

  8. Automatic migraine classification via feature selection committee and machine learning techniques over imaging and questionnaire data.

    PubMed

    Garcia-Chimeno, Yolanda; Garcia-Zapirain, Begonya; Gomez-Beldarrain, Marian; Fernandez-Ruanova, Begonya; Garcia-Monco, Juan Carlos

    2017-04-13

    Feature selection methods are commonly used to identify subsets of relevant features to facilitate the construction of models for classification, yet little is known about how feature selection methods perform in diffusion tensor images (DTIs). In this study, feature selection and machine learning classification methods were tested for the purpose of automating diagnosis of migraines using both DTIs and questionnaire answers related to emotion and cognition - factors that influence of pain perceptions. We select 52 adult subjects for the study divided into three groups: control group (15), subjects with sporadic migraine (19) and subjects with chronic migraine and medication overuse (18). These subjects underwent magnetic resonance with diffusion tensor to see white matter pathway integrity of the regions of interest involved in pain and emotion. The tests also gather data about pathology. The DTI images and test results were then introduced into feature selection algorithms (Gradient Tree Boosting, L1-based, Random Forest and Univariate) to reduce features of the first dataset and classification algorithms (SVM (Support Vector Machine), Boosting (Adaboost) and Naive Bayes) to perform a classification of migraine group. Moreover we implement a committee method to improve the classification accuracy based on feature selection algorithms. When classifying the migraine group, the greatest improvements in accuracy were made using the proposed committee-based feature selection method. Using this approach, the accuracy of classification into three types improved from 67 to 93% when using the Naive Bayes classifier, from 90 to 95% with the support vector machine classifier, 93 to 94% in boosting. The features that were determined to be most useful for classification included are related with the pain, analgesics and left uncinate brain (connected with the pain and emotions). 
The proposed feature selection committee method improved the performance of migraine diagnosis classifiers compared to individual feature selection methods, producing a robust system that achieved over 90% accuracy in all classifiers. The results suggest that the proposed methods can be used to support specialists in the classification of migraines in patients undergoing magnetic resonance imaging.
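The committee approach described above can be read as majority voting across the individual selectors. The sketch below is an illustrative interpretation, not the authors' exact implementation; the `committee_select` helper, the vote threshold, and the toy feature sets are all hypothetical.

```python
def committee_select(selections, min_votes=3):
    """Keep features chosen by at least `min_votes` of the individual
    selection methods (e.g. gradient boosting, L1, random forest, univariate)."""
    votes = {}
    for chosen in selections:
        for f in chosen:
            votes[f] = votes.get(f, 0) + 1
    return sorted(f for f, v in votes.items() if v >= min_votes)

# Feature-index sets chosen by four hypothetical selectors:
panels = [{0, 2, 5}, {0, 2, 7}, {2, 5, 7}, {0, 2, 5}]
print(committee_select(panels))  # [0, 2, 5]
```

A committee of this kind tends to be more robust than any single selector because a feature must look relevant under several different selection criteria to survive.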

  9. Multivariate classification of the infrared spectra of cell and tissue samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haaland, D.M.; Jones, H.D.; Thomas, E.V.

    1997-03-01

    Infrared microspectroscopy of biopsied canine lymph cells and tissue was performed to investigate the possibility of using IR spectra coupled with multivariate classification methods to classify the samples as normal, hyperplastic, or neoplastic (malignant). IR spectra were obtained in transmission mode through BaF₂ windows and in reflection mode from samples prepared on gold-coated microscope slides. Cytology and histopathology samples were prepared by a variety of methods to identify the optimal methods of sample preparation. Cytospinning procedures that yielded a monolayer of cells on the BaF₂ windows produced a limited set of IR transmission spectra. These transmission spectra were converted to absorbance and formed the basis for a classification rule that yielded 100% correct classification in a cross-validated context. Classifications of normal, hyperplastic, and neoplastic cell sample spectra were achieved by using both partial least-squares (PLS) and principal component regression (PCR) classification methods. Linear discriminant analysis applied to principal components obtained from the spectral data yielded a small number of misclassifications. PLS weight loading vectors yield valuable qualitative insight into the molecular changes that are responsible for the success of the infrared classification. These successful classification results show promise for assisting pathologists in the diagnosis of cell types and offer future potential for in vivo IR detection of some types of cancer. © 1997 Society for Applied Spectroscopy

  10. Graph-Based Semi-Supervised Hyperspectral Image Classification Using Spatial Information

    NASA Astrophysics Data System (ADS)

    Jamshidpour, N.; Homayouni, S.; Safari, A.

    2017-09-01

    Hyperspectral image classification has been one of the most popular research areas in the remote sensing community in the past decades. However, some problems still need specific attention. For example, the lack of enough labeled samples and the high-dimensionality problem are the two most important issues, which degrade the performance of supervised classification dramatically. The main idea of semi-supervised learning is to overcome these issues through the contribution of unlabeled samples, which are available in enormous amounts. In this paper, we propose a graph-based semi-supervised classification method that uses both spectral and spatial information for hyperspectral image classification. More specifically, two graphs were designed and constructed in order to exploit the relationships among pixels in the spectral and spatial spaces, respectively. Then, the Laplacians of both graphs were merged to form a weighted joint graph. The experiments were carried out on two different benchmark hyperspectral data sets. The proposed method performed significantly better than well-known supervised classification methods, such as SVM. The assessments consisted of both accuracy and homogeneity analyses of the produced classification maps. The proposed spectral-spatial SSL method considerably increased the classification accuracy when the labeled training data set was scarce. When there were only five labeled samples for each class, the performance improved by 5.92% and 10.76% compared to spatial graph-based SSL for the AVIRIS Indian Pines and Pavia University data sets, respectively.
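Merging the Laplacians of a spectral and a spatial graph, as described above, can be sketched with small dense adjacency matrices. The `joint_laplacian` helper and the `alpha` weighting are hypothetical stand-ins for the paper's weighted joint graph.

```python
def laplacian(W):
    """Unnormalized graph Laplacian L = D - W for a dense adjacency matrix."""
    n = len(W)
    return [[(sum(W[i]) if i == j else 0) - W[i][j] for j in range(n)]
            for i in range(n)]

def joint_laplacian(W_spectral, W_spatial, alpha=0.5):
    """Weighted combination of the spectral- and spatial-graph Laplacians."""
    Ls, Lp = laplacian(W_spectral), laplacian(W_spatial)
    n = len(Ls)
    return [[alpha * Ls[i][j] + (1 - alpha) * Lp[i][j] for j in range(n)]
            for i in range(n)]

# Two pixels connected in the spectral graph but not in the spatial one:
W_spec = [[0, 1], [1, 0]]
W_spat = [[0, 0], [0, 0]]
print(joint_laplacian(W_spec, W_spat))  # [[0.5, -0.5], [-0.5, 0.5]]
```

In graph-based SSL, the joint Laplacian then regularizes label propagation so that pixels close in either space receive similar labels.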

  11. Cupping artifact correction and automated classification for high-resolution dedicated breast CT images.

    PubMed

    Yang, Xiaofeng; Wu, Shengyong; Sechopoulos, Ioannis; Fei, Baowei

    2012-10-01

    To develop and test an automated algorithm to classify the different tissues present in dedicated breast CT images. The original CT images are first corrected to overcome cupping artifacts, and then a multiscale bilateral filter is used to reduce noise while keeping edge information on the images. As skin and glandular tissues have similar CT values on breast CT images, morphologic processing is used to identify the skin mask based on its position information. A modified fuzzy C-means (FCM) classification method is then used to classify breast tissue as fat and glandular tissue. By combining the results of the skin mask with the FCM, the breast tissue is classified as skin, fat, and glandular tissue. To evaluate the authors' classification method, the authors use Dice overlap ratios to compare the results of the automated classification to those obtained by manual segmentation on eight patient images. The correction method was able to correct the cupping artifacts and improve the quality of the breast CT images. For glandular tissue, the overlap ratios between the authors' automatic classification and manual segmentation were 91.6% ± 2.0%. A cupping artifact correction method and an automatic classification method were applied and evaluated for high-resolution dedicated breast CT images. Breast tissue classification can provide quantitative measurements regarding breast composition, density, and tissue distribution.
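The Dice overlap ratio used above to compare automatic and manual segmentations is a standard measure; a minimal sketch over flattened binary masks (the toy voxel masks are invented):

```python
def dice(mask_a, mask_b):
    """Dice overlap ratio between two binary masks (flattened 0/1 sequences)."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

auto_seg   = [1, 1, 1, 0, 0, 1]  # toy automatic classification of 6 voxels
manual_seg = [1, 1, 0, 0, 1, 1]  # toy manual segmentation of the same voxels
print(dice(auto_seg, manual_seg))  # 0.75
```

A Dice ratio of 1.0 indicates perfect overlap; the 91.6% reported above corresponds to 0.916 on this scale.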

  12. Automatic sleep stage classification of single-channel EEG by using complex-valued convolutional neural network.

    PubMed

    Zhang, Junming; Wu, Yan

    2018-03-28

    Many systems have been developed for automatic sleep stage classification. However, nearly all models are based on handcrafted features. Because the feature space is so large, feature selection must be used. Meanwhile, designing handcrafted features is a difficult and time-consuming task because it requires the domain knowledge of experienced experts. Results vary when different sets of features are chosen to identify sleep stages. Additionally, many features that we may be unaware of exist, and these features may be important for sleep stage classification. Therefore, a new sleep stage classification system based on the complex-valued convolutional neural network (CCNN) is proposed in this study. Unlike existing sleep stage methods, our method can automatically extract features from raw electroencephalography data and then classify sleep stages based on the learned features. Additionally, we prove that the decision boundaries for the real and imaginary parts of a complex-valued convolutional neuron intersect orthogonally. The classification performance of handcrafted features is compared with that of features learned via the CCNN. Experimental results show that the proposed method is comparable to existing methods, and that the CCNN obtains better classification performance and considerably faster convergence than a real-valued convolutional neural network. The experimental results also show that the proposed method is a useful decision-support tool for automatic sleep stage classification.
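The core operation of a complex-valued convolutional layer is a convolution with complex weights. A minimal valid-mode 1-D sketch follows (nonlinearities, pooling, and the orthogonality result above are omitted; the toy input and filter are invented):

```python
def complex_conv1d(signal, kernel):
    """Valid-mode 1-D sliding dot product with complex weights (the
    cross-correlation form conventionally used in CNN layers)."""
    n, k = len(signal), len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(n - k + 1)]

x = [1 + 0j, 0 + 1j, 1 + 1j, 0 + 0j]   # toy complex-valued input
w = [1 + 1j, 0 - 1j]                   # toy complex-valued filter
print(complex_conv1d(x, w))  # [(2+1j), 0j, 2j]
```

Raw EEG is real-valued, so a CCNN typically obtains a complex representation first (e.g. via a Fourier-type transform) before applying layers of this form.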

  13. Optimizing taxonomic classification of marker-gene amplicon sequences with QIIME 2's q2-feature-classifier plugin.

    PubMed

    Bokulich, Nicholas A; Kaehler, Benjamin D; Rideout, Jai Ram; Dillon, Matthew; Bolyen, Evan; Knight, Rob; Huttley, Gavin A; Gregory Caporaso, J

    2018-05-17

    Taxonomic classification of marker-gene sequences is an important step in microbiome analysis. We present q2-feature-classifier ( https://github.com/qiime2/q2-feature-classifier ), a QIIME 2 plugin containing several novel machine-learning and alignment-based methods for taxonomy classification. We evaluated and optimized several commonly used classification methods implemented in QIIME 1 (RDP, BLAST, UCLUST, and SortMeRNA) and several new methods implemented in QIIME 2 (a scikit-learn naive Bayes machine-learning classifier, and alignment-based taxonomy consensus methods based on VSEARCH, and BLAST+) for classification of bacterial 16S rRNA and fungal ITS marker-gene amplicon sequence data. The naive-Bayes, BLAST+-based, and VSEARCH-based classifiers implemented in QIIME 2 meet or exceed the species-level accuracy of other commonly used methods designed for classification of marker gene sequences that were evaluated in this work. These evaluations, based on 19 mock communities and error-free sequence simulations, including classification of simulated "novel" marker-gene sequences, are available in our extensible benchmarking framework, tax-credit ( https://github.com/caporaso-lab/tax-credit-data ). Our results illustrate the importance of parameter tuning for optimizing classifier performance, and we make recommendations regarding parameter choices for these classifiers under a range of standard operating conditions. q2-feature-classifier and tax-credit are both free, open-source, BSD-licensed packages available on GitHub.
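One of the q2-feature-classifier methods is a scikit-learn naive Bayes classifier over k-mer features. A dependency-free sketch of the underlying idea (multinomial naive Bayes with Laplace smoothing over k-mer counts); the function names and the toy reference sequences are hypothetical, not the plugin's API:

```python
from collections import Counter
from math import log

def kmers(seq, k=4):
    """All overlapping substrings of length k."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def train_nb(labeled_seqs, k=4):
    """Per-taxon k-mer counts for multinomial naive Bayes with Laplace smoothing."""
    counts, vocab = {}, set()
    for taxon, seq in labeled_seqs:
        km = kmers(seq, k)
        counts.setdefault(taxon, Counter()).update(km)
        vocab.update(km)
    return counts, vocab

def classify_nb(seq, counts, vocab, k=4):
    """Return the taxon with the highest smoothed log-likelihood."""
    def loglik(c):
        total = sum(c.values())
        return sum(log((c[km] + 1) / (total + len(vocab))) for km in kmers(seq, k))
    return max(counts, key=lambda taxon: loglik(counts[taxon]))

# Toy reference sequences for two made-up taxa:
refs = [("Taxon_A", "AAAATAAAA"), ("Taxon_B", "GGGGCGGGG")]
counts, vocab = train_nb(refs)
print(classify_nb("AAAAA", counts, vocab))  # Taxon_A
```

The real plugin additionally handles confidence thresholds, taxonomic ranks, and much larger k-mer vocabularies; this sketch only shows why k-mer composition is discriminative.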

  14. Deep Learning with Convolutional Neural Networks Applied to Electromyography Data: A Resource for the Classification of Movements for Prosthetic Hands

    PubMed Central

    Atzori, Manfredo; Cognolato, Matteo; Müller, Henning

    2016-01-01

    Natural control methods based on surface electromyography (sEMG) and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real-life applications, and commercial prostheses are capable of offering natural control for only a few movements. In recent years deep learning has revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its methods for natural control of robotic hands via sEMG using a large number of intact subjects and amputees. We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 transradial amputees. The simple architecture of the neural network allowed us to make several tests in order to evaluate the effects of pre-processing, layer architecture, data augmentation and optimization. The classification results are compared with a set of classical classification methods applied to the same datasets. The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods, but lower than the results obtained with the best reference methods in our tests. The results show that convolutional neural networks with a very simple architecture can produce accurate results comparable to the average classical classification methods, and that several factors (including pre-processing, the architecture of the net and the optimization parameters) can be fundamental for the analysis of sEMG data. Larger networks can achieve higher accuracy on computer vision and object recognition tasks; this fact suggests that it may be interesting to evaluate whether larger networks can increase sEMG classification accuracy too. PMID:27656140

  15. Deep Learning with Convolutional Neural Networks Applied to Electromyography Data: A Resource for the Classification of Movements for Prosthetic Hands.

    PubMed

    Atzori, Manfredo; Cognolato, Matteo; Müller, Henning

    2016-01-01

    Natural control methods based on surface electromyography (sEMG) and pattern recognition are promising for hand prosthetics. However, the control robustness offered by scientific research is still not sufficient for many real-life applications, and commercial prostheses are capable of offering natural control for only a few movements. In recent years deep learning has revolutionized several fields of machine learning, including computer vision and speech recognition. Our objective is to test its methods for natural control of robotic hands via sEMG using a large number of intact subjects and amputees. We tested convolutional networks for the classification of an average of 50 hand movements in 67 intact subjects and 11 transradial amputees. The simple architecture of the neural network allowed us to make several tests in order to evaluate the effects of pre-processing, layer architecture, data augmentation and optimization. The classification results are compared with a set of classical classification methods applied to the same datasets. The classification accuracy obtained with convolutional neural networks using the proposed architecture is higher than the average results obtained with the classical classification methods, but lower than the results obtained with the best reference methods in our tests. The results show that convolutional neural networks with a very simple architecture can produce accurate results comparable to the average classical classification methods, and that several factors (including pre-processing, the architecture of the net and the optimization parameters) can be fundamental for the analysis of sEMG data. Larger networks can achieve higher accuracy on computer vision and object recognition tasks; this fact suggests that it may be interesting to evaluate whether larger networks can increase sEMG classification accuracy too.

  16. Controlling a human-computer interface system with a novel classification method that uses electrooculography signals.

    PubMed

    Wu, Shang-Lin; Liao, Lun-De; Lu, Shao-Wei; Jiang, Wei-Ling; Chen, Shi-An; Lin, Chin-Teng

    2013-08-01

    Electrooculography (EOG) signals can be used to control human-computer interface (HCI) systems, if properly classified. The ability to measure and process these signals may help HCI users to overcome many of the physical limitations and inconveniences in daily life. However, there are currently no effective multidirectional classification methods for monitoring eye movements. Here, we describe a classification method used in a wireless EOG-based HCI device for detecting eye movements in eight directions. This device includes wireless EOG signal acquisition components, wet electrodes and an EOG signal classification algorithm. The EOG classification algorithm is based on extracting features from the electrical signals corresponding to eight directions of eye movement (up, down, left, right, up-left, down-left, up-right, and down-right) and blinking. The recognition and processing of these eight different features were achieved in real-life conditions, demonstrating that this device can reliably measure the features of EOG signals. This system and its classification procedure provide an effective method for identifying eye movements. Additionally, it may be applied to study eye functions in real-life conditions in the near future.

  17. Strength Analysis on Ship Ladder Using Finite Element Method

    NASA Astrophysics Data System (ADS)

    Budianto; Wahyudi, M. T.; Dinata, U.; Ruddianto; Eko P., M. M.

    2018-01-01

    In designing a ship’s structure, the designer must refer to the rules of the applicable classification standards. In this case, a ladder (staircase) on a ferry ship was designed and reviewed against the loads experienced during ship operations, both while sailing and during port operations. The classification rules in ship design refer to the calculation of the structural components described in the classification calculation method, and the structure can be analysed using the Finite Element Method. The classification regulations used in the design of the ferry ship were those of BKI (Biro Klasifikasi Indonesia), so the material composition and mechanical properties of the materials should follow the classification of the vessel in question. The structure was analysed using a structural analysis package based on the Finite Element Method. The structural analysis of the ladder yielded strength and simulation results showing that the structure can withstand a load of 140 kg under static, dynamic, and impact conditions. The analysis also produced safety factor values indicating that the structure is safe without being excessively strong.

  18. New KF-PP-SVM classification method for EEG in brain-computer interfaces.

    PubMed

    Yang, Banghua; Han, Zhijun; Zan, Peng; Wang, Qian

    2014-01-01

    Classification methods are a crucial direction in the current study of brain-computer interfaces (BCIs). To improve the classification accuracy for electroencephalogram (EEG) signals, a novel KF-PP-SVM (kernel Fisher, posterior probability, and support vector machine) classification method is developed. Its detailed process entails the use of common spatial patterns to obtain features, based on which the within-class scatter is calculated. The scatter is then added into the kernel function of a radial basis function to construct a new kernel function, which is integrated into the SVM to obtain a new classification model. Finally, the output of the SVM is calculated based on posterior probability and the final recognition result is obtained. To evaluate the effectiveness of the proposed KF-PP-SVM method, EEG data collected in the laboratory were processed with four different classification schemes (KF-PP-SVM, KF-SVM, PP-SVM, and SVM). The results showed that the overall average improvements arising from the use of the KF-PP-SVM scheme over the KF-SVM, PP-SVM and SVM schemes are 2.49%, 5.83% and 6.49%, respectively.
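One plausible reading of folding the within-class scatter into an RBF kernel can be sketched as follows. Both helpers are hypothetical simplifications of the KF-SVM construction: `within_class_scatter` reduces the scatter to a scalar, and `kf_rbf` uses it to rescale the kernel width.

```python
from math import exp

def within_class_scatter(X, y):
    """Mean squared distance of samples to their class mean: a scalar
    stand-in for the within-class scatter computed from CSP features."""
    total = 0.0
    for c in set(y):
        pts = [x for x, lab in zip(X, y) if lab == c]
        mean = [sum(col) / len(pts) for col in zip(*pts)]
        total += sum(sum((xi - mi) ** 2 for xi, mi in zip(x, mean)) for x in pts)
    return total / len(X)

def kf_rbf(a, b, scatter, gamma=1.0):
    """RBF kernel whose width is inflated by the within-class scatter
    (an illustrative reading of the modified kernel, not the paper's formula)."""
    d2 = sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return exp(-gamma * d2 / (1.0 + scatter))

X = [[0.0], [2.0], [10.0], [12.0]]   # toy 1-D features
y = [0, 0, 1, 1]
print(within_class_scatter(X, y))    # 1.0
```

Widening the kernel when within-class scatter is large makes same-class samples look more similar to the SVM, which is the intuition behind the kernel Fisher modification.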

  19. An application to pulmonary emphysema classification based on model of texton learning by sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Min; Zhou, Xiangrong; Goshima, Satoshi; Chen, Huayue; Muramatsu, Chisako; Hara, Takeshi; Yokoyama, Ryojiro; Kanematsu, Masayuki; Fujita, Hiroshi

    2012-03-01

    We aim at using a new texton-based texture classification method for the classification of pulmonary emphysema in computed tomography (CT) images of the lungs. Unlike conventional computer-aided diagnosis (CAD) methods for pulmonary emphysema classification, in this paper the dictionary of textons is first learned by applying sparse representation (SR) to image patches in the training dataset. Then, the SR coefficients of the test images over the dictionary are used to construct histograms for texture representation. Finally, classification is performed using a nearest-neighbor classifier with a histogram dissimilarity measure as the distance. The proposed approach is tested on 3840 annotated regions of interest consisting of normal tissue and mild, moderate and severe pulmonary emphysema of three subtypes. The performance of the proposed system, with an accuracy of about 88%, is higher than that of the state-of-the-art method based on basic rotation-invariant local binary pattern histograms and the texture classification method based on texton learning by k-means, which performs almost the best among the other approaches in the literature.
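The final stage, nearest-neighbor classification with a histogram dissimilarity, can be sketched with the common chi-square measure. The abstract does not specify which dissimilarity is used, and the prototype histograms below are invented.

```python
def chi2(h, g):
    """Chi-square dissimilarity between two normalized histograms."""
    return 0.5 * sum((a - b) ** 2 / (a + b) for a, b in zip(h, g) if a + b > 0)

def nn_classify(hist, labeled_hists):
    """1-nearest-neighbor over (label, histogram) training pairs."""
    return min(labeled_hists, key=lambda lh: chi2(hist, lh[1]))[0]

prototypes = [("normal",    [0.7, 0.2, 0.1]),   # toy 3-bin texton histograms
              ("emphysema", [0.1, 0.3, 0.6])]
print(nn_classify([0.6, 0.3, 0.1], prototypes))  # normal
```

In the paper the histograms are much longer (one bin per texton in the learned dictionary), but the classification step has this shape.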

  20. Inductive interference in rapid transit signaling systems. volume 2. suggested test procedures.

    DOT National Transportation Integrated Search

    1987-03-31

    These suggested test procedures have been prepared in order to develop standard methods of analysis and testing to quantify and resolve issues of electromagnetic compatibility in rail transit operations. Electromagnetic interference, generated by rai...

  1. Unsupervised semantic indoor scene classification for robot vision based on context of features using Gist and HSV-SIFT

    NASA Astrophysics Data System (ADS)

    Madokoro, H.; Yamanashi, A.; Sato, K.

    2013-08-01

    This paper presents an unsupervised scene classification method for actualizing semantic recognition of indoor scenes. Background and foreground features are extracted using Gist and color scale-invariant feature transform (SIFT), respectively, as feature representations based on context. We used hue, saturation, and value SIFT (HSV-SIFT) because of its simple algorithm with low calculation costs. Our method creates bags of features by voting visual words, created from both feature descriptors, into a two-dimensional histogram. Moreover, our method generates labels as candidates of categories for time-series images while maintaining stability and plasticity together. Automatic labeling of category maps can be realized using labels created with adaptive resonance theory (ART) as teaching signals for counter propagation networks (CPNs). We evaluated our method for semantic scene classification using the KTH image database for robot localization (KTH-IDOL), which is widely used for robot localization and navigation. The mean classification accuracies of Gist, gray SIFT, one-class support vector machines (OC-SVM), position-invariant robust features (PIRF), and our method are, respectively, 39.7, 58.0, 56.0, 63.6, and 79.4%. The result of our method is 15.8% higher than that of PIRF. Moreover, we applied our method to fine classification using our own mobile robot, obtaining a mean classification accuracy of 83.2% for six zones.

  2. Automated diagnosis of myositis from muscle ultrasound: Exploring the use of machine learning and deep learning methods

    PubMed Central

    Burlina, Philippe; Billings, Seth; Joshi, Neil

    2017-01-01

    Objective: To evaluate the use of ultrasound coupled with machine learning (ML) and deep learning (DL) techniques for automated or semi-automated classification of myositis. Methods: Eighty subjects, comprising 19 with inclusion body myositis (IBM), 14 with polymyositis (PM), 14 with dermatomyositis (DM), and 33 normal (N) subjects, were included in this study, in which 3214 muscle ultrasound images of 7 muscles (observed bilaterally) were acquired. We considered three classification problems: (A) normal vs. affected (DM, PM, IBM); (B) normal vs. IBM patients; and (C) IBM vs. other types of myositis (DM or PM). We studied the use of an automated DL method using deep convolutional neural networks (DL-DCNNs) for diagnostic classification and compared it with a semi-automated conventional ML method based on random forests (ML-RF) and “engineered” features. We used the known clinical diagnosis as the gold standard for evaluating the performance of muscle classification. Results: The DL-DCNN method resulted in accuracies ± standard deviation of 76.2% ± 3.1% for problem (A), 86.6% ± 2.4% for (B) and 74.8% ± 3.9% for (C), while the ML-RF method led to accuracies of 72.3% ± 3.3% for problem (A), 84.3% ± 2.3% for (B) and 68.9% ± 2.5% for (C). Conclusions: This study demonstrates the application of machine learning methods for automatically or semi-automatically classifying inflammatory muscle disease using muscle ultrasound. Compared to the conventional random forest machine learning method used here, which has the drawback of requiring manual delineation of muscle/fat boundaries, DCNN-based classification by and large improved the accuracies in all classification problems while providing a fully automated approach to classification. PMID:28854220

  3. Fast-HPLC Fingerprinting to Discriminate Olive Oil from Other Edible Vegetable Oils by Multivariate Classification Methods.

    PubMed

    Jiménez-Carvelo, Ana M; González-Casado, Antonio; Pérez-Castaño, Estefanía; Cuadros-Rodríguez, Luis

    2017-03-01

    A new analytical method for the differentiation of olive oil from other vegetable oils using reversed-phase LC and applying chemometric techniques was developed. A 3 cm short column was used to obtain the chromatographic fingerprint of the methyl-transesterified fraction of each vegetable oil. The chromatographic analysis took only 4 min. The multivariate classification methods used were k-nearest neighbors, partial least-squares (PLS) discriminant analysis, one-class PLS, support vector machine classification, and soft independent modeling of class analogies. The discrimination of olive oil from other vegetable edible oils was evaluated by several classification quality metrics. Several strategies for the classification of the olive oil were used: one input-class, two input-class, and pseudo two input-class.

  4. Cluster Method Analysis of K. S. C. Image

    NASA Technical Reports Server (NTRS)

    Rodriguez, Joe, Jr.; Desai, M.

    1997-01-01

    Information obtained from satellite-based systems has moved to the forefront as a method for identifying many land cover types. Identification of different land features through remote sensing is an effective tool for regional and global assessment of geometric characteristics. Classification data acquired from remote sensing images have a wide variety of applications; in particular, the analysis of remote sensing images has special applications in the classification of various types of vegetation. Results obtained from classification studies of a particular area or region serve towards a greater understanding of what parameters (ecological, temporal, etc.) affect the region being analyzed. In this paper, we make a distinction between both types of classification approaches, although focus is given to the unsupervised classification method using 1987 Thematic Mapper (TM) images of Kennedy Space Center.

  5. Shear wave speed recovery using moving interference patterns obtained in sonoelastography experiments.

    PubMed

    McLaughlin, Joyce; Renzi, Daniel; Parker, Kevin; Wu, Zhe

    2007-04-01

    Two new experiments were created to characterize the elasticity of soft tissue using sonoelastography. In both experiments the spectral variance image displayed on a GE LOGIC 700 ultrasound machine shows a moving interference pattern that travels at a very small fraction of the shear wave speed. The goal of this paper is to devise and test algorithms to calculate the speed of the moving interference pattern using the arrival times of these same patterns. A geometric optics expansion is used to obtain Eikonal equations relating the moving interference pattern arrival times to the moving interference pattern speed and then to the shear wave speed. A cross-correlation procedure is employed to find the arrival times; and an inverse Eikonal solver called the level curve method computes the speed of the interference pattern. The algorithm is tested on data from a phantom experiment performed at the University of Rochester Center for Biomedical Ultrasound.
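Estimating arrival-time differences by cross-correlation, as in the procedure above, can be sketched for sampled waveforms. The `best_lag` helper is a hypothetical simplification of the paper's cross-correlation step; the toy pulse data are invented.

```python
def best_lag(ref, sig, max_lag):
    """Lag (in samples) that maximizes the cross-correlation of `sig`
    against `ref`; the arrival-time difference is lag / sampling_rate."""
    def xcorr(lag):
        return sum(ref[i] * sig[i + lag] for i in range(len(ref))
                   if 0 <= i + lag < len(sig))
    return max(range(-max_lag, max_lag + 1), key=xcorr)

ref = [0, 0, 1, 2, 1, 0, 0, 0]   # reference interference pattern
sig = [0, 0, 0, 0, 1, 2, 1, 0]   # same pattern arriving 2 samples later
print(best_lag(ref, sig, 3))  # 2
```

Arrival times estimated this way at many spatial positions are the inputs to the inverse Eikonal (level curve) solver that recovers the pattern speed.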

  6. Semi-Automated Classification of Seafloor Data Collected on the Delmarva Inner Shelf

    NASA Astrophysics Data System (ADS)

    Sweeney, E. M.; Pendleton, E. A.; Brothers, L. L.; Mahmud, A.; Thieler, E. R.

    2017-12-01

    We tested automated classification methods on acoustic bathymetry and backscatter data collected by the U.S. Geological Survey (USGS) and the National Oceanic and Atmospheric Administration (NOAA) on the Delmarva inner continental shelf to efficiently and objectively identify sediment texture and geomorphology. Automated classification techniques are generally less subjective and take significantly less time than manual classification methods. We used a semi-automated process combining unsupervised and supervised classification techniques to characterize the seafloor based on bathymetric slope and relative backscatter intensity. Statistical comparison of our automated classification results with those of a manual classification conducted on a subset of the acoustic imagery indicates that our automated method was highly accurate (95% total accuracy and 93% kappa). Our methods resolve sediment ridges, zones of flat seafloor, and areas of high and low backscatter. We compared our classification scheme with mean grain-size statistics of samples collected in the study area and found strong correlations between backscatter intensity and sediment texture. High-backscatter zones are associated with the presence of gravel and shells mixed with sand, and low-backscatter areas are primarily clean sand or sand mixed with mud. Slope classes further elucidate textural and geomorphologic differences in the seafloor, such that steep slopes (>0.35°) with high backscatter are most often associated with the updrift side of sand ridges and bedforms, whereas low slopes with high backscatter correspond to coarse lag or shell deposits. Low backscatter and high slopes are most often found on the downdrift side of ridges and bedforms, and low backscatter and low slopes identify swale areas and sand sheets. We found that poor acoustic data quality was the most significant cause of inaccurate classification results, which required additional user input to mitigate. 
Our method worked well along the primarily sandy Delmarva inner continental shelf and outlines an approach that can be used to efficiently and consistently produce surficial geologic interpretations of the seafloor from ground-truthed geophysical or hydrographic data.
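The slope/backscatter classes described above suggest a simple decision rule. The 0.35° slope break comes from the text, while the function name and class labels are illustrative:

```python
def seafloor_class(slope_deg, backscatter):
    """Map slope (degrees) and relative backscatter ('high' or 'low') to the
    four seafloor settings described in the text; labels are illustrative."""
    steep = slope_deg > 0.35
    if backscatter == "high":
        return "updrift ridge/bedform flank" if steep else "coarse lag or shell deposit"
    else:
        return "downdrift ridge/bedform flank" if steep else "swale or sand sheet"

print(seafloor_class(0.5, "high"))  # updrift ridge/bedform flank
print(seafloor_class(0.1, "low"))   # swale or sand sheet
```

In the actual workflow these classes come from unsupervised clustering followed by supervised labeling rather than fixed thresholds, but the resulting map has this structure.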

  7. Basis of Criminalistic Classification of a Person in Republic Kazakhstan and Republic Mongolia

    ERIC Educational Resources Information Center

    Abdilov, Kanat S.; Zusbaev, Baurzan T.; Naurysbaev, Erlan A.; Nukiev, Berik A.; Nurkina, Zanar B.; Myrzahanov, Erlan N.; Urazalin, Galym T.

    2016-01-01

    This article reviews problems in building a criminalistic classification of a person. Formal legal, logical, and comparative legal methods were used in the work. The authors describe the kinds of classification, reveal the meaning of classification in criminalistic systematics, and show the types of grounds for the criminalistic classification of a person…

  8. Brain tumor segmentation based on local independent projection-based classification.

    PubMed

    Huang, Meiyan; Yang, Wei; Wu, Yao; Jiang, Jun; Chen, Wufan; Feng, Qianjin

    2014-10-01

    Brain tumor segmentation is an important procedure for early tumor diagnosis and radiotherapy planning. Although numerous brain tumor segmentation methods have been presented, enhancing tumor segmentation methods is still challenging because brain tumor MRI images exhibit complex characteristics, such as high diversity in tumor appearance and ambiguous tumor boundaries. To address this problem, we propose a novel automatic tumor segmentation method for MRI images. This method treats tumor segmentation as a classification problem. Additionally, the local independent projection-based classification (LIPC) method is used to classify each voxel into different classes. A novel classification framework is derived by introducing the local independent projection into the classical classification model. Locality is important in the calculation of local independent projections for LIPC. Locality is also considered in determining whether local anchor embedding is more applicable for solving linear projection weights compared with other coding methods. Moreover, LIPC considers the data distribution of different classes by learning a softmax regression model, which can further improve classification performance. In this study, 80 brain tumor MRI images with ground truth data are used as training data and 40 images without ground truth data are used as testing data. The segmentation results of the testing data are evaluated by an online evaluation tool. The average Dice similarities of the proposed method for segmenting the complete tumor, tumor core, and contrast-enhancing tumor on real patient data are 0.84, 0.685, and 0.585, respectively. These results are comparable to those of other state-of-the-art methods.

  9. Criteria for Mitral Regurgitation Classification were inadequate for Dilated Cardiomyopathy

    PubMed Central

    Mancuso, Frederico José Neves; Moisés, Valdir Ambrosio; Almeida, Dirceu Rodrigues; Oliveira, Wercules Antonio; Poyares, Dalva; Brito, Flavio Souza; de Paola, Angelo Amato Vincenzo; Carvalho, Antonio Carlos Camargo; Campos, Orlando

    2013-01-01

    Background Mitral regurgitation (MR) is common in patients with dilated cardiomyopathy (DCM). It is unknown whether the criteria for MR classification are adequate for patients with DCM. Objective We aimed to evaluate the agreement among the four most common echocardiographic methods for MR classification. Methods Ninety patients with DCM were included. Functional MR was classified using four echocardiographic methods: color flow jet area (JA), vena contracta (VC), effective regurgitant orifice area (ERO) and regurgitant volume (RV). MR was classified as mild, moderate or important according to the American Society of Echocardiography criteria and by dividing the values into terciles. The Kappa test was used to evaluate whether the methods agreed, and the Pearson correlation coefficient was used to evaluate the correlation between the absolute values of each method. Results MR classification according to each method was as follows: JA: 26 mild, 44 moderate, 20 important; VC: 12 mild, 72 moderate, 6 important; ERO: 70 mild, 15 moderate, 5 important; RV: 70 mild, 16 moderate, 4 important. The agreement was poor among methods (kappa = 0.11; p < 0.001). A strong correlation was observed between the absolute values of each method, ranging from 0.70 to 0.95 (p < 0.01), and the agreement was higher when values were divided into terciles (kappa = 0.44; p < 0.01). Conclusion The use of conventional echocardiographic criteria for MR classification seems inadequate in patients with DCM. It is necessary to establish new cutoff values for MR classification in these patients. PMID:24100692

  10. Distribution of Response Time, Cortical, and Cardiac Correlates during Emotional Interference in Persons with Subclinical Psychotic Symptoms

    PubMed Central

    Holper, Lisa K. B.; Aleksandrowicz, Alekandra; Müller, Mario; Ajdacic-Gross, Vladeta; Haker, Helene; Fallgatter, Andreas J.; Hagenmuller, Florence; Kawohl, Wolfram; Rössler, Wulf

    2016-01-01

    A psychosis phenotype can be observed below the threshold of clinical detection. The study aimed to investigate whether subclinical psychotic symptoms are associated with deficits in controlling emotional interference, and whether cortical brain and cardiac correlates of these deficits can be detected using functional near-infrared spectroscopy (fNIRS). A data set derived from a community sample was obtained from the Zurich Program for Sustainable Development of Mental Health Services. 174 subjects (mean age 29.67 ± 6.41, 91 females) were assigned to four groups ranging from low to high levels of subclinical psychotic symptoms (derived from the Symptom Checklist-90-R). Emotional interference was assessed using the emotional Stroop task comprising neutral, positive, and negative conditions. Statistical distributional methods based on delta plots [behavioral response time (RT) data] and quantile analysis (fNIRS data) were applied to evaluate the emotional interference effects. Results showed that both interference effects and disorder-specific (i.e., group-specific) effects could be detected, based on behavioral RTs, cortical hemodynamic signals (brain correlates), and heart rate variability (cardiac correlates). Subjects with high compared to low subclinical psychotic symptoms revealed significantly reduced amplitudes in dorsolateral prefrontal cortices (interference effect, p < 0.001) and middle temporal gyrus (disorder-specific group effect, p < 0.001), supported by behavioral and heart rate results. The present findings indicate that distributional analysis methods can support the detection of emotional interference effects in the emotional Stroop. The results suggested that subjects with high subclinical psychosis exhibit enhanced emotional interference effects. Based on these observations, subclinical psychosis may therefore prove to represent a valid extension of the clinical psychosis phenotype. PMID:27660608

  11. Distribution of Response Time, Cortical, and Cardiac Correlates during Emotional Interference in Persons with Subclinical Psychotic Symptoms.

    PubMed

    Holper, Lisa K B; Aleksandrowicz, Alekandra; Müller, Mario; Ajdacic-Gross, Vladeta; Haker, Helene; Fallgatter, Andreas J; Hagenmuller, Florence; Kawohl, Wolfram; Rössler, Wulf

    2016-01-01

    A psychosis phenotype can be observed below the threshold of clinical detection. The study aimed to investigate whether subclinical psychotic symptoms are associated with deficits in controlling emotional interference, and whether cortical brain and cardiac correlates of these deficits can be detected using functional near-infrared spectroscopy (fNIRS). A data set derived from a community sample was obtained from the Zurich Program for Sustainable Development of Mental Health Services. 174 subjects (mean age 29.67 ± 6.41, 91 females) were assigned to four groups ranging from low to high levels of subclinical psychotic symptoms (derived from the Symptom Checklist-90-R). Emotional interference was assessed using the emotional Stroop task comprising neutral, positive, and negative conditions. Statistical distributional methods based on delta plots [behavioral response time (RT) data] and quantile analysis (fNIRS data) were applied to evaluate the emotional interference effects. Results showed that both interference effects and disorder-specific (i.e., group-specific) effects could be detected, based on behavioral RTs, cortical hemodynamic signals (brain correlates), and heart rate variability (cardiac correlates). Subjects with high compared to low subclinical psychotic symptoms revealed significantly reduced amplitudes in dorsolateral prefrontal cortices (interference effect, p < 0.001) and middle temporal gyrus (disorder-specific group effect, p < 0.001), supported by behavioral and heart rate results. The present findings indicate that distributional analysis methods can support the detection of emotional interference effects in the emotional Stroop. The results suggested that subjects with high subclinical psychosis exhibit enhanced emotional interference effects. Based on these observations, subclinical psychosis may therefore prove to represent a valid extension of the clinical psychosis phenotype.

  12. An enhanced data visualization method for diesel engine malfunction classification using multi-sensor signals.

    PubMed

    Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan

    2015-10-21

    The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method, t-distributed stochastic neighbor embedding (t-SNE), provides an effective way to implement data visualization for complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization, and thus, should be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method to deal with the high-dimensional data that are collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion. Then the high-dimensional data are visualized in two-dimensional space. In tests on UCI datasets, FSS-t-SNE effectively improves classification accuracy. An experiment was performed with a high-power marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine.
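    The pipeline of scoring features, keeping a high-scoring subset, and embedding with t-SNE can be sketched as follows. This is a hedged illustration on a stock dataset: scikit-learn's ANOVA F-score stands in for the paper's feature subset score criterion, and the Iris data replaces the diesel engine signals.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.feature_selection import f_classif
from sklearn.manifold import TSNE

X, y = load_iris(return_X_y=True)
scores, _ = f_classif(X, y)              # univariate scores as a stand-in criterion
keep = np.argsort(scores)[-2:]           # retain the highest-scoring feature subset
X_sub = X[:, keep]

# embed the reduced feature set in two-dimensional space for visualization
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X_sub)
print(emb.shape)  # one 2-D point per sample
```

    Plotting `emb` colored by class label then gives the kind of 2-D visualization the paper evaluates.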

  13. An Enhanced Data Visualization Method for Diesel Engine Malfunction Classification Using Multi-Sensor Signals

    PubMed Central

    Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan

    2015-01-01

    The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method, t-distributed stochastic neighbor embedding (t-SNE), provides an effective way to implement data visualization for complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization, and thus, should be eliminated a priori. This paper proposes a feature subset score based t-SNE (FSS-t-SNE) data visualization method to deal with the high-dimensional data that are collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion. Then the high-dimensional data are visualized in two-dimensional space. In tests on UCI datasets, FSS-t-SNE effectively improves classification accuracy. An experiment was performed with a high-power marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine. PMID:26506347

  14. Computer classification of remotely sensed multispectral image data by extraction and classification of homogeneous objects

    NASA Technical Reports Server (NTRS)

    Kettig, R. L.

    1975-01-01

    A method of classification of digitized multispectral images is developed and experimentally evaluated on actual earth resources data collected by aircraft and satellite. The method is designed to exploit the characteristic dependence between adjacent states of nature that is neglected by the more conventional simple-symmetric decision rule. Thus contextual information is incorporated into the classification scheme. The principal reason for doing this is to improve the accuracy of the classification. For general types of dependence this would generally require more computation per resolution element than the simple-symmetric classifier. But when the dependence occurs in the form of redundancy, the elements can be classified collectively, in groups, thereby reducing the number of classifications required.

  15. Evaluation of air quality zone classification methods based on ambient air concentration exposure.

    PubMed

    Freeman, Brian; McBean, Ed; Gharabaghi, Bahram; Thé, Jesse

    2017-05-01

    Air quality zones are used by regulatory authorities to implement ambient air standards in order to protect human health. Air quality measurements at discrete air monitoring stations are critical tools to determine whether an air quality zone complies with local air quality standards or is noncompliant. This study presents a novel approach for evaluation of air quality zone classification methods by breaking the concentration distribution of a pollutant measured at an air monitoring station into compliance and exceedance probability density functions (PDFs) and then using Monte Carlo analysis with the Central Limit Theorem to estimate long-term exposure. The purpose of this paper is to compare the risk associated with selecting one ambient air classification approach over another by testing the possible exposure an individual living within a zone may face. The chronic daily intake (CDI) is utilized to compare different pollutant exposures over the classification duration of 3 years between two classification methods. Historical data collected from air monitoring stations in Kuwait are used to build representative models of 1-hr NO2 and 8-hr O3 within a zone that meets the compliance requirements of each method. The first method, the "3 Strike" method, is a conservative approach based on a winner-take-all approach common with most compliance classification methods, while the second, the 99% Rule method, allows for more robust analyses and incorporates long-term trends. A Monte Carlo analysis is used to model the CDI for each pollutant and each method with the zone at a single station and with multiple stations. The model assumes that the zone is already in compliance with air quality standards over the 3 years under the different classification methodologies. 
The model shows that while the CDI of the two methods differs by 2.7% over the exposure period for the single station case, the large number of samples taken over the duration period impacts the sensitivity of the statistical tests, causing the null hypothesis to fail. Local air quality managers can use either methodology to classify the compliance of an air zone, but must accept that the 99% Rule method may cause exposures that are statistically more significant than the 3 Strike method. A novel method using the Central Limit Theorem and Monte Carlo analysis is used to directly compare different air standard compliance classification methods by estimating the chronic daily intake of pollutants. This method allows air quality managers to rapidly see how individual classification methods may impact individual population groups, as well as to evaluate different pollutants based on dosage and exposure when complete health impacts are not known.
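    The core of the comparison, estimating a chronic daily intake from a concentration PDF via the Central Limit Theorem and Monte Carlo draws, can be sketched as below. All numbers (the lognormal parameters, intake rate, body weight, exposure factors) are invented placeholders, not the paper's fitted Kuwait data.

```python
import numpy as np

rng = np.random.default_rng(42)

# assumed exposure factors: inhalation rate (m3/day), exposure frequency
# (days/yr), exposure duration (yr), body weight (kg), averaging time (days)
IR, EF, ED, BW, AT = 20.0, 350.0, 3.0, 70.0, 3 * 365.0

# assumed lognormal compliance PDF for 1-hr concentrations (ug/m3)
mu_ln, sd_ln = np.log(40.0), 0.5
m = np.exp(mu_ln + sd_ln**2 / 2)              # lognormal mean
s = m * np.sqrt(np.exp(sd_ln**2) - 1)         # lognormal standard deviation

# CLT: the mean of n_hours draws is ~ normal(m, s / sqrt(n_hours))
n_trials, n_hours = 10_000, 24 * 365 * 3
mean_conc = rng.normal(m, s / np.sqrt(n_hours), size=n_trials)

# chronic daily intake in mg/(kg*day), converting ug -> mg
cdi = mean_conc * 1e-3 * IR * EF * ED / (BW * AT)
print(cdi.mean())
```

    Running the same simulation under each classification method's compliant concentration PDF and comparing the resulting CDI distributions is the essence of the paper's test.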

  16. Automated Method of Frequency Determination in Software Metric Data Through the Use of the Multiple Signal Classification (MUSIC) Algorithm

    DTIC Science & Technology

    1998-06-26

    METHOD OF FREQUENCY DETERMINATION IN SOFTWARE METRIC DATA THROUGH THE USE OF THE MULTIPLE SIGNAL CLASSIFICATION (MUSIC) ALGORITHM ... STATEMENT OF ... graph showing the estimated power spectral density (PSD) generated by the multiple signal classification (MUSIC) algorithm from the data set used ... implemented in this module; however, it is preferred to use the Multiple Signal Classification (MUSIC) algorithm. The MUSIC algorithm is
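    For readers unfamiliar with MUSIC, a minimal frequency-estimation sketch follows. It is not the patent's implementation: the correlation order m, signal-subspace dimension p, and the test signal are all assumed for illustration.

```python
import numpy as np

def music_freq(x, p=2, m=20, n_grid=2001):
    """Estimate the dominant frequency (cycles/sample) of a real series x.
    p = signal-subspace dimension (2 per real sinusoid), m = correlation order."""
    X = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
    R = X.T @ X / len(X)                       # estimated autocorrelation matrix
    _, v = np.linalg.eigh(R)                   # eigenvalues in ascending order
    En = v[:, : m - p]                         # noise-subspace eigenvectors
    grid = np.linspace(0.0, 0.5, n_grid)
    e = np.exp(-2j * np.pi * np.outer(np.arange(m), grid))   # steering vectors
    pseudo = 1.0 / np.linalg.norm(En.conj().T @ e, axis=0) ** 2
    return grid[np.argmax(pseudo)]             # peak of the MUSIC pseudospectrum

rng = np.random.default_rng(1)
t = np.arange(256)
x = np.sin(2 * np.pi * 0.1 * t) + 0.1 * rng.normal(size=256)
print(music_freq(x))   # close to the true 0.1 cycles/sample
```

    The pseudospectrum peaks where a steering vector is nearly orthogonal to the noise subspace, which is what lets MUSIC resolve frequencies from short, noisy records such as software metric time series.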

  17. Dimensionality-varied deep convolutional neural network for spectral-spatial classification of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Qu, Haicheng; Liang, Xuejian; Liang, Shichao; Liu, Wanjun

    2018-01-01

    Many methods of hyperspectral image classification have been proposed recently, and the convolutional neural network (CNN) achieves outstanding performance. However, spectral-spatial classification with CNN requires an excessively large model, tremendous computations, and a complex network, and CNN is generally unable to use the noisy bands caused by water-vapor absorption. A dimensionality-varied CNN (DV-CNN) is proposed to address these issues. There are four stages in DV-CNN and the dimensionalities of spectral-spatial feature maps vary with the stages. DV-CNN can reduce the computation and simplify the structure of the network. All feature maps are processed by more kernels in higher stages to extract more precise features. DV-CNN also improves the classification accuracy and enhances the robustness to water-vapor absorption bands. The experiments are performed on the Indian Pines and Pavia University scene data sets. The classification performance of DV-CNN is compared with state-of-the-art methods, which include variations of CNN as well as traditional and other deep-learning methods. An experiment analyzing the performance of DV-CNN itself is also carried out. The experimental results demonstrate that DV-CNN outperforms state-of-the-art methods for spectral-spatial classification and is also robust to water-vapor absorption bands. Moreover, reasonable parameter selection is effective in improving classification accuracy.

  18. Impervious surface mapping with Quickbird imagery

    PubMed Central

    Lu, Dengsheng; Hetrick, Scott; Moran, Emilio

    2010-01-01

    This research selects two study areas with different urban developments, sizes, and spatial patterns to explore suitable methods for mapping impervious surface distribution using Quickbird imagery. The selected methods include per-pixel based supervised classification, segmentation-based classification, and a hybrid method. A comparative analysis of the results indicates that per-pixel based supervised classification produces a large number of “salt-and-pepper” pixels, and segmentation-based methods can significantly reduce this problem. However, neither method can effectively resolve the spectral confusion of impervious surfaces with water/wetland and bare soils or the impacts of shadows. In order to accurately map impervious surface distribution from Quickbird images, manual editing is necessary and may be the only way to separate impervious surfaces from spectrally confused land covers and shadowed areas. This research indicates that the hybrid method consisting of thresholding techniques, unsupervised classification, and limited manual editing provides the best performance. PMID:21643434

  19. A wall interference assessment/correction system

    NASA Technical Reports Server (NTRS)

    Lo, Ching F.; Ulbrich, N.; Sickles, W. L.; Qian, Cathy X.

    1992-01-01

    A Wall Signature method, the Hackett method, has been selected to be adapted for the 12-ft Wind Tunnel wall interference assessment/correction (WIAC) system in the present phase. This method uses limited measurements of the static pressure at the wall, in conjunction with the solid wall boundary condition, to determine the strength and distribution of singularities representing the test article. The singularities are used in turn for estimating wall interferences at the model location. The Wall Signature method will be formulated for application to the unique geometry of the 12-ft Tunnel. The development and implementation of a working prototype will be completed, delivered and documented with a software manual. The WIAC code will be validated by conducting numerically simulated experiments rather than actual wind tunnel experiments. The simulations will be used to generate both free-air and confined wind-tunnel flow fields for each of the test articles over a range of test configurations. Specifically, the pressure signature at the test section wall will be computed for the tunnel case to provide the simulated 'measured' data. These data will serve as the input for the WIAC method (the Wall Signature method). The performance of the WIAC method then may be evaluated by comparing the corrected parameters with those for the free-air simulation. Each set of wind tunnel/test article numerical simulations provides data to validate the WIAC method. A numerical wind tunnel test simulation is initiated to validate the WIAC methods developed in the project. In the present reported period, the blockage correction has been developed and implemented for a rectangular tunnel as well as the 12-ft Pressure Tunnel. An improved wall interference assessment and correction method for three-dimensional wind tunnel testing is presented in the appendix.

  20. Enantiomeric analysis of overlapped chromatographic profiles in the presence of interferences. Determination of ibuprofen in a pharmaceutical formulation containing homatropine.

    PubMed

    Padró, J M; Osorio-Grisales, J; Arancibia, J A; Olivieri, A C; Castells, C B

    2016-10-07

    In this work, we studied the combination of chemometric methods with chromatographic separations as a strategy applied to the analysis of enantiomers when complete enantioseparation is difficult or requires long analysis times and, in addition, the target signals have interference from the matrix. We present the determination of ibuprofen enantiomers in pharmaceutical formulations containing homatropine as interference by chiral HPLC-DAD detection in combination with partial least-squares algorithms. The method has been applied to samples containing enantiomeric ratios from 95:5 to 99.5:0.5 and coelution of interferents. The results were validated using univariate calibration in the absence of homatropine. The relative error of the method was less than 4.0% for both enantiomers. Limits of detection (LOD) and quantification (LOQ) for (S)-(+)-ibuprofen were 4.96×10^-10 and 1.50×10^-9 mol, respectively; for (R)-(-)-ibuprofen they were 1.60×10^-11 and 4.85×10^-11 mol, respectively. Finally, the chemometric method was applied to the determination of enantiomeric purity of commercial pharmaceuticals. The ultimate goal of this research was the development of rapid, reliable, and robust methods for assessing enantiomeric purity by a conventional diode array detector assisted by chemometric tools. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Automatic adventitious respiratory sound analysis: A systematic review.

    PubMed

    Pramono, Renard Xaviero Adhi; Bowyer, Stuart; Rodriguez-Villegas, Esther

    2017-01-01

    Automatic detection or classification of adventitious sounds is useful to assist physicians in diagnosing or monitoring diseases such as asthma, Chronic Obstructive Pulmonary Disease (COPD), and pneumonia. While computerised respiratory sound analysis, specifically for the detection or classification of adventitious sounds, has recently been the focus of an increasing number of studies, a standardised approach and comparison has not been well established. To provide a review of existing algorithms for the detection or classification of adventitious respiratory sounds. This systematic review provides a complete summary of methods used in the literature to give a baseline for future works. A systematic review of English articles published between 1938 and 2016, searched using the Scopus (1938-2016) and IEEExplore (1984-2016) databases. Additional articles were further obtained by references listed in the articles found. Search terms included adventitious sound detection, adventitious sound classification, abnormal respiratory sound detection, abnormal respiratory sound classification, wheeze detection, wheeze classification, crackle detection, crackle classification, rhonchi detection, rhonchi classification, stridor detection, stridor classification, pleural rub detection, pleural rub classification, squawk detection, and squawk classification. Only articles were included that focused on adventitious sound detection or classification, based on respiratory sounds, with performance reported and sufficient information provided to be approximately repeated. Investigators extracted data about the adventitious sound type analysed, approach and level of analysis, instrumentation or data source, location of sensor, amount of data obtained, data management, features, methods, and performance achieved. A total of 77 reports from the literature were included in this review. 
55 (71.43%) of the studies focused on wheeze, 40 (51.95%) on crackle, 9 (11.69%) on stridor, 9 (11.69%) on rhonchi, and 18 (23.38%) on other sounds such as pleural rub, squawk, as well as the pathology. Instrumentation used to collect data included microphones, stethoscopes, and accelerometers. Several references obtained data from online repositories or book audio CD companions. Detection or classification methods used varied from empirically determined thresholds to more complex machine learning techniques. Performance reported in the surveyed works was converted to accuracy measures for data synthesis. Direct comparison of the performance of surveyed works cannot be performed as the input data used by each were different. A standard validation method has not been established, resulting in different works using different methods and performance measure definitions. A review of the literature was performed to summarise different analysis approaches, features, and methods used for the analysis. The performance of recent studies showed a high agreement with conventional non-automatic identification. This suggests that automated adventitious sound detection or classification is a promising solution to overcome the limitations of conventional auscultation and to assist in the monitoring of relevant diseases.

  2. Automatic adventitious respiratory sound analysis: A systematic review

    PubMed Central

    Bowyer, Stuart; Rodriguez-Villegas, Esther

    2017-01-01

    Background Automatic detection or classification of adventitious sounds is useful to assist physicians in diagnosing or monitoring diseases such as asthma, Chronic Obstructive Pulmonary Disease (COPD), and pneumonia. While computerised respiratory sound analysis, specifically for the detection or classification of adventitious sounds, has recently been the focus of an increasing number of studies, a standardised approach and comparison has not been well established. Objective To provide a review of existing algorithms for the detection or classification of adventitious respiratory sounds. This systematic review provides a complete summary of methods used in the literature to give a baseline for future works. Data sources A systematic review of English articles published between 1938 and 2016, searched using the Scopus (1938-2016) and IEEExplore (1984-2016) databases. Additional articles were further obtained by references listed in the articles found. Search terms included adventitious sound detection, adventitious sound classification, abnormal respiratory sound detection, abnormal respiratory sound classification, wheeze detection, wheeze classification, crackle detection, crackle classification, rhonchi detection, rhonchi classification, stridor detection, stridor classification, pleural rub detection, pleural rub classification, squawk detection, and squawk classification. Study selection Only articles were included that focused on adventitious sound detection or classification, based on respiratory sounds, with performance reported and sufficient information provided to be approximately repeated. Data extraction Investigators extracted data about the adventitious sound type analysed, approach and level of analysis, instrumentation or data source, location of sensor, amount of data obtained, data management, features, methods, and performance achieved. Data synthesis A total of 77 reports from the literature were included in this review. 
55 (71.43%) of the studies focused on wheeze, 40 (51.95%) on crackle, 9 (11.69%) on stridor, 9 (11.69%) on rhonchi, and 18 (23.38%) on other sounds such as pleural rub, squawk, as well as the pathology. Instrumentation used to collect data included microphones, stethoscopes, and accelerometers. Several references obtained data from online repositories or book audio CD companions. Detection or classification methods used varied from empirically determined thresholds to more complex machine learning techniques. Performance reported in the surveyed works was converted to accuracy measures for data synthesis. Limitations Direct comparison of the performance of surveyed works cannot be performed as the input data used by each were different. A standard validation method has not been established, resulting in different works using different methods and performance measure definitions. Conclusion A review of the literature was performed to summarise different analysis approaches, features, and methods used for the analysis. The performance of recent studies showed a high agreement with conventional non-automatic identification. This suggests that automated adventitious sound detection or classification is a promising solution to overcome the limitations of conventional auscultation and to assist in the monitoring of relevant diseases. PMID:28552969

  3. Classification and mensuration of LACIE segments

    NASA Technical Reports Server (NTRS)

    Heydorn, R. P.; Bizzell, R. M.; Quirein, J. A.; Abotteen, K. M.; Sumner, C. A. (Principal Investigator)

    1979-01-01

    The theory of classification methods and the functional steps in the manual training process used in the three phases of LACIE are discussed. The major problems that arose in using a procedure for manually training a classifier and a method of machine classification are discussed to reveal the motivation that led to a redesign for the third LACIE phase.

  4. Improving rainfall estimation from commercial microwave links using METEOSAT SEVIRI cloud cover information

    NASA Astrophysics Data System (ADS)

    Boose, Yvonne; Doumounia, Ali; Chwala, Christian; Moumouni, Sawadogo; Zougmoré, François; Kunstmann, Harald

    2017-04-01

    The number of rain gauges is declining worldwide. A recent promising method for alternative precipitation measurements is to derive rain rates from the attenuation of the microwave signal between remote antennas of mobile phone base stations, so-called commercial microwave links (CMLs). In European countries, such as Germany, the CML technique can be used as a complementary method to the existing gauge and radar networks, improving their products, for example, in mountainous terrain and urban areas. In West African countries, where a dense gauge or radar network is absent, the number of mobile phone users is rapidly increasing and so are the CML networks. Hence, the CML-derived precipitation measurements have high potential for applications such as flood warning and support of agricultural planning in this region. For typical CML bandwidths (10-40 GHz), the relationship of attenuation to rain rate is quasi-linear. However, humidity, wet antennas, or electronic noise can also cause fluctuations in the signal. To distinguish these fluctuations from actual attenuation due to rain, a temporal wet (rain event occurred)/dry (no rain event) classification is usually necessary. In dense CML networks this is possible by correlating neighboring CML time series. Another option is to use the correlation between signal time series of different frequencies or bidirectional signals. The CML network in rural areas is typically not dense enough for correlation analysis, and often only one polarization and one frequency are available along a CML. In this work we therefore use cloud cover information derived from the Spinning Enhanced Visible and Infrared Imager (SEVIRI) radiometer onboard the geostationary satellite METEOSAT for a wet (pixels along link are cloud covered)/dry (no cloud along link) classification. 
We compare results for CMLs in Burkina Faso and Germany, which differ meteorologically (rain rate and duration, droplet size distributions) and technically (CML frequencies, lengths, signal level) and use rain gauge data as ground truth for validation.

  5. A new tool for post-AGB SED classification

    NASA Astrophysics Data System (ADS)

    Bendjoya, P.; Suarez, O.; Galluccio, L.; Michel, O.

    We present the results of an unsupervised classification method applied to a set of 344 spectral energy distributions (SEDs) of post-AGB stars extracted from the Torun catalogue of Galactic post-AGB stars. The aim is an unbiased classification of post-AGB stars based on the information contained in the IR region of the SED (fluxes, IR excess, colours). We used data from the IRAS and MSX satellites and from the 2MASS survey. We applied a classification method based on the construction of a minimal spanning tree (MST) over the dataset with Prim's algorithm. In order to build this tree, different metrics were tested on both fluxes and colour indices. Our method classifies the set of 344 post-AGB stars into 9 distinct groups according to their SEDs.
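    The MST construction at the heart of the method can be sketched with a naive Prim's algorithm. The five 2-D points and the plain Euclidean metric are illustrative only; the paper works on fluxes and colour indices and tests several metrics.

```python
import numpy as np

def prim_mst(points):
    """Return the n-1 edges of a minimum spanning tree (naive O(n^3) Prim)."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        best = None
        for i in in_tree:                       # cheapest edge leaving the tree
            for j in range(n):
                if j not in in_tree and (best is None or d[i, j] < best[2]):
                    best = (i, j, d[i, j])
        edges.append((best[0], best[1]))
        in_tree.add(best[1])
    return edges   # cutting the longest edges then yields the groups

pts = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [10.0, 10.0], [10.0, 11.0]])
print(prim_mst(pts))   # 4 edges; one long edge bridges the two clusters
```

    Removing the few longest MST edges partitions the data into connected groups, which is how an MST-based method can separate SED classes without supervision.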

  6. Interference Method for Obtaining the Potential Flow Past an Arbitrary Cascade of Airfoils

    DTIC Science & Technology

    1947-05-01

    NATIONAL ADVISORY COMMITTEE FOR AERONAUTICS TECHNICAL NOTE No. 1252 INTERFERENCE METHOD FOR OBTAINING THE POTENTIAL FLOW PAST ... infinite lattices. Most of these studies involve approximate procedures (for example, references 1 to 3) or present solutions for special classes of shapes ... of interfering bodies ...

  7. Quantitative interference by cysteine and N-acetylcysteine metabolites during the LC-MS/MS bioanalysis of a small molecule.

    PubMed

    Barricklow, Jason; Ryder, Tim F; Furlong, Michael T

    2009-08-01

    During LC-MS/MS quantification of a small molecule in human urine samples from a clinical study, an unexpected peak was observed to nearly co-elute with the analyte of interest in many study samples. Improved chromatographic resolution revealed the presence of at least 3 non-analyte peaks, which were identified as cysteine metabolites and N-acetyl (mercapturic acid) derivatives thereof. These metabolites produced artifact responses in the parent compound MRM channel due to decomposition in the ionization source of the mass spectrometer. Quantitative comparison of the analyte concentrations in study samples using the original chromatographic method and the improved chromatographic separation method demonstrated that the original method substantially over-estimated the analyte concentration in many cases. The substitution of electrospray ionization (ESI) for atmospheric pressure chemical ionization (APCI) nearly eliminated the source instability of these metabolites, which would have mitigated their interference in the quantification of the analyte, even without chromatographic separation. These results 1) demonstrate the potential for thiol metabolite interferences during the quantification of small molecules in pharmacokinetic samples, and 2) underscore the need to carefully evaluate LC-MS/MS methods for molecules that can undergo metabolism to thiol adducts to ensure that they are not susceptible to such interferences during quantification.

  8. Land Covers Classification Based on Random Forest Method Using Features from Full-Waveform LIDAR Data

    NASA Astrophysics Data System (ADS)

    Ma, L.; Zhou, M.; Li, C.

    2017-09-01

    In this study, a random forest (RF) based land cover classification method is presented to predict the types of land cover in the Miyun area. The returned full waveforms, acquired by a LiteMapper 5600 airborne LiDAR system, were processed, including waveform filtering, waveform decomposition, and feature extraction. The commonly used features, namely distance, intensity, full width at half maximum (FWHM), skewness, and kurtosis, were extracted. These waveform features were used as attributes of the training data for generating the RF prediction model. The RF prediction model was applied to predict the land cover types in the Miyun area as trees, buildings, farmland, and ground. The classification results for these four land cover types were evaluated against ground-truth information derived from CCD image data of the same region. The RF classification results were compared with those of an SVM method and showed better performance: the RF classification accuracy reached 89.73%, with a kappa coefficient of 0.8631.
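
    The reported kappa of 0.8631 is chance-corrected agreement derived from the confusion matrix of predictions against ground truth. A self-contained sketch of Cohen's kappa (an illustration, not code from the paper):

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix (rows = truth,
    columns = prediction): observed agreement corrected for chance."""
    total = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(len(confusion))) / total
    # Chance agreement from the row and column marginals.
    expected = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))
    ) / total ** 2
    return (observed - expected) / (1 - expected)

# Perfect agreement on a balanced 2-class problem gives kappa = 1.0:
assert cohens_kappa([[5, 0], [0, 5]]) == 1.0
```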

  9. Interference studies with two hospital-grade and two home-grade glucose meters.

    PubMed

    Lyon, Martha E; Baskin, Leland B; Braakman, Sandy; Presti, Steven; Dubois, Jeffrey; Shirey, Terry

    2009-10-01

    Four glucose meters (Nova Biomedical [Waltham, MA] StatStrip [hospital grade], Roche Diagnostics [Indianapolis, IN] Accu-Chek Aviva [home grade], Abbott Diabetes Care [Alameda, CA] Precision FreeStyle Freedom [home grade], and LifeScan [Milpitas, CA] SureStep Flexx [hospital grade]) were evaluated for interferences and compared with the clinical laboratory plasma hexokinase reference method (Roche Hitachi 912 chemistry analyzer). These meters were chosen to reflect the continuum of care from hospital-grade to home-grade meters commonly seen in North America. Within-run precision was determined using a freshly prepared whole blood sample spiked with concentrated glucose to give three glucose concentrations. Day-to-day precision was evaluated using aqueous control materials supplied by each vendor. Common interferences, including hematocrit, maltose, and ascorbate, were tested alone and in combination with one another on each of the four glucose testing devices at three blood glucose concentrations. Within-run precision for all glucose meters was <5%, except for the FreeStyle (up to 7.6%). Between-day precision was <6% for all glucose meters. Ascorbate caused differences (percentage change from a sample without added interfering substances) of >5% with the pyrroloquinolinequinone (PQQ)-glucose dehydrogenase-based technologies (Aviva and FreeStyle) and the glucose oxidase-based Flexx meter. Maltose strongly affected the PQQ-glucose dehydrogenase-based meter systems. When combinations of interferences (ascorbate, maltose, and hematocrit mixtures) were tested, the extent of the interference was up to 193% (Aviva), 179% (FreeStyle), 25.1% (Flexx), and 5.9% (StatStrip). The interference was most pronounced at low glucose (3.9-4.4 mmol/L). All evaluated glucose meter systems demonstrated varying degrees of interference from hematocrit, ascorbate, and maltose mixtures. PQQ-glucose dehydrogenase-based technologies showed greater susceptibility than glucose oxidase-based systems. However, the modified glucose oxidase-based amperometric method (Nova StatStrip) was less affected than the glucose oxidase-based photometric method (LifeScan SureStep Flexx).

  10. Automated diagnosis of myositis from muscle ultrasound: Exploring the use of machine learning and deep learning methods.

    PubMed

    Burlina, Philippe; Billings, Seth; Joshi, Neil; Albayda, Jemima

    2017-01-01

    To evaluate the use of ultrasound coupled with machine learning (ML) and deep learning (DL) techniques for automated or semi-automated classification of myositis. Eighty subjects, comprising 19 with inclusion body myositis (IBM), 14 with polymyositis (PM), 14 with dermatomyositis (DM), and 33 normal (N) subjects, were included in this study, from whom 3214 muscle ultrasound images of 7 muscles (observed bilaterally) were acquired. We considered three classification problems: (A) normal vs. affected (DM, PM, IBM); (B) normal vs. IBM patients; and (C) IBM vs. other types of myositis (DM or PM). We studied the use of an automated DL method using deep convolutional neural networks (DL-DCNNs) for diagnostic classification and compared it with a semi-automated conventional ML method based on random forests (ML-RF) and "engineered" features. We used the known clinical diagnosis as the gold standard for evaluating the performance of muscle classification. The DL-DCNN method achieved accuracies ± standard deviation of 76.2% ± 3.1% for problem (A), 86.6% ± 2.4% for (B), and 74.8% ± 3.9% for (C), while the ML-RF method achieved accuracies of 72.3% ± 3.3% for problem (A), 84.3% ± 2.3% for (B), and 68.9% ± 2.5% for (C). This study demonstrates the application of machine learning methods for automatically or semi-automatically classifying inflammatory muscle disease using muscle ultrasound. Compared with the conventional random forest machine learning method used here, which has the drawback of requiring manual delineation of muscle/fat boundaries, DCNN-based classification by and large improved the accuracies in all classification problems while providing a fully automated approach to classification.

  11. Label-aligned Multi-task Feature Learning for Multimodal Classification of Alzheimer’s Disease and Mild Cognitive Impairment

    PubMed Central

    Zu, Chen; Jie, Biao; Liu, Mingxia; Chen, Songcan

    2015-01-01

    Multimodal classification methods using different modalities of imaging and non-imaging data have recently shown great advantages over traditional single-modality-based ones for diagnosis and prognosis of Alzheimer’s disease (AD), as well as its prodromal stage, i.e., mild cognitive impairment (MCI). However, to the best of our knowledge, most existing methods focus on mining the relationship across multiple modalities of the same subjects, while ignoring the potentially useful relationship across different subjects. Accordingly, in this paper, we propose a novel learning method for multimodal classification of AD/MCI, by fully exploring the relationships across both modalities and subjects. Specifically, our proposed method includes two sequential components, i.e., label-aligned multi-task feature selection and multimodal classification. In the first step, feature selection from multiple modalities is treated as a set of different learning tasks, and a group sparsity regularizer is imposed to jointly select a subset of relevant features. Furthermore, to utilize the discriminative information among labeled subjects, a new label-aligned regularization term is added into the objective function of standard multi-task feature selection, where label-alignment means that all multi-modality subjects with the same class labels should be closer in the new feature-reduced space. In the second step, a multi-kernel support vector machine (SVM) is adopted to fuse the selected features from multi-modality data for final classification. To validate our method, we perform experiments on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database using baseline MRI and FDG-PET imaging data. The experimental results demonstrate that our proposed method achieves better classification performance compared with several state-of-the-art methods for multimodal classification of AD/MCI. PMID:26572145

  12. Feature selection for the classification of traced neurons.

    PubMed

    López-Cabrera, José D; Lorenzo-Ginori, Juan V

    2018-06-01

    The great availability of computational tools to calculate the properties of traced neurons leads to the existence of many descriptors which allow the automated classification of neurons from these reconstructions. This situation creates the need to eliminate irrelevant features and to select the most appropriate among them, in order to improve the quality of the resulting classification. The dataset used contains a total of 318 traced neurons, classified by human experts into 192 GABAergic interneurons and 126 pyramidal cells. The features were extracted by means of the L-measure software, which is one of the most used computational tools in neuroinformatics to quantify traced neurons. We review some current feature selection techniques, such as filter, wrapper, embedded, and ensemble methods. The stability of the feature selection methods was measured. For the ensemble methods, several aggregation methods based on different metrics were applied to combine the subsets obtained during the feature selection process. The subsets obtained by applying feature selection methods were evaluated using supervised classifiers, among which Random Forest, C4.5, SVM, Naïve Bayes, Knn, Decision Table and the Logistic classifier were used as classification algorithms. Feature selection methods of the filter, embedded, wrapper and ensemble types were compared, and the subsets returned were tested in classification tasks with different classification algorithms. The L-measure features EucDistanceSD, PathDistanceSD, Branch_pathlengthAve, Branch_pathlengthSD and EucDistanceAve were present in more than 60% of the selected subsets, which provides evidence of their importance in the classification of these neurons. Copyright © 2018 Elsevier B.V. All rights reserved.
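
    The abstract states that the stability of the feature selection methods was measured but not which index was used; one common choice, shown here purely as an illustrative sketch with feature names taken from the abstract, is the average pairwise Jaccard similarity between the subsets selected in different runs:

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity between two selected feature subsets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def selection_stability(subsets):
    """Average pairwise Jaccard index over the subsets chosen in
    different runs/folds of a feature-selection method."""
    pairs = list(combinations(subsets, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

runs = [{"EucDistanceSD", "PathDistanceSD", "Branch_pathlengthAve"},
        {"EucDistanceSD", "PathDistanceSD", "EucDistanceAve"},
        {"EucDistanceSD", "Branch_pathlengthSD", "PathDistanceSD"}]
stability = selection_stability(runs)   # 0.5: each pair shares 2 of 4 features
```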

  13. Data Processing And Machine Learning Methods For Multi-Modal Operator State Classification Systems

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan A.

    2015-01-01

    This document is intended as an introduction to a set of common signal processing and machine learning methods that may be used in the software portion of a functional crew state monitoring system. It includes overviews of the theory of the methods involved as well as examples of implementation. Practical considerations are discussed for implementing modular, flexible, and scalable processing and classification software for a multi-modal, multi-channel monitoring system. Example source code is also given for all of the discussed processing and classification methods.

  14. Multi-Feature Classification of Multi-Sensor Satellite Imagery Based on Dual-Polarimetric Sentinel-1A, Landsat-8 OLI, and Hyperion Images for Urban Land-Cover Classification.

    PubMed

    Zhou, Tao; Li, Zhaofu; Pan, Jianjun

    2018-01-27

    This paper focuses on evaluating the ability and contribution of backscatter intensity, texture, coherence, and color features extracted from Sentinel-1A data for urban land cover classification and on comparing different multi-sensor land cover mapping methods to improve classification accuracy. Both Landsat-8 OLI and Hyperion images were also acquired, in combination with Sentinel-1A data, to explore the potential of different multi-sensor urban land cover mapping methods to improve classification accuracy. The classification was performed using a random forest (RF) method. The results showed that the optimal window size for the combination of all texture features was 9 × 9, and the optimal window size differed for each individual texture feature. Of the four feature types, the texture features contributed the most to the classification, followed by the coherence and backscatter intensity features; the color features had the least impact on the urban land cover classification. Satisfactory classification results can be obtained using only the combination of texture and coherence features, with an overall accuracy of up to 91.55% and a kappa coefficient of up to 0.8935. Among all combinations of Sentinel-1A-derived features, the combination of all four features gave the best classification result. Multi-sensor urban land cover mapping achieved higher classification accuracy. The combination of Sentinel-1A and Hyperion data achieved higher classification accuracy than the combination of Sentinel-1A and Landsat-8 OLI images, with an overall accuracy of up to 99.12% and a kappa coefficient of up to 0.9889. When Sentinel-1A data were added to Hyperion images, the overall accuracy and kappa coefficient increased by 4.01% and 0.0519, respectively.

  15. An approach for combining airborne LiDAR and high-resolution aerial color imagery using Gaussian processes

    NASA Astrophysics Data System (ADS)

    Liu, Yansong; Monteiro, Sildomar T.; Saber, Eli

    2015-10-01

    Changes in vegetation cover, building construction, road networks and traffic conditions caused by urban expansion affect the human habitat as well as the natural environment in rapidly developing cities. It is crucial to assess these changes and respond accordingly by identifying man-made and natural structures with accurate classification algorithms. With the increasing use of multi-sensor remote sensing systems, researchers are able to obtain a more complete description of the scene of interest. By utilizing multi-sensor data, the accuracy of classification algorithms can be improved. In this paper, we propose a method for combining 3D LiDAR point clouds and high-resolution color images to classify urban areas using Gaussian processes (GP). GP classification is a powerful non-parametric classification method that yields probabilistic classification results, making predictions in a way that addresses the uncertainty of the real world. In this paper, we attempt to identify man-made and natural objects in urban areas including buildings, roads, trees, grass, water and vehicles. LiDAR features are derived from the 3D point clouds, and the spatial and color features are extracted from RGB images. For classification, we use the Laplacian approximation for GP binary classification on the new combined feature space. Multiclass classification is implemented using a one-vs-all binary classification strategy. The results of applying support vector machine (SVM) and logistic regression (LR) classifiers are also provided for comparison. Our experiments show a clear improvement in classification results when the two sensors are combined instead of used separately. We also found that the GP approach handles the uncertainty in the classification result without compromising accuracy compared with the SVM, which is considered a state-of-the-art classification method.

  16. Comparison of Enzymatic Assay for HBA1C Measurement (Abbott Architect) With Capillary Electrophoresis (Sebia Minicap Flex Piercing Analyser).

    PubMed

    Tesija Kuna, Andrea; Dukic, Kristina; Nikolac Gabaj, Nora; Miler, Marijana; Vukasovic, Ines; Langer, Sanja; Simundic, Ana-Maria; Vrkic, Nada

    2018-03-08

    To compare the analytical performances of the enzymatic method (EM) and capillary electrophoresis (CE) for hemoglobin A1c (HbA1c) measurement. Imprecision, carryover, stability, linearity, method comparison, and interferences were evaluated for HbA1c via EM (Abbott Laboratories, Inc) and CE (Sebia). Both methods showed overall within-laboratory imprecision of less than 3% in International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) units (<2% in National Glycohemoglobin Standardization Program [NGSP] units). Carryover effects were within acceptable criteria. The linearity of both methods proved to be excellent (R2 = 0.999). Significant proportional and constant differences were found for EM compared with CE, but these were not clinically relevant (<5 mmol/mol; NGSP <0.5%). At the clinically relevant HbA1c concentration, the stability observed with both methods was acceptable (bias, <3%). Triglyceride levels of 8.11 mmol/L or greater were shown to interfere with EM, and fetal hemoglobin (HbF) of 10.6% or greater with CE. The enzymatic method proved comparable to the CE method in analytical performance; however, certain interferences can influence the measurements of each method.

  17. Individual subject classification for Alzheimer's disease based on incremental learning using a spatial frequency representation of cortical thickness data.

    PubMed

    Cho, Youngsang; Seong, Joon-Kyung; Jeong, Yong; Shin, Sung Yong

    2012-02-01

    Patterns of brain atrophy measured by magnetic resonance structural imaging have been utilized as significant biomarkers for diagnosis of Alzheimer's disease (AD). However, brain atrophy is variable across patients and is non-specific for AD in general. Thus, automatic methods for AD classification require a large amount of structural data due to the complex and variable patterns of brain atrophy. In this paper, we propose an incremental method for AD classification using cortical thickness data. We represent the cortical thickness data of a subject in terms of their spatial frequency components, employing the manifold harmonic transform. The basis functions for this transform are obtained from the eigenfunctions of the Laplace-Beltrami operator, which depend only on the geometry of a cortical surface and not on the cortical thickness defined on it. This facilitates individual subject classification based on incremental learning. In general, methods based on region-wise features poorly reflect the detailed spatial variation of cortical thickness, and those based on vertex-wise features are sensitive to noise. Adopting a vertex-wise cortical thickness representation, our method can still achieve robustness to noise by filtering out high frequency components of the cortical thickness data while reflecting their spatial variation. This compromise leads to high accuracy in AD classification. We utilized MR volumes provided by the Alzheimer's Disease Neuroimaging Initiative (ADNI) to validate the performance of the method. Our method discriminated AD patients from Healthy Control (HC) subjects with 82% sensitivity and 93% specificity. It also discriminated Mild Cognitive Impairment (MCI) patients who converted to AD within 18 months from non-converted MCI subjects with 63% sensitivity and 76% specificity. Moreover, it showed that the entorhinal cortex was the most discriminative region for classification, which is consistent with previous pathological findings. 
In comparison with other classification methods, our method demonstrated high classification performance in both categories, which supports the discriminative power of our method in both AD diagnosis and AD prediction. Copyright © 2011 Elsevier Inc. All rights reserved.

  18. A model-based test for treatment effects with probabilistic classifications.

    PubMed

    Cavagnaro, Daniel R; Davis-Stober, Clintin P

    2018-05-21

    Within modern psychology, computational and statistical models play an important role in describing a wide variety of human behavior. Model selection analyses are typically used to classify individuals according to the model(s) that best describe their behavior. These classifications are inherently probabilistic, which presents challenges for performing group-level analyses, such as quantifying the effect of an experimental manipulation. We answer this challenge by presenting a method for quantifying treatment effects in terms of distributional changes in model-based (i.e., probabilistic) classifications across treatment conditions. The method uses hierarchical Bayesian mixture modeling to incorporate classification uncertainty at the individual level into the test for a treatment effect at the group level. We illustrate the method with several worked examples, including a reanalysis of the data from Kellen, Mata, and Davis-Stober (2017), and analyze its performance more generally through simulation studies. Our simulations show that the method is both more powerful and less prone to type-1 errors than Fisher's exact test when classifications are uncertain. In the special case where classifications are deterministic, we find a near-perfect power-law relationship between the Bayes factor, derived from our method, and the p value obtained from Fisher's exact test. We provide code in an online supplement that allows researchers to apply the method to their own data. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
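
    Fisher's exact test, the baseline the method is compared against, can be computed directly from the hypergeometric distribution. A minimal two-sided version for a 2x2 table (an illustrative sketch, not the authors' supplementary code):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test p-value for the 2x2 table
    [[a, b], [c, d]]: sum the hypergeometric probabilities of all tables
    with the same margins that are no more likely than the observed one."""
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    def p(x):                          # P(first cell = x) under the null
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)
    p_obs = p(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs + 1e-12)
```

    For the classic [[3, 1], [1, 3]] table this gives p = 34/70 ≈ 0.486, matching standard implementations.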

  19. Acoustic–Seismic Mixed Feature Extraction Based on Wavelet Transform for Vehicle Classification in Wireless Sensor Networks.

    PubMed

    Zhang, Heng; Pan, Zhongming; Zhang, Wenna

    2018-06-07

    An acoustic–seismic mixed feature extraction method based on the wavelet coefficient energy ratio (WCER) of the target signal is proposed in this study for classifying vehicle targets in wireless sensor networks. The signal was decomposed into a set of wavelet coefficients using the à trous algorithm, which is a concise method used to implement the wavelet transform of a discrete signal sequence. After the wavelet coefficients of the target acoustic and seismic signals were obtained, the energy ratio of each layer coefficient was calculated as the feature vector of the target signals. Subsequently, the acoustic and seismic features were merged into an acoustic–seismic mixed feature to improve the target classification accuracy after the acoustic and seismic WCER features of the target signal were simplified using the hierarchical clustering method. We selected the support vector machine method for classification and utilized the data acquired from a real-world experiment to validate the proposed method. The calculated results show that the WCER feature extraction method can effectively extract the target features from target signals. Feature simplification can reduce the time consumption of feature extraction and classification, with no effect on the target classification accuracy. The use of acoustic–seismic mixed features effectively improved target classification accuracy by approximately 12% compared with either acoustic signals or seismic signals alone.
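
    The WCER feature can be sketched as follows, assuming a plain two-tap averaging filter for the à trous decomposition and three layers (the paper's exact filter and layer count are not given here):

```python
def a_trous_energy_ratios(signal, levels=3):
    """Wavelet-coefficient energy ratio (WCER) feature sketch.

    Decompose `signal` with the à trous (undecimated) scheme: at level j
    each sample is averaged with its neighbour 2**j positions away, and
    the detail layer is the difference from the previous approximation.
    Returns each detail layer's share of the total detail energy."""
    approx = list(signal)
    details = []
    for j in range(levels):
        step = 2 ** j
        # Smoothed approximation (edges handled by clamping the index).
        smooth = [0.5 * (approx[n] + approx[min(n + step, len(approx) - 1)])
                  for n in range(len(approx))]
        details.append([a - s for a, s in zip(approx, smooth)])
        approx = smooth
    energies = [sum(c * c for c in d) for d in details]
    total = sum(energies) or 1.0
    return [e / total for e in energies]
```

    Each returned value is one layer's share of the total energy; these shares form the per-channel feature vector, and the acoustic and seismic vectors are then merged into the mixed feature.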

  20. Hepatic fat quantification: a prospective comparison of magnetic resonance spectroscopy and analysis methods for chemical-shift gradient echo magnetic resonance imaging with histologic assessment as the reference standard.

    PubMed

    Kang, Bo-Kyeong; Yu, Eun Sil; Lee, Seung Soo; Lee, Youngjoo; Kim, Namkug; Sirlin, Claude B; Cho, Eun Yoon; Yeom, Suk Keu; Byun, Jae Ho; Park, Seong Ho; Lee, Moon-Gyu

    2012-06-01

    The aims of this study were to assess the confounding effects of hepatic iron deposition, inflammation, and fibrosis on hepatic steatosis (HS) evaluation by magnetic resonance imaging (MRI) and magnetic resonance spectroscopy (MRS) and to assess the accuracies of MRI and MRS for HS evaluation, using histology as the reference standard. In this institutional review board-approved prospective study, 56 patients gave informed consents and underwent chemical-shift MRI and MRS of the liver on a 1.5-T magnetic resonance scanner. To estimate MRI fat fraction (FF), 4 analysis methods were used (dual-echo, triple-echo, multiecho, and multi-interference), and MRS FF was calculated with T2 correction. Degrees of HS, iron deposition, inflammation, and fibrosis were analyzed in liver resection (n = 37) and biopsy (n = 19) specimens. The confounding effects of histology on fat quantification were assessed by multiple linear regression analysis. Using the histologic degree of HS as the reference standard, the accuracies of each method in estimating HS and diagnosing an HS of 5% or greater were determined by linear regression and receiver operating characteristic analyses. Iron deposition significantly confounded estimations of FF by the dual-echo (P < 0.001) and triple-echo (P = 0.033) methods, whereas no histologic feature confounded the multiecho and multi-interference methods or MRS. The MRS (r = 0.95) showed the strongest correlation with histologic degree of HS, followed by the multiecho (r = 0.92), multi-interference (r = 0.91), triple-echo (r = 0.90), and dual-echo (r = 0.85) methods. For diagnosing HS, the areas under the curve tended to be higher for MRS (0.96) and the multiecho (0.95), multi-interference (0.95), and triple-echo (0.95) methods than for the dual-echo method (0.88) (P ≥ 0.13). The multiecho and multi-interference MRI methods and MRS can accurately quantify hepatic fat, with coexisting histologic abnormalities having no confounding effects.
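
    The dual-echo analysis mentioned above conventionally estimates the fat fraction from in-phase and opposed-phase magnitude signals. A minimal sketch of that standard formula (illustrative, not the study's analysis code):

```python
def dual_echo_fat_fraction(in_phase, opposed_phase):
    """Magnitude-based dual-echo fat fraction estimate:
    FF = (S_IP - S_OP) / (2 * S_IP).

    This formula ignores T2* decay between the two echoes, which is why
    iron deposition (shortened T2*) biases the dual-echo method, as the
    study observed."""
    return (in_phase - opposed_phase) / (2.0 * in_phase)

# In-phase signal 100, opposed-phase 80 -> fat fraction 0.10 (10%):
ff = dual_echo_fat_fraction(100.0, 80.0)
```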

  1. Effective and extensible feature extraction method using genetic algorithm-based frequency-domain feature search for epileptic EEG multiclassification

    PubMed Central

    Wen, Tingxi; Zhang, Zhongnan

    2017-01-01

    In this paper, a genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of interclass distance to intraclass distance. Moreover, the proposed feature search method can search for features of instantaneous frequency in a signal after Hilbert transformation. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classical classifiers (i.e., k-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and Naïve Bayes) achieve satisfactory classification accuracies by using the features generated by the GAFDS method and the optimized feature selection. The accuracies for 2-classification and 3-classification problems may reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in the extraction of effective features for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy. PMID:28489789

  2. Classification of Dynamical Diffusion States in Single Molecule Tracking Microscopy

    PubMed Central

    Bosch, Peter J.; Kanger, Johannes S.; Subramaniam, Vinod

    2014-01-01

    Single molecule tracking of membrane proteins by fluorescence microscopy is a promising method to investigate dynamic processes in live cells. Translating the trajectories of proteins to biological implications, such as protein interactions, requires the classification of protein motion within the trajectories. Spatial information of protein motion may reveal where the protein interacts with cellular structures, because binding of proteins to such structures often alters their diffusion speed. For dynamic diffusion systems, we provide an analytical framework to determine in which diffusion state a molecule is residing during the course of its trajectory. We compare different methods for the quantification of motion to utilize this framework for the classification of two diffusion states (two populations with different diffusion speed). We found that a gyration quantification method and a Bayesian statistics-based method are the most accurate in diffusion-state classification for realistic experimentally obtained datasets, of which the gyration method is much less computationally demanding. After classification of the diffusion, the lifetime of the states can be determined, and images of the diffusion states can be reconstructed at high resolution. Simulations validate these applications. We apply the classification and its applications to experimental data to demonstrate the potential of this approach to obtain further insights into the dynamics of cell membrane proteins. PMID:25099798
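
    The gyration quantification the authors found accurate and computationally cheap is based on the radius of gyration of the positions within a trajectory window. A minimal sketch (our illustration, not the paper's code):

```python
def radius_of_gyration(trajectory):
    """Radius of gyration of a 2D single-molecule trajectory: the RMS
    distance of the positions from their centroid. Evaluated over a
    sliding window, a small Rg flags slow (e.g. bound) diffusion and a
    large Rg flags fast diffusion."""
    n = len(trajectory)
    cx = sum(x for x, _ in trajectory) / n
    cy = sum(y for _, y in trajectory) / n
    return (sum((x - cx) ** 2 + (y - cy) ** 2
                for x, y in trajectory) / n) ** 0.5

# Four corners of a unit square: centroid (0.5, 0.5), Rg = sqrt(0.5):
rg = radius_of_gyration([(0, 0), (1, 0), (1, 1), (0, 1)])
```

    Thresholding this quantity per window is one simple way to assign each part of a trajectory to one of the two diffusion states.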

  3. Effective and extensible feature extraction method using genetic algorithm-based frequency-domain feature search for epileptic EEG multiclassification.

    PubMed

    Wen, Tingxi; Zhang, Zhongnan

    2017-05-01

    In this paper, a genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of interclass distance to intraclass distance. Moreover, the proposed feature search method can search for features of instantaneous frequency in a signal after Hilbert transformation. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classical classifiers (i.e., k-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and Naïve Bayes) achieve satisfactory classification accuracies by using the features generated by the GAFDS method and the optimized feature selection. The accuracies for 2-classification and 3-classification problems may reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in the extraction of effective features for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy.

  4. A new high-resolution electromagnetic method for subsurface imaging

    NASA Astrophysics Data System (ADS)

    Feng, Wanjie

    For most electromagnetic (EM) geophysical systems, the contamination of secondary fields by primary fields ultimately limits the capability of controlled-source EM methods. Null coupling techniques were proposed to solve this problem. However, small orientation errors in null coupling systems greatly restrict their applications. Another problem encountered by most EM systems is surface interference and geologic noise, which sometimes make a geophysical survey impossible to carry out. To solve these problems, the alternating target antenna coupling (ATAC) method was introduced, which greatly removed the influence of the primary field and reduced surface interference. But this system has limitations on the maximum transmitter moment that can be used. The differential target antenna coupling (DTAC) method was proposed to allow much larger transmitter moments while maintaining the advantages of the ATAC method. In this dissertation, first, the theoretical DTAC calculations were derived mathematically using Born and Wolf's complex magnetic vector. 1D layered and 2D blocked earth models were used to demonstrate that the DTAC method has no response to 1D and 2D structures. Analytical studies of the plate model influenced by conductive and resistive backgrounds were presented to explain the physical phenomenology behind the DTAC method, namely that the magnetic fields of the subsurface targets must be frequency dependent. Then, the advantages of the DTAC method, e.g., high resolution, reduced geologic noise, and insensitivity to surface interference, were analyzed using surface and subsurface numerical examples in the EMGIMA software. Next, these theoretical advantages were verified by designing and developing a low-power (moment of 50 Am2) vertical-array DTAC system and testing it on controlled targets and scaled target coils. 
Finally, a high-power (moment of about 6800 Am2) vertical-array DTAC system was designed, developed and tested on controlled buried targets and surface interference to illustrate that the DTAC system remains insensitive to surface interference even with a high-power transmitter, and achieves higher resolution with the large-moment transmitter. From the theoretical and practical analyses and tests, several characteristics of the DTAC method were found: (1) The DTAC method can null out the effect of 1D layered and 2D structures, because the magnetic fields are orientation independent, which leads to no difference among the null vector directions. This characteristic allows for the measurement of smaller subsurface targets; (2) The DTAC method is insensitive to orientation errors. It is a robust EM null coupling method. Even large orientation errors do not affect the measured target responses when a reference frequency and one or more data frequencies are used; (3) The vertical-array DTAC method is effective in reducing geologic noise and is insensitive to surface interference, e.g., fences, vehicles, power lines and buildings; (4) The DTAC method is a high-resolution EM sounding method. It can distinguish the depth and orientation of subsurface targets; (5) The vertical-array DTAC method can be adapted to a variety of rapidly moving survey applications. The transmitter moment can be scaled for effective study of near-surface targets (civil engineering, water resources, and environmental restoration) as well as deep targets (mining and other natural-resource exploration).
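
    The frequency-dependence requirement above can be illustrated with a toy calculation (a hypothetical sketch, not the dissertation's code): a confined conductive target modeled as a simple loop with time constant tau has the frequency-dependent normalized response M(f) = ia/(1 + ia), a = 2*pi*f*tau, while a frequency-independent background contribution cancels when responses at a reference frequency and a data frequency are differenced.

```python
import numpy as np

# Hypothetical illustration: a loop target's normalized secondary-field
# response is frequency dependent, so it survives differencing between a
# reference and a data frequency; a frequency-independent background does not.
def loop_response(f, tau):
    a = 2 * np.pi * f * tau
    return 1j * a / (1 + 1j * a)

f_ref, f_data = 100.0, 1000.0    # reference and data frequencies (Hz), assumed
tau = 1e-3                       # target time constant (s), assumed

target_dtac = loop_response(f_data, tau) - loop_response(f_ref, tau)
background_dtac = 1.0 - 1.0      # frequency-independent response cancels exactly

print(abs(target_dtac))          # nonzero: the target is detected
print(background_dtac)           # 0.0: the background is nulled
```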

  5. Discovery, identification and mitigation of isobaric sulfate metabolite interference to a phosphate prodrug in LC-MS/MS bioanalysis: Critical role of method development in ensuring assay quality.

    PubMed

    Yuan, Long; Ji, Qin C

    2018-06-05

    Metabolite interferences represent a major risk of inaccurate quantification when using LC-MS/MS bioanalytical assays. During LC-MS/MS bioanalysis of BMS-919194, a phosphate ester prodrug, in plasma samples from rat and monkey GLP toxicology studies, an unknown peak was detected in the MRM channel of the prodrug. This peak was not observed in previous discovery toxicology studies, in which a fast gradient LC-MS/MS method was used. We found that this unknown peak co-eluted with the prodrug peak when the discovery method was used, causing significant overestimation of the exposure of the prodrug in the discovery toxicology studies. To understand the nature of this interfering peak and its impact on the bioanalytical assay, we further investigated its formation and identity. The interfering compound and the prodrug were found to be isobaric and to have the same major product ions in electrospray ionization positive mode, and thus could not be differentiated using a triple quadrupole mass spectrometer. By using high-resolution mass spectrometry (HRMS), the interfering metabolite was identified as an isobaric sulfate metabolite of BMS-919194. To the best of our knowledge, this is the first report of a phosphate prodrug being metabolized in vivo to an isobaric sulfate metabolite that caused significant interference with the analysis of the prodrug. This work demonstrates the interference risk posed by isobaric sulfate metabolites to the bioanalysis of phosphate prodrugs in real samples. It is critical to evaluate and mitigate potential metabolite interferences during method development to minimize the related bioanalytical risks and ensure assay quality. Our work also showed the unique advantages of HRMS in identifying potential metabolite interference during LC-MS/MS bioanalysis. Copyright © 2018 Elsevier B.V. All rights reserved.
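
    The isobaric pair can be made concrete with a back-of-the-envelope exact-mass check (our own illustration, not taken from the paper): a phosphate group (HPO3) and a sulfate group (SO3) both add a nominal 80 Da, so the prodrug and the sulfate metabolite coincide on a unit-resolution triple quadrupole, yet their monoisotopic masses differ by roughly 0.0095 Da, which HRMS can resolve.

```python
# Monoisotopic atomic masses (Da)
H, O, P, S = 1.007825, 15.994915, 30.973762, 31.972071

m_phosphate = H + P + 3 * O   # HPO3 group added by the phosphate prodrug
m_sulfate = S + 3 * O         # SO3 group added by the sulfate metabolite
delta = m_phosphate - m_sulfate

print(round(m_phosphate, 5))  # 79.96633
print(round(m_sulfate, 5))    # 79.95682
print(round(delta, 4))        # 0.0095 Da: invisible at unit resolution, clear by HRMS
```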

  6. Reduction of ground noise in the transmitter crowbar instrumentation system by the use of baluns and other noise rejection methods

    NASA Technical Reports Server (NTRS)

    Daeges, J.; Bhanji, A.

    1987-01-01

    Electrical noise interference in the transmitter crowbar monitoring instrumentation system creates false sensing of crowbar faults during a crowbar firing. One predominant source of noise interference is the conduction of currents in the instrumentation cable shields. Since these circulating ground noise currents produce noise that is similar to the crowbar fault sensing signals, such noise interference reduces the ability to determine true crowbar faults.

  7. Parameter Estimation and Modeling of Interference Cancellation Technique for Multiple Signal Recovery

    DTIC Science & Technology

    2013-06-01

    Miridakis and D. D. Vergados, “A survey on the successive interference cancellation performance for single-antenna and multiple-antenna OFDM ...in this thesis. Follow on work that focuses on SIC for multi-carrier and MIMO systems would be most beneficial. Other estimation methods exist that...antenna and multiple-antenna OFDM systems,” IEEE Comms. Surveys & Tutorials, vol.15, no. 1, pp. 312–335, 2013. [2] J. G. Andrews, “Interference

  8. Improving medical diagnosis reliability using Boosted C5.0 decision tree empowered by Particle Swarm Optimization.

    PubMed

    Pashaei, Elnaz; Ozen, Mustafa; Aydin, Nizamettin

    2015-08-01

    Improving the accuracy of supervised classification algorithms in biomedical applications is an active area of research. In this study, we improve the performance of the Particle Swarm Optimization (PSO) combined with C4.5 decision tree (PSO+C4.5) classifier by applying a Boosted C5.0 decision tree as the fitness function. To evaluate the effectiveness of our proposed method, it was implemented on 1 microarray dataset and 5 different medical data sets obtained from the UCI machine learning databases. Moreover, the results of the PSO + Boosted C5.0 implementation are compared to eight well-known benchmark classification methods (PSO+C4.5, support vector machine with a Radial Basis Function kernel, Classification And Regression Tree (CART), C4.5 decision tree, C5.0 decision tree, Boosted C5.0 decision tree, Naive Bayes and Weighted K-Nearest Neighbor). A repeated five-fold cross-validation method was used to assess the performance of the classifiers. Experimental results show that our proposed method not only improves on the performance of PSO+C4.5 but also obtains higher classification accuracy than the other classification methods.
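
    A minimal sketch of the idea, under stated assumptions: scikit-learn has no C5.0, so an AdaBoost ensemble of trees stands in for "Boosted C5.0" as the fitness function, and the swarm sizes and coefficients below are arbitrary choices, not the authors' settings.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
n_feat, n_particles, n_iter = X.shape[1], 6, 3

def fitness(mask):
    # Fitness of a feature subset = cross-validated accuracy of a boosted tree
    if not mask.any():                       # empty feature set scores zero
        return 0.0
    clf = AdaBoostClassifier(n_estimators=15, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pos = rng.random((n_particles, n_feat))      # continuous positions in [0, 1]
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p > 0.5) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    fit = np.array([fitness(p > 0.5) for p in pos])
    better = fit > pbest_fit
    pbest[better], pbest_fit[better] = pos[better], fit[better]
    gbest = pbest[pbest_fit.argmax()].copy()

print("features selected:", int((gbest > 0.5).sum()),
      "cv accuracy:", round(float(pbest_fit.max()), 3))
```

    Thresholding each continuous coordinate at 0.5 turns the particle into a binary feature mask, which is the usual trick for applying continuous PSO to subset selection.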

  9. Classification of complex networks based on similarity of topological network features

    NASA Astrophysics Data System (ADS)

    Attar, Niousha; Aliakbary, Sadegh

    2017-09-01

    Over the past few decades, networks have been widely used to model real-world phenomena. Real-world networks exhibit nontrivial topological characteristics, and therefore many network models have been proposed in the literature for generating graphs that are similar to real networks. Network models reproduce nontrivial properties such as long-tail degree distributions or high clustering coefficients. In this context, we encounter the problem of selecting the network model that best fits a given real-world network. The need for a model selection method leads to the network classification problem, in which a target network is classified into one of the candidate network models. In this paper, we propose a novel network classification method which is independent of the network size and employs an alignment-free metric of network comparison. The proposed method is based on supervised machine learning algorithms and utilizes the topological similarities of networks for the classification task. The experiments show that the proposed method outperforms state-of-the-art methods with respect to classification accuracy, time efficiency, and robustness to noise.
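
    A hedged sketch of this style of model classification (not the authors' implementation): summarize each graph by a few size-independent topological features and train a supervised classifier to recognize which generative model produced it.

```python
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def topo_features(G):
    # Size-independent summary of a graph's topology
    degs = np.array([d for _, d in G.degree()], dtype=float)
    return [nx.average_clustering(G),
            nx.density(G),
            degs.std() / degs.mean(),                    # degree heterogeneity
            nx.degree_assortativity_coefficient(G)]

X, y = [], []
for seed in range(30):                                   # 30 graphs per model
    X.append(topo_features(nx.erdos_renyi_graph(200, 0.05, seed=seed))); y.append(0)
    X.append(topo_features(nx.barabasi_albert_graph(200, 5, seed=seed))); y.append(1)

clf = RandomForestClassifier(random_state=0).fit(X[:50], y[:50])
acc = clf.score(X[50:], y[50:])                          # 10 held-out graphs
print("held-out accuracy:", acc)
```

    Because the features are ratios and coefficients rather than raw counts, the same classifier can in principle be applied to target networks of a different size than the training graphs, which is the point of the size-independence claim.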

  10. Photometric brown-dwarf classification. I. A method to identify and accurately classify large samples of brown dwarfs without spectroscopy

    NASA Astrophysics Data System (ADS)

    Skrzypek, N.; Warren, S. J.; Faherty, J. K.; Mortlock, D. J.; Burgasser, A. J.; Hewett, P. C.

    2015-02-01

    Aims: We present a method, named photo-type, to identify and accurately classify L and T dwarfs onto the standard spectral classification system using photometry alone. This enables the creation of large and deep homogeneous samples of these objects efficiently, without the need for spectroscopy. Methods: We created a catalogue of point sources with photometry in 8 bands, ranging from 0.75 to 4.6 μm, selected from an area of 3344 deg2, by combining SDSS, UKIDSS LAS, and WISE data. Sources with 13.0 ≤ J ≤ 17.5 were then classified by comparison against template colours of quasars, stars, and brown dwarfs. The L and T templates, spectral types L0 to T8, were created by identifying previously known sources with spectroscopic classifications, and fitting polynomial relations between colour and spectral type. Results: Of the 192 known L and T dwarfs with reliable photometry in the surveyed area and magnitude range, 189 are recovered by our selection and classification method. We have quantified the accuracy of the classification method both externally, with spectroscopy, and internally, by creating synthetic catalogues and accounting for the uncertainties. We find that, brighter than J = 17.5, photo-type classifications are accurate to one spectral sub-type, and are therefore competitive with spectroscopic classifications. The resultant catalogue of 1157 L and T dwarfs will be presented in a companion paper.
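
    Template classification of this kind reduces to chi-square minimization over classes. The sketch below is a toy version: the template colours, class names, and errors are invented placeholders, not the paper's calibrated L0-T8 relations.

```python
import numpy as np

# Hypothetical mean colours per class (placeholders, not the paper's templates)
templates = {
    "L0": np.array([1.2, 1.0, 0.5]),
    "L5": np.array([1.8, 1.4, 0.8]),
    "T5": np.array([0.3, 2.1, 1.9]),
}

def photo_type(colours, errors):
    # Pick the template minimizing chi-square over the measured colours
    chi2 = {t: np.sum(((colours - c) / errors) ** 2) for t, c in templates.items()}
    return min(chi2, key=chi2.get)

print(photo_type(np.array([1.7, 1.5, 0.9]), np.array([0.1, 0.1, 0.1])))  # → L5
```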

  11. The process and utility of classification and regression tree methodology in nursing research

    PubMed Central

    Kuhn, Lisa; Page, Karen; Ward, John; Worrall-Carter, Linda

    2014-01-01

    Aim This paper presents a discussion of classification and regression tree analysis and its utility in nursing research. Background Classification and regression tree analysis is an exploratory research method used to illustrate associations between variables not suited to traditional regression analysis. Complex interactions are demonstrated between covariates and variables of interest in inverted tree diagrams. Design Discussion paper. Data sources English language literature was sourced from eBooks, Medline Complete and CINAHL Plus databases, Google and Google Scholar, hard copy research texts and retrieved reference lists for terms including classification and regression tree* and derivatives and recursive partitioning from 1984–2013. Discussion Classification and regression tree analysis is an important method used to identify previously unknown patterns amongst data. Whilst there are several reasons to embrace this method as a means of exploratory quantitative research, issues regarding quality of data as well as the usefulness and validity of the findings should be considered. Implications for Nursing Research Classification and regression tree analysis is a valuable tool to guide nurses to reduce gaps in the application of evidence to practice. With the ever-expanding availability of data, it is important that nurses understand the utility and limitations of the research method. Conclusion Classification and regression tree analysis is an easily interpreted method for modelling interactions between health-related variables that would otherwise remain obscured. Knowledge is presented graphically, providing insightful understanding of complex and hierarchical relationships in an accessible and useful way to nursing and other health professions. PMID:24237048
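
    The inverted tree diagrams described above are easy to reproduce with any CART implementation. The example below is illustrative only (a public clinical dataset and scikit-learn's CART, not the paper's data or software).

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)

# A shallow CART: each split is a covariate interaction readable off the tree
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

print(export_text(tree, feature_names=list(data.feature_names)))
acc = tree.score(X_te, y_te)
print("held-out accuracy:", round(acc, 2))
```

    Capping the depth keeps the diagram interpretable, which mirrors the paper's point that the method's value lies in making hierarchical relationships visible rather than in raw predictive power.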

  12. The process and utility of classification and regression tree methodology in nursing research.

    PubMed

    Kuhn, Lisa; Page, Karen; Ward, John; Worrall-Carter, Linda

    2014-06-01

    This paper presents a discussion of classification and regression tree analysis and its utility in nursing research. Classification and regression tree analysis is an exploratory research method used to illustrate associations between variables not suited to traditional regression analysis. Complex interactions are demonstrated between covariates and variables of interest in inverted tree diagrams. Discussion paper. English language literature was sourced from eBooks, Medline Complete and CINAHL Plus databases, Google and Google Scholar, hard copy research texts and retrieved reference lists for terms including classification and regression tree* and derivatives and recursive partitioning from 1984-2013. Classification and regression tree analysis is an important method used to identify previously unknown patterns amongst data. Whilst there are several reasons to embrace this method as a means of exploratory quantitative research, issues regarding quality of data as well as the usefulness and validity of the findings should be considered. Classification and regression tree analysis is a valuable tool to guide nurses to reduce gaps in the application of evidence to practice. With the ever-expanding availability of data, it is important that nurses understand the utility and limitations of the research method. Classification and regression tree analysis is an easily interpreted method for modelling interactions between health-related variables that would otherwise remain obscured. Knowledge is presented graphically, providing insightful understanding of complex and hierarchical relationships in an accessible and useful way to nursing and other health professions. © 2013 The Authors. Journal of Advanced Nursing Published by John Wiley & Sons Ltd.

  13. Automated classification of Acid Rock Drainage potential from Corescan drill core imagery

    NASA Astrophysics Data System (ADS)

    Cracknell, M. J.; Jackson, L.; Parbhakar-Fox, A.; Savinova, K.

    2017-12-01

    Classification of the acid forming potential of waste rock is important for managing environmental hazards associated with mining operations. Current methods for the classification of acid rock drainage (ARD) potential usually involve labour intensive and subjective assessment of drill core and/or hand specimens. Manual methods are subject to operator bias, human error and the amount of material that can be assessed within a given time frame is limited. The automated classification of ARD potential documented here is based on the ARD Index developed by Parbhakar-Fox et al. (2011). This ARD Index involves the combination of five indicators: A - sulphide content; B - sulphide alteration; C - sulphide morphology; D - primary neutraliser content; and E - sulphide mineral association. Several components of the ARD Index require accurate identification of sulphide minerals. This is achieved by classifying Corescan Red-Green-Blue true colour images into the presence or absence of sulphide minerals using supervised classification. Subsequently, sulphide classification images are processed and combined with Corescan SWIR-based mineral classifications to obtain information on sulphide content, indices representing sulphide textures (disseminated versus massive and degree of veining), and spatially associated minerals. This information is combined to calculate ARD Index indicator values that feed into the classification of ARD potential. Automated ARD potential classifications of drill core samples associated with a porphyry Cu-Au deposit are compared to manually derived classifications and those obtained by standard static geochemical testing and X-ray diffractometry analyses. Results indicate a high degree of similarity between automated and manual ARD potential classifications. Major differences between approaches are observed in sulphide and neutraliser mineral percentages, likely due to the subjective nature of manual estimates of mineral content. 
The automated approach presented here for the classification of ARD potential offers rapid, repeatable and accurate outcomes comparable to manually derived classifications. Methods for automated ARD classifications from digital drill core data represent a step-change for geoenvironmental management practices in the mining industry.
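
    The indicator-combination step can be sketched as follows. The scoring scale and the thresholds here are invented for illustration; they are not the Parbhakar-Fox et al. (2011) values.

```python
# Each indicator (A-E above) is assumed pre-scored 0 (benign) .. 3 (acid forming)
def ard_index(sulphide_content, sulphide_alteration, sulphide_morphology,
              neutraliser_content, sulphide_association):
    return (sulphide_content + sulphide_alteration + sulphide_morphology
            + neutraliser_content + sulphide_association)

def ard_potential(score):
    # Hypothetical cutoffs mapping the combined score to an ARD class
    if score <= 4:
        return "low"
    if score <= 9:
        return "moderate"
    return "high"

print(ard_potential(ard_index(3, 2, 2, 3, 2)))  # → high
```

    In the automated pipeline described above, the five inputs would come from the image-derived sulphide and neutraliser classifications rather than from manual logging.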

  14. Quantum Ensemble Classification: A Sampling-Based Learning Control Approach.

    PubMed

    Chen, Chunlin; Dong, Daoyi; Qi, Bo; Petersen, Ian R; Rabitz, Herschel

    2017-06-01

    Quantum ensemble classification (QEC) has significant applications in discrimination of atoms (or molecules), separation of isotopes, and quantum information extraction. However, quantum mechanics forbids deterministic discrimination among nonorthogonal states. The classification of inhomogeneous quantum ensembles is very challenging, since there exist variations in the parameters characterizing the members within different classes. In this paper, we recast QEC as a supervised quantum learning problem. A systematic classification methodology is presented by using a sampling-based learning control (SLC) approach for quantum discrimination. The classification task is accomplished via simultaneously steering members belonging to different classes to their corresponding target states (e.g., mutually orthogonal states). First, a new discrimination method is proposed for two similar quantum systems. Then, an SLC method is presented for QEC. Numerical results demonstrate the effectiveness of the proposed approach for the binary classification of two-level quantum ensembles and the multiclass classification of multilevel quantum ensembles.

  15. Criteria for mitral regurgitation classification were inadequate for dilated cardiomyopathy.

    PubMed

    Mancuso, Frederico José Neves; Moisés, Valdir Ambrosio; Almeida, Dirceu Rodrigues; Oliveira, Wercules Antonio; Poyares, Dalva; Brito, Flavio Souza; Paola, Angelo Amato Vincenzo de; Carvalho, Antonio Carlos Camargo; Campos, Orlando

    2013-11-01

    Mitral regurgitation (MR) is common in patients with dilated cardiomyopathy (DCM). It is unknown whether the criteria for MR classification are inadequate for patients with DCM. We aimed to evaluate the agreement among the four most common echocardiographic methods for MR classification. Ninety patients with DCM were included. Functional MR was classified using four echocardiographic methods: color flow jet area (JA), vena contracta (VC), effective regurgitant orifice area (ERO) and regurgitant volume (RV). MR was classified as mild, moderate or important according to the American Society of Echocardiography criteria and by dividing the values into terciles. The Kappa test was used to evaluate whether the methods agreed, and the Pearson correlation coefficient was used to evaluate the correlation between the absolute values of each method. MR classification according to each method was as follows: JA: 26 mild, 44 moderate, 20 important; VC: 12 mild, 72 moderate, 6 important; ERO: 70 mild, 15 moderate, 5 important; RV: 70 mild, 16 moderate, 4 important. The agreement among methods was poor (kappa = 0.11; p < 0.001). A strong correlation was observed between the absolute values of each method, ranging from 0.70 to 0.95 (p < 0.01), and the agreement was higher when values were divided into terciles (kappa = 0.44; p < 0.01). CONCLUSION: The use of conventional echocardiographic criteria for MR classification seems inadequate in patients with DCM. It is necessary to establish new cutoff values for MR classification in these patients.
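
    The agreement statistic used above is easy to compute for any pair of grading methods. The gradings below are invented toy numbers, not the study's data; the study compared four methods, and any pair can be checked the same way.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical MR grades for the same six patients from two methods
ja  = ["mild", "moderate", "moderate", "important", "mild", "moderate"]
ero = ["mild", "mild",     "mild",     "moderate",  "mild", "mild"]

# Cohen's kappa corrects raw agreement for agreement expected by chance
kappa = cohen_kappa_score(ja, ero)
print("Cohen's kappa:", round(kappa, 2))
```

    Kappa near zero despite correlated underlying measurements is exactly the pattern the study reports: the absolute values track each other, but the category cutoffs assign patients to different classes.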

  16. 77 FR 45611 - Federal Acquisition Regulation; Submission for OMB Review; Freight Classification Description

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-01

    ...; Submission for OMB Review; Freight Classification Description AGENCIES: Department of Defense (DOD), General... collection requirement concerning freight classification description. A notice was published in the Federal... Information Collection 9000- 0055, Freight Classification Description, by any of the following methods...

  17. EEG Sleep Stages Classification Based on Time Domain Features and Structural Graph Similarity.

    PubMed

    Diykh, Mohammed; Li, Yan; Wen, Peng

    2016-11-01

    The electroencephalogram (EEG) signals are commonly used in diagnosing and treating sleep disorders. Many existing methods for sleep stage classification mainly depend on the analysis of EEG signals in the time or frequency domain to obtain a high classification accuracy. In this paper, statistical features in the time domain, structural graph similarity and K-means (SGSKM) are combined to identify six sleep stages using single-channel EEG signals. Firstly, each EEG segment is partitioned into sub-segments. The size of a sub-segment is determined empirically. Secondly, statistical features are extracted, sorted into different sets of features and forwarded to the SGSKM to classify EEG sleep stages. We have also investigated the relationships between sleep stages and the time domain features of the EEG data used in this paper. The experimental results show that the proposed method yields better classification results than four other existing methods and the support vector machine (SVM) classifier. A 95.93% average classification accuracy is achieved by using the proposed method.
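
    A hedged sketch of the segmentation and feature-extraction steps only, on a synthetic signal (the paper additionally uses structural graph similarity, which is omitted here): split a single channel into sub-segments, compute simple time-domain statistics per sub-segment, and cluster them with K-means.

```python
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic single-channel signal with two regimes of different variance
sig = np.concatenate([rng.normal(0, 1, 3000),      # "stage A"-like activity
                      rng.normal(0, 5, 3000)])     # "stage B"-like activity

segments = sig.reshape(-1, 200)                    # 30 sub-segments of 200 samples
feats = np.column_stack([segments.mean(1), segments.std(1),
                         skew(segments, axis=1), kurtosis(segments, axis=1)])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
print(labels)
```

    On this toy signal the standard-deviation feature alone separates the two regimes; real sleep staging needs six clusters and far richer features, as the paper describes.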

  18. Motivation Classification and Grade Prediction for MOOCs Learners

    PubMed Central

    Xu, Bin; Yang, Dan

    2016-01-01

    While MOOCs offer educational data on a new scale, many educators see great potential in this big data, which includes detailed activity records of every learner. A learner's behavior, such as whether he or she will drop out of the course, can be predicted. How to provide an effective, economical, and scalable method to detect cheating on tests, such as the use of a surrogate exam-taker, is a challenging problem. In this paper, we present a grade predicting method that uses student activity features to predict whether a learner may get a certification if he/she takes a test. The method consists of two-step classifications: motivation classification (MC) and grade classification (GC). The MC divides all learners into three groups: certification earning, video watching, and course sampling. The GC then predicts whether a certification-earning learner will obtain a certification. Our experiment shows that the proposed method can fit the classification model at a fine scale and that it is possible to find a surrogate exam-taker. PMID:26884747
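
    The MC-then-GC pipeline can be sketched schematically on synthetic activity features (invented data; the feature meanings are placeholders, not the paper's): step 1 predicts the motivation group, and step 2 predicts certification only for learners the first step labels as certification-earning.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((300, 4))                 # e.g. videos watched, quizzes tried, ...
motivation = rng.integers(0, 3, 300)     # 0=certification, 1=video, 2=sampling
earned = (motivation == 0) & (X[:, 0] > 0.5)   # synthetic certification outcome

# Step 1 (MC): classify every learner's motivation group
mc = RandomForestClassifier(random_state=0).fit(X, motivation)
# Step 2 (GC): trained only on the certification-earning group
gc = RandomForestClassifier(random_state=0).fit(X[motivation == 0],
                                                earned[motivation == 0])

x_new = X[:5]
# GC applies only where MC predicts group 0; others default to no certification
pred = np.where(mc.predict(x_new) == 0, gc.predict(x_new), False)
print(pred)
```

    Cascading the two classifiers lets the second model specialize on the one group whose outcome actually matters, which is the design choice the paper argues for.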

  19. Motivation Classification and Grade Prediction for MOOCs Learners.

    PubMed

    Xu, Bin; Yang, Dan

    2016-01-01

    While MOOCs offer educational data on a new scale, many educators see great potential in this big data, which includes detailed activity records of every learner. A learner's behavior, such as whether he or she will drop out of the course, can be predicted. How to provide an effective, economical, and scalable method to detect cheating on tests, such as the use of a surrogate exam-taker, is a challenging problem. In this paper, we present a grade predicting method that uses student activity features to predict whether a learner may get a certification if he/she takes a test. The method consists of two-step classifications: motivation classification (MC) and grade classification (GC). The MC divides all learners into three groups: certification earning, video watching, and course sampling. The GC then predicts whether a certification-earning learner will obtain a certification. Our experiment shows that the proposed method can fit the classification model at a fine scale and that it is possible to find a surrogate exam-taker.

  20. Heterophilic antibody interference affecting multiple hormone assays: Is it due to rheumatoid factor?

    PubMed

    Mongolu, Shiva; Armston, Annie E; Mozley, Erin; Nasruddin, Azraai

    2016-01-01

    Assay interference from heterophilic antibodies has been well described in the literature. Rheumatoid factor is known to cause similar interference, leading to falsely elevated hormone levels when measured by immunometric methods such as enzyme-linked immunosorbent assays (ELISA) or multiplex immunoassays (MIA). We report the case of a 60-year-old male patient with a history of rheumatoid arthritis who was referred to our endocrine clinic for investigation of hypogonadism and was found to have high serum levels of LH, FSH, SHBG, Prolactin, HCG and TSH. We suspected assay interference and further tests were performed. We used Heteroblock tubes and PEG precipitation to eliminate the interference, and the hormone levels after treatment were in the normal range. We believe the interference was caused by high serum levels of rheumatoid factor. Although the patient received thyroxine for 3 years, we believe he may have been treated inappropriately, as his free T4 level was always normal despite the high TSH caused by assay interference. Our case illustrates the phenomenon of heterophilic antibody interference, likely due to high levels of rheumatoid factor. It is essential for clinicians, and endocrinologists in particular, to be aware of this possibility when making treatment decisions in this group of patients.
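
    The PEG-precipitation check used above amounts to a simple recovery calculation. The numbers and the <40% cutoff below are a common laboratory heuristic given for illustration, not values from this case report: low analyte recovery after PEG precipitation suggests macro-complex or antibody interference.

```python
def peg_recovery(pre_peg, post_peg, dilution_factor=2):
    # PEG treatment typically dilutes the sample 1:1, hence the factor of 2
    return 100.0 * post_peg * dilution_factor / pre_peg

rec = peg_recovery(pre_peg=1200.0, post_peg=150.0)  # e.g. prolactin, mIU/L (hypothetical)
print(rec)                                          # 25.0 (% recovery)
print("interference suspected" if rec < 40 else "recovery acceptable")
```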

Top