Sample records for interference classification methods

  1. 7 CFR 27.59 - Postponed classification; interference.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... shall not be allowed to interfere with or delay the classification of other samples previously made... 7 Agriculture 2 2011-01-01 2011-01-01 false Postponed classification; interference. 27.59 Section... CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Postponed...

  2. 7 CFR 27.59 - Postponed classification; interference.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... shall not be allowed to interfere with or delay the classification of other samples previously made... 7 Agriculture 2 2010-01-01 2010-01-01 false Postponed classification; interference. 27.59 Section... CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Postponed...

  3. Memory Reactivation Predicts Resistance to Retroactive Interference: Evidence from Multivariate Classification and Pattern Similarity Analyses

    PubMed Central

    Rugg, Michael D.

    2016-01-01

    Memory reactivation—the reinstatement of processes and representations engaged when an event is initially experienced—is believed to play an important role in strengthening and updating episodic memory. The present study examines how memory reactivation during a potentially interfering event influences memory for a previously experienced event. Participants underwent fMRI during the encoding phase of an AB/AC interference task in which some words were presented twice in association with two different encoding tasks (AB and AC trials) and other words were presented once (DE trials). The later memory test required retrieval of the encoding tasks associated with each of the study words. Retroactive interference was evident for the AB encoding task and was particularly strong when the AC encoding task was remembered rather than forgotten. We used multivariate classification and pattern similarity analysis (PSA) to measure reactivation of the AB encoding task during AC trials. The results demonstrated that reactivation of generic task information measured with multivariate classification predicted subsequent memory for the AB encoding task regardless of whether interference was strong or weak (trials for which the AC encoding task was remembered or forgotten, respectively). In contrast, reactivation of neural patterns idiosyncratic to a given AB trial measured with PSA only predicted memory when the strength of interference was low. These results suggest that reactivation of features of an initial experience shared across numerous events in the same category, but not features idiosyncratic to a particular event, is important in resisting retroactive interference caused by new learning. SIGNIFICANCE STATEMENT Reactivating a previously encoded memory is believed to provide an opportunity to strengthen the memory, but also to return the memory to a labile state, making it susceptible to interference. However, there is debate as to how memory reactivation elicited by

  4. A Real-Time Interference Monitoring Technique for GNSS Based on a Twin Support Vector Machine Method.

    PubMed

    Li, Wutao; Huang, Zhigang; Lang, Rongling; Qin, Honglei; Zhou, Kai; Cao, Yongbin

    2016-03-04

    Interference can severely degrade the performance of Global Navigation Satellite System (GNSS) receivers. As the first step of any GNSS anti-interference measure, interference monitoring is essential. Since interference monitoring can be considered a classification problem, a real-time interference monitoring technique based on the Twin Support Vector Machine (TWSVM) is proposed in this paper. A TWSVM model is established and solved with the Least Squares Twin Support Vector Machine (LSTWSVM) algorithm. Interference monitoring indicators are analyzed to extract features from the interfered GNSS signals, and the experimental results show that the chosen observations can serve as interference monitoring indicators. The interference monitoring performance of the proposed method is verified on the GPS L1 C/A code signal and compared with that of a standard SVM. The experimental results indicate that TWSVM-based interference monitoring is much faster than the conventional SVM. Furthermore, the training time of TWSVM is on the millisecond (ms) level and the monitoring time is on the microsecond (μs) level, which makes the proposed approach usable in practical interference monitoring applications.
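
    The LSTWSVM solver mentioned above replaces the twin SVM's pair of quadratic programs with two small linear systems, one per non-parallel hyperplane. Below is a minimal numpy sketch of that idea, not the authors' implementation: the regularization constants, the class labels, and the feature-extraction step are all illustrative.

```python
import numpy as np

def lstsvm_fit(A, B, c1=1.0, c2=1.0):
    """Least-squares twin SVM idea: fit two non-parallel hyperplanes,
    one close to class A and far from class B, and vice versa.
    Each fit is a single small linear solve instead of a QP."""
    E = np.hstack([A, np.ones((A.shape[0], 1))])  # augmented class-A matrix
    F = np.hstack([B, np.ones((B.shape[0], 1))])  # augmented class-B matrix
    e_a = np.ones(A.shape[0])
    e_b = np.ones(B.shape[0])
    # Plane 1: min ||E z||^2 + c1 ||F z + e||^2  ->  normal equations
    z1 = -c1 * np.linalg.solve(E.T @ E + c1 * (F.T @ F), F.T @ e_b)
    # Plane 2: min ||F z||^2 + c2 ||E z - e||^2
    z2 = c2 * np.linalg.solve(F.T @ F + c2 * (E.T @ E), E.T @ e_a)
    return z1, z2  # each stacks (w, b)

def lstsvm_predict(X, z1, z2):
    """Assign each sample to the class whose hyperplane is nearer."""
    Xa = np.hstack([X, np.ones((X.shape[0], 1))])
    d1 = np.abs(Xa @ z1) / np.linalg.norm(z1[:-1])
    d2 = np.abs(Xa @ z2) / np.linalg.norm(z2[:-1])
    return np.where(d1 <= d2, 0, 1)  # 0 = class A ("clean"), 1 = class B ("interfered")
```

    Because each plane comes from one linear solve, training cost is dominated by two (d+1)x(d+1) systems, which is consistent with the millisecond-level training times reported above.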

  5. A Real-Time Interference Monitoring Technique for GNSS Based on a Twin Support Vector Machine Method

    PubMed Central

    Li, Wutao; Huang, Zhigang; Lang, Rongling; Qin, Honglei; Zhou, Kai; Cao, Yongbin

    2016-01-01

    Interference can severely degrade the performance of Global Navigation Satellite System (GNSS) receivers. As the first step of any GNSS anti-interference measure, interference monitoring is essential. Since interference monitoring can be considered a classification problem, a real-time interference monitoring technique based on the Twin Support Vector Machine (TWSVM) is proposed in this paper. A TWSVM model is established and solved with the Least Squares Twin Support Vector Machine (LSTWSVM) algorithm. Interference monitoring indicators are analyzed to extract features from the interfered GNSS signals, and the experimental results show that the chosen observations can serve as interference monitoring indicators. The interference monitoring performance of the proposed method is verified on the GPS L1 C/A code signal and compared with that of a standard SVM. The experimental results indicate that TWSVM-based interference monitoring is much faster than the conventional SVM. Furthermore, the training time of TWSVM is on the millisecond (ms) level and the monitoring time is on the microsecond (μs) level, which makes the proposed approach usable in practical interference monitoring applications. PMID:26959020

  6. Chlorinated Cyanurates: Method Interferences and Application Implications

    EPA Science Inventory

    Experiments were conducted to investigate method interferences, residual stability, regulated DBP formation, and a water chemistry model associated with the use of Dichlor & Trichlor in drinking water.

  7. 7 CFR 28.35 - Method of classification.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Method of classification. 28.35 Section 28.35 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards... Classification § 28.35 Method of classification. All cotton samples shall be classified on the basis of the...

  8. A Comparison of Two-Group Classification Methods

    ERIC Educational Resources Information Center

    Holden, Jocelyn E.; Finch, W. Holmes; Kelley, Ken

    2011-01-01

    The statistical classification of "N" individuals into "G" mutually exclusive groups when the actual group membership is unknown is common in the social and behavioral sciences. The results of such classification methods often have important consequences. Among the most common methods of statistical classification are linear discriminant analysis,…
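
    Among the methods named above, linear discriminant analysis reduces to a closed-form rule in the two-group case. A minimal sketch under the usual pooled-covariance and equal-prior assumptions (function names and data are illustrative):

```python
import numpy as np

def fisher_lda_fit(X1, X2):
    """Two-group Fisher discriminant: w = S_w^{-1} (mu1 - mu2), with a
    midpoint threshold on the projected group means (equal priors assumed)."""
    mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
    # Pooled within-group scatter matrix
    S_w = (np.cov(X1, rowvar=False) * (len(X1) - 1)
           + np.cov(X2, rowvar=False) * (len(X2) - 1))
    w = np.linalg.solve(S_w, mu1 - mu2)
    threshold = w @ (mu1 + mu2) / 2.0
    return w, threshold

def fisher_lda_predict(X, w, threshold):
    """Label 1 if the projection falls on group 1's side of the midpoint."""
    return np.where(X @ w >= threshold, 1, 2)
```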

  9. A New Reassigned Spectrogram Method in Interference Detection for GNSS Receivers.

    PubMed

    Sun, Kewen; Jin, Tian; Yang, Dongkai

    2015-09-02

    Interference detection is very important for Global Navigation Satellite System (GNSS) receivers. Current work on interference detection in GNSS receivers has mainly focused on time-frequency (TF) analysis techniques such as the spectrogram and the Wigner-Ville distribution (WVD): the spectrogram suffers from a TF resolution trade-off because of its analysis window, while the WVD suffers from a serious cross-term problem due to its quadratic TF distribution nature. To solve the cross-term problem while preserving good TF resolution, this paper proposes a new TF distribution based on a reassigned spectrogram for interference detection in GNSS receivers. The proposed method combines the elimination of cross-terms, inherent in the spectrogram itself, with the improved TF aggregation achieved by the reassignment method. Moreover, a notch filter is adopted for interference mitigation, with receiver operating characteristics (ROCs) used as metrics to characterize mitigation performance. The proposed detection method is evaluated in experiments on GPS L1 signals in disturbing scenarios against state-of-the-art TF analysis approaches. The analysis results show that the proposed technique effectively overcomes the cross-term problem while keeping good TF localization properties, enhancing interference detection performance; in addition, the notch filter yields a significant acquisition performance improvement in terms of ROC curves for GNSS receivers in jamming environments.
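
    The spectrogram stage of such a detector can be sketched in a few lines. This is a hedged illustration of plain spectrogram thresholding only: the reassignment step that is the paper's contribution is omitted, and the energy-ratio threshold is an assumed value, not one from the paper.

```python
import numpy as np

def spectrogram(x, nfft=256, hop=128):
    """Magnitude-squared STFT with a Hann window (a plain spectrogram;
    the reassignment refinement is omitted in this sketch)."""
    win = np.hanning(nfft)
    frames = [x[i:i + nfft] * win for i in range(0, len(x) - nfft + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)) ** 2  # shape: time x frequency

def detect_interference(x, ratio=30.0):
    """Flag interference when some TF cell dominates the median cell power.
    The ratio threshold is illustrative, not a value from the paper."""
    S = spectrogram(x)
    return bool(S.max() > ratio * np.median(S))
```

    A narrowband jammer concentrates power in a few TF cells, so the max-to-median ratio separates it cleanly from a flat noise floor; reassignment would sharpen the localization of that energy further.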

  10. A New Reassigned Spectrogram Method in Interference Detection for GNSS Receivers

    PubMed Central

    Sun, Kewen; Jin, Tian; Yang, Dongkai

    2015-01-01

    Interference detection is very important for Global Navigation Satellite System (GNSS) receivers. Current work on interference detection in GNSS receivers has mainly focused on time-frequency (TF) analysis techniques such as the spectrogram and the Wigner–Ville distribution (WVD): the spectrogram suffers from a TF resolution trade-off because of its analysis window, while the WVD suffers from a serious cross-term problem due to its quadratic TF distribution nature. To solve the cross-term problem while preserving good TF resolution, this paper proposes a new TF distribution based on a reassigned spectrogram for interference detection in GNSS receivers. The proposed method combines the elimination of cross-terms, inherent in the spectrogram itself, with the improved TF aggregation achieved by the reassignment method. Moreover, a notch filter is adopted for interference mitigation, with receiver operating characteristics (ROCs) used as metrics to characterize mitigation performance. The proposed detection method is evaluated in experiments on GPS L1 signals in disturbing scenarios against state-of-the-art TF analysis approaches. The analysis results show that the proposed technique effectively overcomes the cross-term problem while keeping good TF localization properties, enhancing interference detection performance; in addition, the notch filter yields a significant acquisition performance improvement in terms of ROC curves for GNSS receivers in jamming environments. PMID:26364637

  11. Proactive control of proactive interference using the method of loci.

    PubMed

    Bass, Willa S; Oswald, Karl M

    2014-01-01

    Proactive interference builds up with exposure to multiple lists of similar items, with a resulting reduction in recall. This study examined the effectiveness of using a proactive strategy of the method of loci to reduce proactive interference in a list recall paradigm of categorically similar words. While all participants reported using some form of strategy to recall list words, this study demonstrated that young adults were able to proactively use the method of loci after 25 min of instruction to reduce proactive interference as compared with other personal spontaneous strategies. The implications of this study are that top-down proactive strategies such as the method of loci can significantly reduce proactive interference, and that the use of image and sequence or location are especially useful in this regard.

  12. Detection and severity classification of extracardiac interference in {sup 82}Rb PET myocardial perfusion imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orton, Elizabeth J., E-mail: eorton@physics.carleton.ca; de Kemp, Robert A.; Wells, R. Glenn

    2014-10-15

    Purpose: Myocardial perfusion imaging (MPI) is used for diagnosis and prognosis of coronary artery disease. When MPI studies are performed with positron emission tomography (PET) and the radioactive tracer rubidium-82 chloride ({sup 82}Rb), a small but non-negligible fraction of studies (∼10%) suffer from extracardiac interference: high levels of tracer uptake in structures adjacent to the heart which mask the true cardiac tracer uptake. At present, there are no clinically available options for automated detection or correction of this problem. This work presents an algorithm that detects and classifies the severity of extracardiac interference in {sup 82}Rb PET MPI images and reports the accuracy and failure rate of the method. Methods: A set of 200 {sup 82}Rb PET MPI images were reviewed by a trained nuclear cardiologist and interference severity reported on a four-class scale, from absent to severe. An automated algorithm was developed that compares uptake at the external border of the myocardium to three thresholds, separating the four interference severity classes. A minimum area of interference was required, and the search region was limited to that facing the stomach wall and spleen. Maximizing concordance (Cohen’s Kappa) and minimizing failure rate for the set of 200 clinician-read images were used to find the optimal population-based constants defining search limit and minimum area parameters and the thresholds for the algorithm. Tenfold stratified cross-validation was used to find optimal thresholds and report accuracy measures (sensitivity, specificity, and Kappa). Results: The algorithm was capable of detecting interference with a mean [95% confidence interval] sensitivity/specificity/Kappa of 0.97 [0.94, 1.00]/0.82 [0.66, 0.98]/0.79 [0.65, 0.92], and a failure rate of 1.0% ± 0.2%. The four-class overall Kappa was 0.72 [0.64, 0.81]. Separation of mild versus moderate-or-greater interference was performed with good accuracy (sensitivity
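
    The comparison step described above, testing a border-uptake measure against three ascending thresholds to separate the four severity classes, can be sketched as follows; the threshold values here are placeholders rather than the paper's cross-validated constants:

```python
def classify_interference(uptake_ratio, thresholds=(0.2, 0.4, 0.6)):
    """Map a border-uptake ratio to the four-class severity scale by
    comparing it to three ascending thresholds (threshold values are
    illustrative, not the fitted population-based constants)."""
    labels = ("absent", "mild", "moderate", "severe")
    for label, t in zip(labels, thresholds):
        if uptake_ratio < t:
            return label
    return labels[-1]
```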

  13. Method of managing interference during delay recovery on a train system

    DOEpatents

    Gordon, Susanna P.; Evans, John A.

    2005-12-27

    The present invention provides methods for preventing low train voltages and managing interference, thereby improving the efficiency, reliability, and passenger comfort associated with commuter trains. An algorithm implementing neural network technology is used to predict low voltages before they occur. Once voltages are predicted, then multiple trains can be controlled to prevent low voltage events. Further, algorithms for managing interference are presented in the present invention. Different types of interference problems are addressed in the present invention such as "Interference During Acceleration", "Interference Near Station Stops", and "Interference During Delay Recovery." Managing such interference avoids unnecessary brake/acceleration cycles during acceleration, immediately before station stops, and after substantial delays. Algorithms are demonstrated to avoid oscillatory brake/acceleration cycles due to interference and to smooth the trajectories of closely following trains. This is achieved by maintaining sufficient following distances to avoid unnecessary braking/accelerating. These methods generate smooth train trajectories, making for a more comfortable ride, and improve train motor reliability by avoiding unnecessary mode-changes between propulsion and braking. These algorithms can also have a favorable impact on traction power system requirements and energy consumption.

  14. Proactive control of proactive interference using the method of loci

    PubMed Central

    Bass, Willa S.; Oswald, Karl M.

    2014-01-01

    Proactive interference builds up with exposure to multiple lists of similar items, with a resulting reduction in recall. This study examined the effectiveness of using a proactive strategy of the method of loci to reduce proactive interference in a list recall paradigm of categorically similar words. While all participants reported using some form of strategy to recall list words, this study demonstrated that young adults were able to proactively use the method of loci after 25 min of instruction to reduce proactive interference as compared with other personal spontaneous strategies. The implications of this study are that top-down proactive strategies such as the method of loci can significantly reduce proactive interference, and that the use of image and sequence or location are especially useful in this regard. PMID:25157300

  15. Instructional Method Classifications Lack User Language and Orientation

    ERIC Educational Resources Information Center

    Neumann, Susanne; Koper, Rob

    2010-01-01

    Following publications emphasizing the need of a taxonomy for instructional methods, this article presents a literature review on classifications for learning and teaching in order to identify possible classifications for instructional methods. Data was collected for 37 classifications capturing the origins, theoretical underpinnings, purposes and…

  16. An Improved Time-Frequency Analysis Method in Interference Detection for GNSS Receivers

    PubMed Central

    Sun, Kewen; Jin, Tian; Yang, Dongkai

    2015-01-01

    In this paper, an improved joint time-frequency (TF) analysis method based on a reassigned smoothed pseudo Wigner–Ville distribution (RSPWVD) is proposed for interference detection in Global Navigation Satellite System (GNSS) receivers. In the RSPWVD, a two-dimensional low-pass smoothing function is introduced to eliminate the cross-terms present in the quadratic TF distribution, while the reassignment method improves the TF concentration of the auto-terms of the signal components. The proposed interference detection method is evaluated in experiments on GPS L1 signals in disturbing scenarios against state-of-the-art interference detection approaches. The analysis results show that the proposed technique effectively overcomes the cross-term problem while preserving good TF localization properties, enhancing the interference detection performance of GNSS receivers, particularly in jamming environments. PMID:25905704

  17. Some new classification methods for hyperspectral remote sensing

    NASA Astrophysics Data System (ADS)

    Du, Pei-jun; Chen, Yun-hao; Jones, Simon; Ferwerda, Jelle G.; Chen, Zhi-jun; Zhang, Hua-peng; Tan, Kun; Yin, Zuo-xia

    2006-10-01

    Hyperspectral Remote Sensing (HRS) is one of the most significant recent achievements of Earth Observation technology, and classification is its most commonly employed processing methodology. In this paper three new hyperspectral RS image classification methods are analyzed: object-oriented HRS image classification, HRS image classification based on information fusion, and HRS image classification by Back Propagation Neural Network (BPNN). An OMIS HRS image is used as the example data. Object-oriented techniques have gained popularity for RS image classification in recent years. In such methods, image segmentation is first used to extract regions from the pixel information based on homogeneity criteria; spectral parameters such as the mean vector, texture and NDVI, and spatial/shape parameters such as aspect ratio, convexity, solidity, roundness and orientation are then calculated for each region; finally, the image is classified using the region feature vectors and suitable classifiers such as an artificial neural network (ANN). Object-oriented methods can improve classification accuracy because they utilize information and features from both the point and its neighborhood, and the processing unit is a polygon in which all pixels are homogeneous and belong to the same class. HRS image classification based on information fusion first divides all bands of the image into different groups and extracts features from every group according to the properties of each group. Three levels of information fusion, data-level, feature-level and decision-level fusion, are applied to HRS image classification. Artificial Neural Networks (ANNs) can perform well in RS image classification. To advance the use of ANNs for HRS image classification, the Back Propagation Neural Network (BPNN), the most commonly used neural network, is applied to HRS image classification.

  18. Bacterial cell identification in differential interference contrast microscopy images.

    PubMed

    Obara, Boguslaw; Roberts, Mark A J; Armitage, Judith P; Grau, Vicente

    2013-04-23

    Microscopy image segmentation lays the foundation for shape analysis, motion tracking, and classification of biological objects. Despite its importance, automated segmentation remains challenging for several widely used non-fluorescence, interference-based microscopy imaging modalities, for example differential interference contrast microscopy, which plays an important role in modern bacterial cell biology. New advances in the field therefore require tools, technologies and work-flows that extract and exploit information from interference-based imaging data so as to achieve new fundamental biological insights and understanding. We have developed and evaluated a high-throughput image analysis and processing approach to detect and characterize bacterial cells and chemotaxis proteins. Its performance was evaluated using differential interference contrast and fluorescence microscopy images of Rhodobacter sphaeroides. The results demonstrate that the proposed approach provides a fast and robust method for detecting bacterial cells and analyzing the spatial relationship between the cells and their chemotaxis proteins.

  19. Subspace-based interference removal methods for a multichannel biomagnetic sensor array.

    PubMed

    Sekihara, Kensuke; Nagarajan, Srikantan S

    2017-10-01

    In biomagnetic signal processing, the theory of the signal subspace has been applied to removing interfering magnetic fields, and a representative algorithm is the signal space projection algorithm, in which the signal/interference subspace is defined in the spatial domain as the span of signal/interference-source lead field vectors. This paper extends the notion of this conventional (spatial-domain) signal subspace by introducing a new definition of signal subspace in the time domain. It defines the time-domain signal subspace as the span of row vectors that contain the source time course values. This definition leads to symmetric relationships between the time-domain and the conventional (spatial-domain) signal subspaces. As a review, this article shows that the notion of the time-domain signal subspace provides useful insights into existing interference removal methods from a unified perspective. Using the time-domain signal subspace, it is possible to interpret a number of interference removal methods as the time-domain signal space projection. Such methods include adaptive noise canceling, sensor noise suppression, the common temporal subspace projection, the spatio-temporal signal space separation, and the recently-proposed dual signal subspace projection. Our analysis using the notion of the time-domain signal space projection reveals implicit assumptions these methods rely on, and shows that the difference between these methods results only from the manner of deriving the interference subspace. Numerical examples that illustrate the results of our arguments are provided.
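
    The spatial-domain signal space projection that this work generalizes can be sketched generically: given any basis for the interference subspace, the data are projected onto its orthogonal complement. This is an illustration of the textbook operation, not the authors' code.

```python
import numpy as np

def ssp_remove(data, interference_basis):
    """Spatial-domain signal space projection: project multichannel data
    onto the orthogonal complement of the interference subspace.
    data: channels x samples; interference_basis: channels x k."""
    B, _ = np.linalg.qr(interference_basis)  # orthonormalize the span
    return data - B @ (B.T @ data)           # (I - B B^T) @ data
```

    Any brain-signal component that happens to lie in the interference subspace is removed along with the interference, which is the known cost of purely spatial projection; the time-domain view above explains how the various methods differ only in how this subspace is derived.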

  20. Subspace-based interference removal methods for a multichannel biomagnetic sensor array

    NASA Astrophysics Data System (ADS)

    Sekihara, Kensuke; Nagarajan, Srikantan S.

    2017-10-01

    Objective. In biomagnetic signal processing, the theory of the signal subspace has been applied to removing interfering magnetic fields, and a representative algorithm is the signal space projection algorithm, in which the signal/interference subspace is defined in the spatial domain as the span of signal/interference-source lead field vectors. This paper extends the notion of this conventional (spatial domain) signal subspace by introducing a new definition of signal subspace in the time domain. Approach. It defines the time-domain signal subspace as the span of row vectors that contain the source time course values. This definition leads to symmetric relationships between the time-domain and the conventional (spatial-domain) signal subspaces. As a review, this article shows that the notion of the time-domain signal subspace provides useful insights into existing interference removal methods from a unified perspective. Main results and significance. Using the time-domain signal subspace, it is possible to interpret a number of interference removal methods as the time-domain signal space projection. Such methods include adaptive noise canceling, sensor noise suppression, the common temporal subspace projection, the spatio-temporal signal space separation, and the recently-proposed dual signal subspace projection. Our analysis using the notion of the time-domain signal space projection reveals implicit assumptions these methods rely on, and shows that the difference between these methods results only from the manner of deriving the interference subspace. Numerical examples that illustrate the results of our arguments are provided.

  21. Quasiparticle Interference Studies of Quantum Materials.

    PubMed

    Avraham, Nurit; Reiner, Jonathan; Kumar-Nayak, Abhay; Morali, Noam; Batabyal, Rajib; Yan, Binghai; Beidenkopf, Haim

    2018-06-03

    Exotic electronic states are realized in novel quantum materials, a field revolutionized by the topological classification of materials. Such compounds necessarily host unique states on their boundaries. Scanning tunneling microscopy studies of these surface states have provided a wealth of spectroscopic characterization, in successful combination with ab initio calculations. The method of quasiparticle interference imaging proves particularly useful for probing the dispersion relation of the surface bands. Herein, it is reviewed how a variety of additional fundamental electronic properties can be probed via this method. Quasiparticle interference measurements are shown to capture mesoscopic size quantization and electronic phase coherence in semiconducting nanowires; helical spin protection and energy-momentum fluctuations in a topological insulator; and the structure of the Bloch wave function and the relative insusceptibility of topological electronic states to surface potential in a topological Weyl semimetal.

  22. Spatiotemporal signal space separation method for rejecting nearby interference in MEG measurements

    NASA Astrophysics Data System (ADS)

    Taulu, S.; Simola, J.

    2006-04-01

    Limitations of traditional magnetoencephalography (MEG) exclude some important patient groups from MEG examinations, such as epilepsy patients with a vagus nerve stimulator, patients with magnetic particles on the head or having magnetic dental materials that cause severe movement-related artefact signals. Conventional interference rejection methods are not able to remove the artefacts originating this close to the MEG sensor array. For example, the reference array method is unable to suppress interference generated by sources closer to the sensors than the reference array, about 20-40 cm. The spatiotemporal signal space separation method proposed in this paper recognizes and removes both external interference and the artefacts produced by these nearby sources, even on the scalp. First, the basic separation into brain-related and external interference signals is accomplished with signal space separation based on sensor geometry and Maxwell's equations only. After this, the artefacts from nearby sources are extracted by a simple statistical analysis in the time domain, and projected out. Practical examples with artificial current dipoles and interference sources as well as data from real patients demonstrate that the method removes the artefacts without altering the field patterns of the brain signals.

  23. Weak scratch detection and defect classification methods for a large-aperture optical element

    NASA Astrophysics Data System (ADS)

    Tao, Xian; Xu, De; Zhang, Zheng-Tao; Zhang, Feng; Liu, Xi-Long; Zhang, Da-Peng

    2017-03-01

    Surface defects on optics cause failure of the optic and heavy losses to the optical system, so surface defects on optics must be carefully inspected. This paper proposes a coarse-to-fine strategy for detecting weak scratches in complicated dark-field images. First, all possible scratches are detected based on bionic vision. Then, each possible scratch is precisely positioned and connected into a complete scratch by the LSD and a priori knowledge. Finally, multiple scratches of various types can be detected in dark-field images. To distinguish defects from pollutants, a classification method based on GIST features is proposed. Many real dark-field images are used as experimental images. The results show that this method can detect multiple types of weak scratches in complex images and that defects can be correctly distinguished even in the presence of interference. The method satisfies the real-time and accuracy requirements of surface defect detection.

  24. Adaptive phase k-means algorithm for waveform classification

    NASA Astrophysics Data System (ADS)

    Song, Chengyun; Liu, Zhining; Wang, Yaojun; Xu, Feng; Li, Xingming; Hu, Guangmin

    2018-01-01

    Waveform classification is a powerful technique for seismic facies analysis that describes the heterogeneity and compartments within a reservoir. Horizon interpretation is a critical step in waveform classification; however, the horizon often produces inconsistent waveform phase and thus an unsatisfactory classification. To alleviate this problem, an adaptive phase waveform classification method called the adaptive phase k-means is introduced in this paper. Our method improves the traditional k-means algorithm by using an adaptive phase distance as the waveform similarity measure. The proposed distance is a measure whose phase varies as it moves from sample to sample along the traces. Model traces are also updated with the best-fitting phase in the iterative process. Therefore, our method is robust to phase variations caused by the interpretation horizon. We tested the effectiveness of our algorithm by applying it to synthetic and real data. The satisfactory results reveal that the proposed method tolerates a certain amount of waveform phase variation and is a good tool for seismic facies analysis.
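
    The phase-adaptive distance at the core of the method can be sketched with the analytic signal: the model trace is rotated by the constant phase that best fits the input trace before the Euclidean distance is taken. This is a simplified illustration; the paper lets the phase vary from sample to sample, whereas this sketch fits a single phase per trace.

```python
import numpy as np

def analytic(x):
    """Analytic signal via the FFT (discrete Hilbert transform), real 1-D input."""
    X = np.fft.fft(x)
    h = np.zeros(len(x))
    h[0] = 1.0
    h[1:(len(x) + 1) // 2] = 2.0   # double positive frequencies
    if len(x) % 2 == 0:
        h[len(x) // 2] = 1.0       # keep the Nyquist bin
    return np.fft.ifft(X * h)

def phase_distance(trace, model):
    """Distance after rotating the model by its best-fitting constant phase:
    rotate(model, phi) = cos(phi)*model - sin(phi)*Hilbert(model)."""
    mh = analytic(model).imag                        # Hilbert transform of the model
    phi = np.arctan2(-(trace @ mh), trace @ model)   # closed-form best phase
    rotated = np.cos(phi) * model - np.sin(phi) * mh
    return np.linalg.norm(trace - rotated)
```

    Plugging this distance into the assignment step of k-means, and phase-aligning traces before averaging them into new model traces, gives the phase-robust clustering behavior described above.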

  25. Impact of IRT item misfit on score estimates and severity classifications: an examination of PROMIS depression and pain interference item banks.

    PubMed

    Zhao, Yue

    2017-03-01

    In patient-reported outcome research that utilizes item response theory (IRT), using statistical significance tests to detect misfit is usually the focus of IRT model-data fit evaluations. However, such evaluations rarely address the impact/consequence of using misfitting items on the intended clinical applications. This study was designed to evaluate the impact of IRT item misfit on score estimates and severity classifications and to demonstrate a recommended process of model-fit evaluation. Using secondary data sources collected from the Patient-Reported Outcome Measurement Information System (PROMIS) wave 1 testing phase, analyses were conducted based on the PROMIS depression (28 items; 782 cases) and pain interference (41 items; 845 cases) item banks. The identification of misfitting items was assessed using Orlando and Thissen's summed-score item-fit statistics and graphical displays. The impact of misfit was evaluated according to the agreement of both IRT-derived T-scores and severity classifications between inclusion and exclusion of misfitting items. The examination of the presence and impact of misfit suggested that item misfit had a negligible impact on T-score estimates and severity classifications for the general population sample in the PROMIS depression and pain interference item banks. Findings support the T-score estimates in the two item banks as robust against item misfit at both the group and individual levels and add confidence to the use of T-scores for severity diagnosis in the studied sample. Recommendations on approaches for identifying item misfit (statistical significance) and assessing the misfit impact (practical significance) are given.

  6. 7 CFR 28.179 - Methods of cotton classification and comparison.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 2 2013-01-01 2013-01-01 false Methods of cotton classification and comparison. 28... STANDARD CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.179 Methods of cotton classification and comparison. The classification of samples from...

  7. 7 CFR 28.179 - Methods of cotton classification and comparison.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Methods of cotton classification and comparison. 28... STANDARD CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.179 Methods of cotton classification and comparison. The classification of samples from...

  8. 7 CFR 28.179 - Methods of cotton classification and comparison.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 2 2012-01-01 2012-01-01 false Methods of cotton classification and comparison. 28... STANDARD CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.179 Methods of cotton classification and comparison. The classification of samples from...

  9. 7 CFR 28.179 - Methods of cotton classification and comparison.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Methods of cotton classification and comparison. 28... STANDARD CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.179 Methods of cotton classification and comparison. The classification of samples from...

  10. 7 CFR 28.179 - Methods of cotton classification and comparison.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 2 2014-01-01 2014-01-01 false Methods of cotton classification and comparison. 28... STANDARD CONTAINER REGULATIONS COTTON CLASSING, TESTING, AND STANDARDS Classification for Foreign Growth Cotton § 28.179 Methods of cotton classification and comparison. The classification of samples from...

  11. Hierarchic Agglomerative Clustering Methods for Automatic Document Classification.

    ERIC Educational Resources Information Center

    Griffiths, Alan; And Others

    1984-01-01

    Considers classifications produced by application of single linkage, complete linkage, group average, and word clustering methods to Keen and Cranfield document test collections, and studies structure of hierarchies produced, extent to which methods distort input similarity matrices during classification generation, and retrieval effectiveness…

  12. Advanced Steel Microstructural Classification by Deep Learning Methods.

    PubMed

    Azimi, Seyed Majid; Britz, Dominik; Engstler, Michael; Fritz, Mario; Mücklich, Frank

    2018-02-01

    The inner structure of a material is called its microstructure. It stores the genesis of a material and determines all of its physical and chemical properties. While microstructural characterization is widespread and well understood, microstructural classification is mostly done manually by human experts, which gives rise to uncertainty due to subjectivity. Since a microstructure can be a combination of different phases or constituents with complex substructures, its automatic classification is very challenging, and only a few prior studies exist. Prior works focused on features designed and engineered by experts and classified microstructures separately from the feature extraction step. Recently, deep learning methods have shown strong performance in vision applications by learning the features from data together with the classification step. In this work, we propose a deep learning method for microstructural classification, demonstrated on certain microstructural constituents of low-carbon steel. This method employs pixel-wise segmentation via a fully convolutional neural network (FCNN) accompanied by a max-voting scheme. Our system achieves 93.94% classification accuracy, drastically outperforming the state-of-the-art method's 48.89% accuracy. Beyond the strong performance of our method, this line of research offers a more robust and, above all, objective approach to the difficult task of steel quality assessment.

  13. A new pre-classification method based on associative matching method

    NASA Astrophysics Data System (ADS)

    Katsuyama, Yutaka; Minagawa, Akihiro; Hotta, Yoshinobu; Omachi, Shinichiro; Kato, Nei

    2010-01-01

    Reducing the time complexity of character matching is critical to the development of efficient Japanese Optical Character Recognition (OCR) systems. To shorten processing time, recognition is usually split into separate pre-classification and recognition stages. For high overall recognition performance, the pre-classification stage must have very high classification accuracy while returning only a small number of putative character categories for further processing. Furthermore, for any practical system, the speed of the pre-classification stage itself is critical. The associative matching (AM) method has often been used for fast pre-classification because its use of a hash table, and its reliance solely on logical bit operations to select categories, make it highly efficient. However, a certain level of redundancy exists in the hash table because it is constructed using only the minimum and maximum values of the data on each axis and therefore does not take account of the distribution of the data. We propose a modified associative matching method that satisfies the performance criteria described above in a fraction of the time by modifying the hash table to reflect the underlying distribution of the training characters. Furthermore, we show that our approach outperforms pre-classification by clustering, ANN, and conventional AM in terms of classification accuracy, discriminative power, and speed. Compared to conventional associative matching, the proposed approach achieves a 47% reduction in total processing time across an evaluation test set comprising 116,528 Japanese character images.
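    A conventional AM-style table of the kind the abstract criticizes, built only from per-axis category minima and maxima and queried with logical bit operations, can be sketched as follows (an illustrative reconstruction, not the authors' code; the bucket count and data are invented):

```python
import numpy as np

def _bucket(v, lo, hi, n):
    # Map a feature value to one of n equal-width buckets on [lo, hi].
    if hi == lo:
        return 0
    return min(max(int((v - lo) / (hi - lo) * n), 0), n - 1)

def build_tables(samples, labels, n_buckets=8):
    # Per-axis bucket tables: each category occupies, on every axis, all
    # buckets between its training minimum and maximum on that axis.
    # Categories are stored as bits so lookup needs only AND operations.
    samples = np.asarray(samples, dtype=float)
    labels = np.asarray(labels)
    lo, hi = samples.min(axis=0), samples.max(axis=0)
    n_axes = samples.shape[1]
    tables = [[0] * n_buckets for _ in range(n_axes)]
    for cat in np.unique(labels):
        sub = samples[labels == cat]
        for d in range(n_axes):
            b0 = _bucket(sub[:, d].min(), lo[d], hi[d], n_buckets)
            b1 = _bucket(sub[:, d].max(), lo[d], hi[d], n_buckets)
            for b in range(b0, b1 + 1):
                tables[d][b] |= 1 << int(cat)
    return tables, lo, hi

def candidates(x, tables, lo, hi, n_buckets=8):
    # AND the per-axis bitmasks; surviving bits are candidate categories.
    mask = -1  # all bits set (Python ints are unbounded)
    for d, table in enumerate(tables):
        mask &= table[_bucket(x[d], lo[d], hi[d], n_buckets)]
    return [c for c in range(mask.bit_length()) if (mask >> c) & 1]
```

    The redundancy the authors address is visible here: a category occupies every bucket between its min and max on an axis, even buckets that contain none of its training data.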

  14. Ramsey method for Auger-electron interference induced by an attosecond twin pulse

    NASA Astrophysics Data System (ADS)

    Buth, Christian; Schafer, Kenneth J.

    2015-02-01

    We examine the archetype of an interference experiment for Auger electrons: two electron wave packets are launched by inner-shell ionizing a krypton atom using two attosecond light pulses with a variable time delay. This setting is an attosecond realization of the Ramsey method of separated oscillatory fields. Interference of the two ejected Auger-electron wave packets is predicted, indicating that the coherence between the two pulses is passed to the Auger electrons. For the detection of the interference pattern an accurate coincidence measurement of photo- and Auger electrons is necessary. The method allows one to control inner-shell electron dynamics on an attosecond timescale and represents a sensitive indicator for decoherence.

  15. Interference coupling analysis based on a hybrid method: application to a radio telescope system

    NASA Astrophysics Data System (ADS)

    Xu, Qing-Lin; Qiu, Yang; Tian, Jin; Liu, Qi

    2018-02-01

    Because it works by passively receiving electromagnetic radiation from celestial bodies, a radio telescope is easily disturbed by external radio frequency interference as well as by electromagnetic interference generated by the electric and electronic components operating at the telescope site. These interferences must be analyzed quantitatively and carefully for further electromagnetic protection of the radio telescope. In this paper, based on electromagnetic topology theory, a hybrid method that combines the Baum-Liu-Tesche (BLT) equation and transfer functions is proposed. In this method, the coupling paths of the radio telescope are divided into strong-coupling and weak-coupling sub-paths, and a coupling intensity criterion is proposed by analyzing the conditions under which the BLT equation simplifies to a transfer function. According to the coupling intensity criterion, the topological model of a typical radio telescope system is established. The proposed method solves for the interference response of the radio telescope system by analyzing subsystems with different coupling modes separately and then integrating the responses of the subsystems into the response of the entire system. The validity of the proposed method is verified numerically. The results indicate that, compared with the direct solving method, the proposed method reduces the difficulty and improves the efficiency of interference prediction.

  16. Dialect Interference in Lexical Processing: Effects of Familiarity and Social Stereotypes.

    PubMed

    Clopper, Cynthia

    2017-01-01

    The current study explored the roles of dialect familiarity and social stereotypes in dialect interference effects in a speeded lexical classification task. Listeners classified the words bad and bed or had and head produced by local Midland and non-local Northern talkers, and the words sod and side or rod and ride produced by non-local, non-stereotyped Northern and non-local, stereotyped Southern talkers, in single- and mixed-talker blocks. Lexical classification was better for the local dialect than for the non-local dialects, and for the stereotyped non-local dialect than for the non-stereotyped non-local dialect. Dialect interference effects were observed for all three dialects, although the patterns of interference differed: for the local dialect, interference was observed in response times, whereas for the non-local dialects, interference was observed primarily in accuracy. These findings reveal complex interactions between indexical and lexical information in speech processing. © 2016 S. Karger AG, Basel.

  17. Lagrangian methods of cosmic web classification

    NASA Astrophysics Data System (ADS)

    Fisher, J. D.; Faltenbacher, A.; Johnson, M. S. T.

    2016-05-01

    The cosmic web defines the large-scale distribution of matter we see in the Universe today. Classifying the cosmic web into voids, sheets, filaments and nodes allows one to explore structure formation and the role environmental factors have on halo and galaxy properties. While existing studies of cosmic web classification concentrate on grid-based methods, this work explores a Lagrangian approach where the V-web algorithm proposed by Hoffman et al. is implemented with techniques borrowed from smoothed particle hydrodynamics. The Lagrangian approach allows one to classify individual objects (e.g. particles or haloes) based on properties of their nearest neighbours in an adaptive manner. It can be applied directly to a halo sample which dramatically reduces computational cost and potentially allows an application of this classification scheme to observed galaxy samples. Finally, the Lagrangian nature admits a straightforward inclusion of the Hubble flow negating the necessity of a visually defined threshold value which is commonly employed by grid-based classification methods.

  18. [Research on the method of interference correction for nondispersive infrared multi-component gas analysis].

    PubMed

    Sun, You-Wen; Liu, Wen-Qing; Wang, Shi-Mei; Huang, Shu-Hua; Yu, Xiao-Man

    2011-10-01

    A method of interference correction for nondispersive infrared (NDIR) multi-component gas analysis is described. According to the integrated gas absorption models and methods, the influence of temperature and pressure on the integrated line strengths and line shapes was considered, and, based on Lorentzian line shapes, the absorption cross sections and response coefficients of H2O, CO2, CO, and NO on each filter channel were obtained. Four-dimensional linear regression equations for interference correction were established from the response coefficients; the cross-absorption interference was corrected by solving these multi-dimensional linear regression equations; and, after interference correction, the pure absorbance signal on each filter channel was controlled only by the concentration of the corresponding target gas. When the sample cell was filled with a gas mixture of CO, NO, and CO2 in fixed concentration proportions, the pure absorbance after interference correction was used for concentration inversion: the inversion concentration errors were 2.0% for CO2, 1.6% for CO, and 1.7% for NO. Both theory and experiment show that the proposed interference correction method for NDIR multi-component gas analysis is feasible.
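    The correction step the abstract describes amounts to solving a small linear system: each filter channel's absorbance is modelled as a linear mix of all gas concentrations, and inverting that mix recovers the per-gas signal. A sketch with an invented response matrix (the real coefficients come from the line-strength calculations described above):

```python
import numpy as np

# Hypothetical 4x4 response-coefficient matrix R[channel, gas]: the
# absorbance each gas (H2O, CO2, CO, NO) contributes per unit
# concentration on each filter channel.  Off-diagonal terms are the
# cross interference; the values below are invented for illustration.
R = np.array([
    [1.00, 0.08, 0.02, 0.01],   # H2O channel
    [0.05, 1.00, 0.10, 0.02],   # CO2 channel
    [0.03, 0.12, 1.00, 0.04],   # CO channel
    [0.02, 0.05, 0.06, 1.00],   # NO channel
])

def correct_interference(absorbances, R):
    # Solve R @ c = A for the gas concentrations c, removing the
    # cross-channel contributions in one step.  lstsq also covers the
    # over-determined case (more channels than gases).
    c, *_ = np.linalg.lstsq(R, absorbances, rcond=None)
    return c

true_conc = np.array([0.5, 2.0, 1.0, 0.3])
measured = R @ true_conc          # simulated raw channel absorbances
recovered = correct_interference(measured, R)
```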

  19. Combined target factor analysis and Bayesian soft-classification of interference-contaminated samples: forensic fire debris analysis.

    PubMed

    Williams, Mary R; Sigman, Michael E; Lewis, Jennifer; Pitan, Kelly McHugh

    2012-10-10

    A Bayesian soft classification method combined with target factor analysis (TFA) is described and tested for the analysis of fire debris data. The method relies on analysis of the average mass spectrum across the chromatographic profile (i.e., the total ion spectrum, TIS) from multiple samples taken from a single fire scene. A library of TIS from reference ignitable liquids with assigned ASTM classifications is used as the target factors in TFA. The class-conditional distributions of correlations between the target and predicted factors for each ASTM class are represented by kernel functions and analyzed by Bayesian decision theory. The soft classification approach assists in assessing the probability that ignitable liquid residue from a specific ASTM E1618 class is present in a set of samples from a single fire scene, even in the presence of unspecified background contributions from pyrolysis products. The method is demonstrated with sample data sets and then tested on laboratory-scale burn data and large-scale field test burns. The overall performance achieved in laboratory and field tests of the method is approximately 80% correct classification of fire debris samples. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
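    The decision step can be sketched as Bayes' rule over kernel-density estimates of class-conditional correlation values (a toy reconstruction with invented correlations and plain Gaussian kernels, not the authors' fitted densities):

```python
import numpy as np

def kde(samples, x, bw=0.05):
    # Gaussian kernel density estimate of `samples`, evaluated at `x`.
    samples = np.asarray(samples, dtype=float)[:, None]
    return np.exp(-0.5 * ((x - samples) / bw) ** 2).sum(axis=0) / (
        len(samples) * bw * np.sqrt(2.0 * np.pi))

def soft_classify(r, class_corrs, priors=None):
    # Posterior probability of each class given a TIS/target correlation
    # r, via Bayes' rule over the kernel-density class-conditionals.
    classes = sorted(class_corrs)
    priors = priors or {c: 1.0 / len(classes) for c in classes}
    like = np.array([kde(class_corrs[c], np.array([r]))[0] * priors[c]
                     for c in classes])
    return dict(zip(classes, like / like.sum()))

# Invented training correlations: samples containing gasoline residue
# correlate highly with the gasoline target factor; background-only
# ("none") samples correlate much less.
class_corrs = {"gasoline": [0.90, 0.93, 0.88, 0.95],
               "none":     [0.30, 0.45, 0.40, 0.35]}
post = soft_classify(0.91, class_corrs)
```

    The "soft" output is the full posterior rather than a hard label, which is what lets the method report a probability that a given ASTM class is present.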

  20. Forms of History Textbook Interference

    ERIC Educational Resources Information Center

    Blaauw, Jan

    2017-01-01

    History textbooks in contemporary democracies have often been exposed to censorship and other forms of interference. This article presents the idea of a classification of these forms as a novel way to contemplate the ambivalent relationship between democratic authority and historical instruction. The model primarily distinguishes official forms of…

  1. Global Optimization Ensemble Model for Classification Methods

    PubMed Central

    Anwar, Hina; Qamar, Usman; Muzaffar Qureshi, Abdul Wahab

    2014-01-01

    Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, each with its own advantages and drawbacks. Some basic issues affect the accuracy of a classifier in a supervised learning problem, such as the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data. These problems are the reason that no globally optimal method for classification exists, and no generalized improvement method can increase the accuracy of an arbitrary classifier while addressing all of the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. Experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models by 1% to 30%, depending upon algorithm complexity. PMID:24883382

  2. Laser interference effect evaluation method based on character of laser-spot and image feature

    NASA Astrophysics Data System (ADS)

    Tang, Jianfeng; Luo, Xiaolin; Wu, Lingxia

    2016-10-01

    Evaluating the laser interference effect on a CCD objectively and accurately has great research value. Starting from the change in an image's features before and after interference, and considering the influence of the laser-spot distribution on the degree to which image feature information is masked, a laser interference effect evaluation method based on laser-spot character and image features is proposed. The laser-spot distribution is characterized by the distance between the center of the laser spot and the center of the target. The change in global image features is captured by the change in the image's sparse coefficient matrix, obtained with an SSIM-inspired orthogonal matching pursuit (OMP) sparse coding algorithm. The change in local image features is captured by the change in the image's edge sharpness, derived from its gradient magnitude. Taken together, these measures evaluate the laser interference effect accurately. The laser interference experiments show that the proposed method is rational and feasible under different laser powers, and that it overcomes the inaccuracy caused by changes in laser-spot position, evaluating the laser interference effect objectively and accurately.

  3. A classification scheme for risk assessment methods.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stamp, Jason Edwin; Campbell, Philip LaRoche

    2004-08-01

    This report presents a classification scheme for risk assessment methods. This scheme, like all classification schemes, provides meaning by imposing a structure that identifies relationships. Our scheme is based on two orthogonal aspects: level of detail and approach. The resulting structure is a two-dimensional matrix, with three abstraction levels for the rows and three approaches for the columns; it is shown in Table 1 and explained in the body of the report. For each of the nine cells in the matrix we identify the method type by name and example. The matrix helps the user understand: (1) what to expect from a given method, (2) how it relates to other methods, and (3) how best to use it. Each cell represents a different arrangement of strengths and weaknesses, and those arrangements shift gradually as one moves through the table, with each cell optimal for a particular situation. The intention of this report is to enable informed use of the methods, so that the method chosen is optimal for the situation given. The matrix, with type names in the cells, is introduced in Table 2 on page 13 below. Unless otherwise stated, the word 'method' in this report refers to a 'risk assessment method'. The terms 'risk assessment' and 'risk management' are close enough that we do not attempt to distinguish them in this report. The remainder of this report is organized as follows. In Section 2 we provide context for

  4. An efficient ensemble learning method for gene microarray classification.

    PubMed

    Osareh, Alireza; Shadgar, Bita

    2013-01-01

    Gene microarray analysis and classification have demonstrated an effective way to diagnose diseases and cancers. However, it has also been revealed that basic classification techniques have intrinsic drawbacks in achieving accurate gene classification and cancer diagnosis. On the other hand, classifier ensembles have received increasing attention in various applications. Here, we address the gene classification issue using the RotBoost ensemble methodology. This method is a combination of the Rotation Forest and AdaBoost techniques, which preserves both desirable features of an ensemble architecture, that is, accuracy and diversity. To select a concise subset of informative genes, five different feature selection algorithms are considered. To assess the efficiency of RotBoost, other non-ensemble and ensemble techniques, including Decision Trees, Support Vector Machines, Rotation Forest, AdaBoost, and Bagging, are also deployed. Experimental results reveal that the combination of the fast correlation-based feature selection method with the ICA-based RotBoost ensemble is highly effective for gene classification. In fact, the proposed method can create ensemble classifiers which outperform not only the classifiers produced by conventional machine learning but also the classifiers generated by two widely used ensemble learning methods, that is, Bagging and AdaBoost.

  5. Thin-film thickness measurement method based on the reflection interference spectrum

    NASA Astrophysics Data System (ADS)

    Jiang, Li Na; Feng, Gao; Shu, Zhang

    2012-09-01

    A method is introduced to measure thin-film thickness, refractive index, and other optical constants. When a beam of white light shines on the surface of the sample film, the light reflected from the upper and lower surfaces of the film interferes, and the reflectivity of the film fluctuates with wavelength. The reflection interference spectrum is analyzed with software against a database, from which the thickness and refractive index of the thin film are determined.
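    One standard way to extract thickness from such a spectrum, not necessarily the exact algorithm of this paper, uses two adjacent interference maxima at normal incidence, ignoring any half-wave phase shifts at the interfaces:

```python
def thickness_from_maxima(lam1, lam2, n):
    """Film thickness from two adjacent reflection-interference maxima.

    At normal incidence the maxima satisfy 2*n*d = m*lam1 = (m+1)*lam2
    (lam1 > lam2), which eliminates the unknown order m:
        d = lam1 * lam2 / (2 * n * (lam1 - lam2))
    """
    lam1, lam2 = max(lam1, lam2), min(lam1, lam2)
    return lam1 * lam2 / (2.0 * n * (lam1 - lam2))

# A 1000 nm film with refractive index 1.5 (2*n*d = 3000 nm) puts
# adjacent maxima at 600 nm (m = 5) and 500 nm (m = 6):
d = thickness_from_maxima(600.0, 500.0, 1.5)   # -> 1000.0 nm
```

    A database-driven fit, as described in the abstract, generalizes this by matching the whole measured spectrum against modelled spectra over thickness and refractive index.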

  6. A method for testing the spectral transmittance of infrared smoke interference

    NASA Astrophysics Data System (ADS)

    Lei, Hao; Zhang, Yazhou; Wang, Guangping; Wu, Jingli

    2018-02-01

    Infrared smoke is mainly used on the battlefield for screening, blinding, deception, and recognition. Traditional screening smoke is mainly deployed over friendly positions, or between friendly and enemy positions, to reduce the reconnaissance capability of enemy observation posts. The passive interference capability of the smoke depends on its infrared extinction ability, and the infrared transmittance test is an objective and accurate representation of that ability. In this paper, a method for testing the spectral transmittance of infrared smoke interference is introduced, and the uncertainty of the measurement results is analyzed. The results show that the method can effectively obtain the spectral transmittance of infrared smoke, with a measurement uncertainty of 7.16%, and can provide test parameters to support smoke detection, smoke composition analysis, and screening effect evaluation.

  7. Compensation method of cloud infrared radiation interference based on a spinning projectile's attitude measurement

    NASA Astrophysics Data System (ADS)

    Xu, Miaomiao; Bu, Xiongzhu; Yu, Jing; He, Zilu

    2018-01-01

    Based on the study of Earth's infrared radiation and the requirement of anti-cloud-interference capability for a spinning projectile's infrared attitude measurement, a compensation method for cloud infrared radiation interference is proposed. First, a theoretical model of the infrared radiation interference is established by analyzing the generation mechanism and interference characteristics of cloud infrared radiation. Then, the influence of cloud infrared radiation on the attitude angle is calculated in two situations: with the projectile inside cloud, where the roll-angle error can reach ±20 deg, and with the projectile outside the cloud, where the projectile's attitude angle cannot be measured at all. Finally, a multisensor weighted fusion algorithm based on a trust-function method is proposed to reduce the influence of cloud infrared radiation. Semiphysical experiments show that, with the weighted fusion algorithm, the roll-angle error stays within ±0.5 deg in the presence of cloud infrared radiation interference. The proposed method improves roll-angle accuracy in attitude measurement by nearly a factor of four and addresses the low accuracy of infrared attitude measurement in the navigation and guidance field.
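    A trust-function weighted fusion of this general kind can be sketched as follows (a generic consistency-based weighting with invented numbers; the authors' actual trust function is not specified in the abstract):

```python
import numpy as np

def fuse(readings, eps=1e-6):
    # Weighted fusion of redundant sensor readings.  Trust is assigned
    # by mutual consistency: readings far from the group median (e.g. a
    # sensor staring at a cloud) receive small weights, so the fused
    # value follows the consistent majority.
    readings = np.asarray(readings, dtype=float)
    trust = 1.0 / (eps + np.abs(readings - np.median(readings)))
    weights = trust / trust.sum()
    return float(weights @ readings)

# Three roll-angle estimates (deg); the third is corrupted by cloud radiance.
fused = fuse([10.0, 10.2, 25.0])
```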

  8. Conditions for quantum interference in cognitive sciences.

    PubMed

    Yukalov, Vyacheslav I; Sornette, Didier

    2014-01-01

    We present a general classification of the conditions under which cognitive science, concerned, e.g. with decision making, requires the use of quantum theoretical notions. The analysis is done in the frame of the mathematical approach based on the theory of quantum measurements. We stress that quantum effects in cognition can arise only when decisions are made under uncertainty. Conditions for the appearance of quantum interference in cognitive sciences and the conditions when interference cannot arise are formulated. Copyright © 2013 Cognitive Science Society, Inc.

  9. Comparing K-mer based methods for improved classification of 16S sequences.

    PubMed

    Vinje, Hilde; Liland, Kristian Hovde; Almøy, Trygve; Snipen, Lars

    2015-07-01

    The need for precise and stable taxonomic classification is highly relevant in modern microbiology. Parallel to the explosion in the amount of accessible sequence data, there has also been a shift in focus for classification methods. Previously, alignment-based methods were the most applicable tools; now, methods based on counting K-mers in sliding windows are the most interesting classification approach with respect to both speed and accuracy. Here, we present a systematic comparison of five different K-mer based classification methods for the 16S rRNA gene. The methods differ from each other in both data usage and modelling strategy. We based our study on the commonly known and well-used naïve Bayes classifier from the RDP project; four other methods were implemented and tested on two different data sets, on full-length sequences as well as on fragments of typical read length. The differences in classification error obtained by the methods were small, but they were stable across both data sets tested. The preprocessed nearest-neighbour (PLSNN) method performed best for full-length 16S rRNA sequences, significantly better than the naïve Bayes RDP method. On fragmented sequences the naïve Bayes multinomial method performed best, significantly better than all other methods. For both data sets explored, and on both full-length and fragmented sequences, all five methods reached an error plateau. We conclude that no K-mer based method is universally best for classifying both full-length sequences and fragments (reads). All methods approach an error plateau, indicating that improved training data are needed to improve classification from here. Classification errors occur most frequently for genera with few sequences present. For improving the taxonomy and testing new classification methods, a better, more universal, and more robust training data set is crucial.
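    The naïve Bayes multinomial approach that performed best on fragments can be sketched in a few lines: count K-mers per class, smooth, and score a query by summed log-probabilities (a toy illustration with invented sequences and k = 4, not the authors' implementation):

```python
from collections import Counter
import math

def kmers(seq, k=4):
    # All overlapping K-mers of a sequence.
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def train(seqs_by_class, k=4):
    # Multinomial naive Bayes over K-mer counts with Laplace smoothing.
    counts, vocab = {}, set()
    for cls, seqs in seqs_by_class.items():
        c = Counter()
        for s in seqs:
            c.update(kmers(s, k))
        counts[cls] = c
        vocab |= set(c)
    models = {}
    for cls, c in counts.items():
        total = sum(c.values()) + len(vocab) + 1  # +1 for the unseen bucket
        models[cls] = {w: math.log((c[w] + 1) / total) for w in vocab}
        models[cls]["__unseen__"] = math.log(1 / total)
    return models

def classify(seq, models, k=4):
    # Sum log-probabilities of the query's K-mers under each class model.
    scores = {cls: sum(m.get(w, m["__unseen__"]) for w in kmers(seq, k))
              for cls, m in models.items()}
    return max(scores, key=scores.get)
```

    Real 16S classifiers differ in K, smoothing, and training corpus, but this is the multinomial scoring scheme in miniature.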

  10. Differential development of retroactive and proactive interference during post-learning wakefulness.

    PubMed

    Brawn, Timothy P; Nusbaum, Howard C; Margoliash, Daniel

    2018-07-01

    Newly encoded, labile memories are prone to disruption during post-learning wakefulness. Here we examine the contributions of retroactive and proactive interference to daytime forgetting on an auditory classification task in a songbird. While both types of interference impair performance, they do not develop concurrently. The retroactive interference of task-B on task-A developed during the learning of task-B, whereas the proactive interference of task-A on task-B emerged during subsequent waking retention. These different time courses indicate an asymmetry in the emergence of retroactive and proactive interference and suggest a mechanistic framework for how different types of interference between new memories develop. © 2018 Brawn et al.; Published by Cold Spring Harbor Laboratory Press.

  11. [Classification of local anesthesia methods].

    PubMed

    Petricas, A Zh; Medvedev, D V; Olkhovskaya, E B

    The traditional classification of dental local anesthesia methods must be modified. In this paper we show that the vascular mechanism is the leading component of spongy injection. It is necessary to take into account the high effectiveness and relative safety of spongy anesthesia, as well as its versatility, ease of implementation, and growing prevalence in the world. The essence of the proposed modification is to distinguish diffusive methods (including surface anesthesia, infiltration, and conduction anesthesia) from vascular-diffusive methods (including intraosseous, intraligamentary, intraseptal, and intrapulpal anesthesia). For the last four methods the common term «spongy (intraosseous) anesthesia» may be used.

  12. The phase interrogation method for optical fiber sensor by analyzing the fork interference pattern

    NASA Astrophysics Data System (ADS)

    Lv, Riqing; Qiu, Liqiang; Hu, Haifeng; Meng, Lu; Zhang, Yong

    2018-02-01

    A phase interrogation method for optical fiber sensors is proposed, based on the fork interference pattern between an orbital angular momentum beam and a plane wave. The variation of the interference pattern with the phase difference between the two beams is investigated to realize phase interrogation. By employing principal component analysis, the features of the interference pattern can be extracted. An experimental system was designed to verify the theoretical analysis and the feasibility of the phase interrogation. In this work, a Mach-Zehnder interferometer was employed to convert the strain applied to the sensing fiber into a phase difference between the reference and measuring paths. The interrogation method is also applicable to measurements of other physical parameters that produce a phase delay in optical fiber. The performance of the system can be further improved by employing highly sensitive materials and fiber structures.
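    Feature extraction by principal component analysis on such patterns can be sketched as PCA via SVD on mean-centred, flattened patterns (a toy 1-D fringe example with an invented phase-encoded measurand, not the authors' setup):

```python
import numpy as np

def pca_features(patterns, n_components=2):
    # Project flattened interference patterns onto their top principal
    # components (PCA via SVD of the mean-centred data matrix).
    X = np.asarray(patterns, dtype=float).reshape(len(patterns), -1)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    feats = (X - mean) @ Vt[:n_components].T
    return feats, Vt[:n_components], mean

# Toy "fringe patterns": 1-D cosines whose phase encodes the measurand.
x = np.linspace(0.0, 4.0 * np.pi, 200)
patterns = [np.cos(x + phi) for phi in (0.0, 0.5, 1.0, 1.5, 2.0)]
feats, components, mean = pca_features(patterns, n_components=2)

# cos(x + phi) = cos(phi) cos(x) - sin(phi) sin(x): the data live in a
# 2-D subspace, so two components reconstruct the patterns exactly.
recon = feats @ components + mean
```

    In an interrogation system, the low-dimensional `feats` (rather than the raw pattern) would then be mapped to the phase difference, and from it to strain or another measurand.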

  13. Virtual Sensor of Surface Electromyography in a New Extensive Fault-Tolerant Classification System.

    PubMed

    de Moura, Karina de O A; Balbinot, Alexandre

    2018-05-01

    A few prosthetic control systems in the scientific literature use pattern recognition algorithms adapted to changes that occur in the myoelectric signal over time, and such systems are frequently neither natural nor intuitive. These are some of the several challenges facing myoelectric prostheses for everyday use. The concept of the virtual sensor, whose fundamental objective is to estimate unavailable measures from other available measures, is already used in other fields of research. Applied to surface electromyography, the virtual sensor technique can help minimize these problems, which are typically related to degradation of the myoelectric signal that usually decreases the classification accuracy of movements characterized by computational intelligent systems. This paper presents a virtual sensor in a new extensive fault-tolerant classification system that maintains classification accuracy after the occurrence of the following contaminants: ECG interference, electrode displacement, movement artifacts, power line interference, and saturation. The Time-Varying Autoregressive Moving Average (TVARMA) and Time-Varying Kalman filter (TVK) models are compared to define the most robust model for the virtual sensor. Movement classification results are presented comparing the usual classification techniques with the method of replacing the degraded signal and retraining the classifier. The experimental results were evaluated for these five noise types in 16 surface electromyography (sEMG) channel-degradation case studies. Without classifier retraining, the proposed system recovered mean classification accuracy by 4% to 38% for electrode displacement, movement artifacts, and saturation noise. The best mean classification across all signal contaminants and channel combinations evaluated was obtained with the retraining method, replacing the degraded channel by the virtual sensor TVARMA model. This method

  14. A Hyperspectral Image Classification Method Using ISOMAP and RVM

    NASA Astrophysics Data System (ADS)

    Chang, H.; Wang, T.; Fang, H.; Su, Y.

    2018-04-01

    Classification is one of the most significant applications of hyperspectral image processing and even remote sensing. Though various algorithms have been proposed to implement and improve this application, there are still drawbacks in traditional classification methods. Thus further investigation of aspects such as dimension reduction, data mining, and the rational use of spatial information is needed. In this paper, we used a widely utilized global manifold learning approach, isometric feature mapping (ISOMAP), to address the intrinsic nonlinearities of the hyperspectral image for dimension reduction. Considering the impropriety of Euclidean distance for spectral measurement, we substituted the spectral angle (SA) when constructing the neighbourhood graph. Then, the relevance vector machine (RVM) was introduced to implement classification instead of the support vector machine (SVM), for simplicity, generalization, and sparsity; a probability result can therefore be obtained rather than a less convincing binary result. Moreover, to take the spatial information of the hyperspectral image into account, we employ a spatial vector formed by the ratios of the different classes around each pixel. Finally, we combine the probability results and spatial factors with a criterion to decide the final classification result. To verify the proposed method, we implemented multiple experiments on standard hyperspectral images and compared against several other methods. The results and different evaluation indexes illustrate the effectiveness of our method.
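
    As a rough sketch of the dimension-reduction step described above, the following example builds the neighbourhood graph with the spectral angle in place of Euclidean distance and computes ISOMAP-style geodesic distances. The data are synthetic and the neighbourhood size k is an illustrative assumption, not taken from the paper; the classical-MDS embedding that would complete ISOMAP is omitted.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def spectral_angle(a, b):
    # spectral angle between two spectra, in radians
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def isomap_geodesics(X, k=5):
    # pairwise spectral-angle distances between all "pixels"
    n = len(X)
    D = np.array([[spectral_angle(X[i], X[j]) for j in range(n)] for i in range(n)])
    # keep only each point's k nearest neighbours (the neighbourhood graph);
    # np.inf marks absent edges in the dense graph representation
    G = np.full_like(D, np.inf)
    for i in range(n):
        nn = np.argsort(D[i])[1:k + 1]
        G[i, nn] = D[i, nn]
    # geodesic distances = shortest paths through the neighbourhood graph
    return shortest_path(G, method='D', directed=False)

rng = np.random.default_rng(0)
X = rng.random((30, 10))   # 30 synthetic pixels, 10 spectral bands
geo = isomap_geodesics(X)
print(geo.shape)           # (30, 30)
```

Embedding `geo` with classical MDS would give the low-dimensional coordinates passed on to the RVM classification stage.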

  15. Aerodynamic interference effects on tilting proprotor aircraft [using the Green's function method]

    NASA Technical Reports Server (NTRS)

    Soohoo, P.; Morino, L.; Noll, R. B.; Ham, N. D.

    1977-01-01

    The Green's function method was used to study tilting proprotor aircraft aerodynamics with particular application to the problem of the mutual interference of the wing-fuselage-tail-rotor wake configuration. While the formulation is valid for fully unsteady rotor aerodynamics, attention was directed to steady state aerodynamics, which was achieved by replacing the rotor with the actuator disk approximation. The use of an actuator disk analysis introduced a mathematical singularity into the formulation; this problem was studied and resolved. The pressure distribution, lift, and pitching moment were obtained for an XV-15 wing-fuselage-tail rotor configuration at various flight conditions. For the flight configurations explored, the effects of the rotor wake interference on the XV-15 tilt rotor aircraft yielded a reduction in the total lift and an increase in the nose-down pitching moment. This method provides an analytical capability that is simple to apply and can be used to investigate fuselage-tail rotor wake interference as well as to explore other rotor design problem areas.

  16. Advances in Classification Methods for Military Munitions Response

    DTIC Science & Technology

    2010-12-01

    Herb Nelson. Objective of the course: provide an update on the sensors, methods, and status of the classification of military munitions using advanced EMI sensors. Module: Electromagnetics (EM) fundamentals and parameter extraction (Stephen Billings), covering how EMI sensors work and what they measure.

  17. Herschel's Interference Demonstration.

    ERIC Educational Resources Information Center

    Perkalskis, Benjamin S.; Freeman, J. Reuben

    2000-01-01

    Describes Herschel's demonstration of interference arising from many coherent rays. Presents a method for students to reproduce this demonstration and obtain beautiful multiple-beam interference patterns. (CCM)

  18. Average Likelihood Methods of Classification of Code Division Multiple Access (CDMA)

    DTIC Science & Technology

    2016-05-01

    Modulation classification, as in the case of cognitive radio applications, is part of a broader problem known as blind or uncooperative demodulation. The report covers research objectives and modulation classification methods, including ad hoc approaches.

  19. Molecular cancer classification using a meta-sample-based regularized robust coding method.

    PubMed

    Wang, Shu-Lin; Sun, Liuchao; Fang, Jianwen

    2014-01-01

    Previous studies have demonstrated that machine-learning-based molecular cancer classification using gene expression profiling (GEP) data is promising for the clinical diagnosis and treatment of cancer. Novel classification methods with high efficiency and prediction accuracy are still needed to deal with the high dimensionality and small sample size of typical GEP data. Recently, the sparse representation (SR) method has been successfully applied to cancer classification. Nevertheless, its efficiency needs to be improved when analyzing large-scale GEP data. In this paper we present meta-sample-based regularized robust coding classification (MRRCC), a novel and effective cancer classification technique that combines the idea of the meta-sample-based clustering method with the regularized robust coding (RRC) method. It assumes that the coding residual and the coding coefficient are each independent and identically distributed. Similar to meta-sample-based SR classification (MSRC), MRRCC extracts a set of meta-samples from the training samples and then encodes a testing sample as a sparse linear combination of these meta-samples. The representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Extensive experiments on publicly available GEP datasets demonstrate that the proposed method is more efficient, while its prediction accuracy is equivalent to that of existing MSRC-based methods and better than that of other state-of-the-art dimension-reduction-based methods.
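
    The meta-sample idea can be sketched as follows: summarise each class's training matrix by a few singular vectors, encode a test sample over them, and pick the class with the smallest coding residual. This is a simplified l2-fidelity stand-in, not the full RRC; the synthetic data, class means, and meta-sample count are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def meta_samples(A, r=3):
    # left singular vectors act as "meta-samples" summarising one class
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, :r]

# synthetic GEP data: genes x samples, two classes with different mean profiles
A0 = rng.normal(0.0, 1.0, (50, 20))
A1 = rng.normal(2.0, 1.0, (50, 20))
M = [meta_samples(A0), meta_samples(A1)]

def classify(y):
    # encode the test sample over each class's meta-samples; choose the class
    # whose coding residual (the l2 fidelity term) is smallest
    residuals = []
    for Mi in M:
        c, *_ = np.linalg.lstsq(Mi, y, rcond=None)
        residuals.append(np.linalg.norm(y - Mi @ c))
    return int(np.argmin(residuals))

print(classify(rng.normal(2.0, 1.0, 50)))   # a class-1-like sample
```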

  20. Empirical evaluation of data normalization methods for molecular classification

    PubMed Central

    Huang, Huei-Chung

    2018-01-01

    Background Data artifacts due to variations in experimental handling are ubiquitous in microarray studies, and they can lead to biased and irreproducible findings. A popular approach to correct for such artifacts is through post hoc data adjustment such as data normalization. Statistical methods for data normalization have been developed and evaluated primarily for the discovery of individual molecular biomarkers. Their performance has rarely been studied for the development of multi-marker molecular classifiers, an increasingly important application of microarrays in the era of personalized medicine. Methods In this study, we set out to evaluate the performance of three commonly used methods for data normalization in the context of molecular classification, using extensive simulations based on re-sampling from a unique pair of microRNA microarray datasets for the same set of samples. The data and code for our simulations are freely available as R packages at GitHub. Results In the presence of confounding handling effects, all three normalization methods tended to improve the accuracy of the classifier when evaluated on independent test data. The level of improvement and the relative performance among the normalization methods depended on the relative level of molecular signal, the distributional pattern of handling effects (e.g., location shift vs scale change), and the statistical method used for building the classifier. In addition, cross-validation was associated with biased estimation of classification accuracy in the over-optimistic direction for all three normalization methods. Conclusion Normalization may improve the accuracy of molecular classification for data with confounding handling effects; however, it cannot circumvent the over-optimistic findings associated with cross-validation for assessing classification accuracy. PMID:29666754
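
    A location-shift handling effect and a simple per-array median normalization can be sketched as follows. The data are synthetic; the shift size, array dimensions, and the choice of median normalization are illustrative and not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# two "batches" of arrays measuring the same features; batch B carries a
# confounding location-shift handling effect
batch_a = rng.normal(0.0, 1.0, (10, 200))          # 10 arrays x 200 markers
batch_b = rng.normal(0.0, 1.0, (10, 200)) + 1.5    # shifted batch

def median_normalize(arrays):
    # subtract each array's own median so all arrays share a common centre
    return arrays - np.median(arrays, axis=1, keepdims=True)

a_n, b_n = median_normalize(batch_a), median_normalize(batch_b)
# after normalization the batch-level shift is gone
print(abs(np.median(a_n) - np.median(b_n)))
```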

  1. REM Sleep Enhancement of Probabilistic Classification Learning is Sensitive to Subsequent Interference

    PubMed Central

    Barsky, Murray M.; Tucker, Matthew A.; Stickgold, Robert

    2015-01-01

    During wakefulness the brain creates meaningful relationships between disparate stimuli in ways that escape conscious awareness. Processes active during sleep can strengthen these relationships, leading to more adaptive use of those stimuli when encountered during subsequent wake. Performance on the weather prediction task (WPT), a well-studied measure of implicit probabilistic learning, has been shown to improve significantly following a night of sleep, with stronger initial learning predicting more nocturnal REM sleep. We investigated this relationship further, studying the effect on WPT performance of a daytime nap containing REM sleep. We also added an interference condition after the nap/wake period as an additional probe of memory strength. Our results show that a nap significantly boosts WPT performance, and that this improvement is correlated with the amount of REM sleep obtained during the nap. When interference training is introduced following the nap, however, this REM-sleep benefit vanishes. In contrast, following an equal period of wake, performance is both unchanged from training and unaffected by interference training. Thus, while the true probabilistic relationships between WPT stimuli are strengthened by sleep, these changes are selectively susceptible to the destructive effects of retroactive interference, at least in the short term. PMID:25769506

  2. Method: automatic segmentation of mitochondria utilizing patch classification, contour pair classification, and automatically seeded level sets

    PubMed Central

    2012-01-01

    Background While progress has been made to develop automatic segmentation techniques for mitochondria, there remains a need for more accurate and robust techniques to delineate mitochondria in serial blockface scanning electron microscopic data. Previously developed texture based methods are limited for solving this problem because texture alone is often not sufficient to identify mitochondria. This paper presents a new three-step method, the Cytoseg process, for automated segmentation of mitochondria contained in 3D electron microscopic volumes generated through serial block face scanning electron microscopic imaging. The method consists of three steps. The first is a random forest patch classification step operating directly on 2D image patches. The second step consists of contour-pair classification. At the final step, we introduce a method to automatically seed a level set operation with output from previous steps. Results We report accuracy of the Cytoseg process on three types of tissue and compare it to a previous method based on Radon-Like Features. At step 1, we show that the patch classifier identifies mitochondria texture but creates many false positive pixels. At step 2, our contour processing step produces contours and then filters them with a second classification step, helping to improve overall accuracy. We show that our final level set operation, which is automatically seeded with output from previous steps, helps to smooth the results. Overall, our results show that use of contour pair classification and level set operations improve segmentation accuracy beyond patch classification alone. We show that the Cytoseg process performs well compared to another modern technique based on Radon-Like Features. Conclusions We demonstrated that texture based methods for mitochondria segmentation can be enhanced with multiple steps that form an image processing pipeline. While we used a random-forest based patch classifier to recognize texture, it would be
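
    The first step of the pipeline classifies 2D image patches directly. A minimal sketch of the patch-extraction stage is below; the image is synthetic, the 5 x 5 patch size is illustrative, and the random-forest classifier that would consume the patches is omitted.

```python
import numpy as np

def extract_patches(img, size=5):
    # slide a size x size window over the image; each flattened patch is one
    # feature vector for the patch classifier (step 1 of the pipeline)
    h, w = img.shape
    r = size // 2
    patches, centers = [], []
    for i in range(r, h - r):
        for j in range(r, w - r):
            patches.append(img[i - r:i + r + 1, j - r:j + r + 1].ravel())
            centers.append((i, j))
    return np.array(patches), centers

rng = np.random.default_rng(3)
img = rng.random((16, 16))
X, centers = extract_patches(img)
print(X.shape)   # (144, 25): 12 x 12 valid centres, 25 pixels per patch
```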

  3. Peculiarities of studying an isolated neuron by the method of laser interference microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yusipovich, Alexander I; Kazakova, Tatiana A; Erokhova, Liudmila A

    2006-09-30

    Actual aspects of using a new method of laser interference microscopy (LIM) for studying nerve cells are discussed. The peculiarities of the LIM display of neurons are demonstrated by the example of isolated neurons of a pond snail Lymnaea stagnalis. A comparative analysis of the images of the cell and subcellular structures of a neuron obtained by the methods of interference microscopy, optical transmission microscopy, and confocal microscopy is performed. Various aspects of the application of LIM for studying the lateral dimensions and internal structure of the cytoplasm and organelles of a neuron in cytology and cell physiology are discussed. (laser biology)

  4. Empirical evaluation of data normalization methods for molecular classification.

    PubMed

    Huang, Huei-Chung; Qin, Li-Xuan

    2018-01-01

    Data artifacts due to variations in experimental handling are ubiquitous in microarray studies, and they can lead to biased and irreproducible findings. A popular approach to correct for such artifacts is through post hoc data adjustment such as data normalization. Statistical methods for data normalization have been developed and evaluated primarily for the discovery of individual molecular biomarkers. Their performance has rarely been studied for the development of multi-marker molecular classifiers, an increasingly important application of microarrays in the era of personalized medicine. In this study, we set out to evaluate the performance of three commonly used methods for data normalization in the context of molecular classification, using extensive simulations based on re-sampling from a unique pair of microRNA microarray datasets for the same set of samples. The data and code for our simulations are freely available as R packages at GitHub. In the presence of confounding handling effects, all three normalization methods tended to improve the accuracy of the classifier when evaluated on independent test data. The level of improvement and the relative performance among the normalization methods depended on the relative level of molecular signal, the distributional pattern of handling effects (e.g., location shift vs scale change), and the statistical method used for building the classifier. In addition, cross-validation was associated with biased estimation of classification accuracy in the over-optimistic direction for all three normalization methods. Normalization may improve the accuracy of molecular classification for data with confounding handling effects; however, it cannot circumvent the over-optimistic findings associated with cross-validation for assessing classification accuracy.

  5. Objected-oriented remote sensing image classification method based on geographic ontology model

    NASA Astrophysics Data System (ADS)

    Chu, Z.; Liu, Z. J.; Gu, H. Y.

    2016-11-01

    Nowadays, with the development of high-resolution remote sensing imagery and the wide application of laser point cloud data, object-oriented remote sensing classification based on the characteristic knowledge of multi-source spatial data has become an important trend in remote sensing image classification, gradually replacing the traditional approach of improving algorithms to optimize classification results. This paper puts forward a remote sensing image classification method that uses the characteristic knowledge of multi-source spatial data to build a geographic ontology semantic network model, and carries out an object-oriented classification experiment on urban features. The experiment uses the Protégé software developed by Stanford University and the intelligent image analysis software eCognition as platforms, and takes hyperspectral imagery and LiDAR data acquired by flight over DaFeng City, JiangSu, as the main data sources. First, the hyperspectral imagery is used to obtain feature knowledge of the remote sensing image and related spectral indices; second, the LiDAR data are used to generate an nDSM (normalized Digital Surface Model) providing elevation information; finally, the image feature knowledge, spectral indices, and elevation information are combined to build the geographic ontology semantic network model that implements urban feature classification. The experimental results show that this method achieves significantly higher classification accuracy than traditional classification algorithms, especially for building classification. The method not only exploits the advantages of multi-source spatial data such as remote sensing imagery and LiDAR data, but also realizes multi-source spatial data knowledge integration and application

  6. A multiple-point spatially weighted k-NN method for object-based classification

    NASA Astrophysics Data System (ADS)

    Tang, Yunwei; Jing, Linhai; Li, Hui; Atkinson, Peter M.

    2016-10-01

    Object-based classification, commonly referred to as object-based image analysis (OBIA), is now commonly regarded as able to produce more appealing classification maps, often of greater accuracy, than pixel-based classification and its application is now widespread. Therefore, improvement of OBIA using spatial techniques is of great interest. In this paper, multiple-point statistics (MPS) is proposed for object-based classification enhancement in the form of a new multiple-point k-nearest neighbour (k-NN) classification method (MPk-NN). The proposed method first utilises a training image derived from a pre-classified map to characterise the spatial correlation between multiple points of land cover classes. The MPS borrows spatial structures from other parts of the training image, and then incorporates this spatial information, in the form of multiple-point probabilities, into the k-NN classifier. Two satellite sensor images with a fine spatial resolution were selected to evaluate the new method. One is an IKONOS image of the Beijing urban area and the other is a WorldView-2 image of the Wolong mountainous area, in China. The images were object-based classified using the MPk-NN method and several alternatives, including the k-NN, the geostatistically weighted k-NN, the Bayesian method, the decision tree classifier (DTC), and the support vector machine classifier (SVM). It was demonstrated that the new spatial weighting based on MPS can achieve greater classification accuracy relative to the alternatives and it is, thus, recommended as appropriate for object-based classification.
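
    The idea of weighting k-NN votes by spatial class probabilities can be sketched as follows. The data are synthetic; the helper name, the form of the spatial prior, and its values are hypothetical, and the multiple-point statistics that would supply the prior from a training image are omitted.

```python
import numpy as np

def spatially_weighted_knn(X_train, y_train, x, p_spatial, k=5):
    # p_spatial[c]: prior probability of class c at this location, e.g. the
    # multiple-point probability derived from a training image (hypothetical)
    d = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(d)[:k]
    classes = np.unique(y_train)
    votes = np.array([(y_train[nn] == c).sum() for c in classes], dtype=float)
    # combine spectral votes with the spatial prior
    score = votes / k * np.array([p_spatial[c] for c in classes])
    return classes[np.argmax(score)]

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (20, 3)), rng.normal(3, 1, (20, 3))])
y = np.array([0] * 20 + [1] * 20)
print(spatially_weighted_knn(X, y, np.array([3.0, 3.0, 3.0]), {0: 0.3, 1: 0.7}))
```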

  7. Couple Graph Based Label Propagation Method for Hyperspectral Remote Sensing Data Classification

    NASA Astrophysics Data System (ADS)

    Wang, X. P.; Hu, Y.; Chen, J.

    2018-04-01

    Graph-based semi-supervised classification methods are widely used for hyperspectral image classification. We present a couple-graph-based label propagation method, which combines an adjacency graph and a similarity graph. We propose to construct the similarity graph using the similarity probability, which exploits the label similarity among examples. The adjacency graph is built with a common manifold learning method, which has effectively improved the classification accuracy of hyperspectral data. The experiments indicate that the couple graph Laplacian, which unites the adjacency graph and the similarity graph, produces superior classification results in the label propagation framework compared with other manifold-learning-based and sparse-representation-based graph Laplacians.
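
    Label propagation itself can be sketched on a single graph as follows (a toy chain graph with two labelled endpoints; the paper's couple-graph construction is not reproduced): labelled nodes are clamped, and label mass diffuses along edges until convergence.

```python
import numpy as np

def propagate(W, labels, n_iter=100):
    # labels: -1 for unlabelled nodes; rows of F hold class scores
    n = len(labels)
    classes = sorted(set(labels) - {-1})
    F = np.zeros((n, len(classes)))
    for i, l in enumerate(labels):
        if l != -1:
            F[i, classes.index(l)] = 1.0
    P = W / W.sum(axis=1, keepdims=True)   # row-normalised transition matrix
    for _ in range(n_iter):
        F = P @ F
        for i, l in enumerate(labels):     # clamp the labelled examples
            if l != -1:
                F[i] = 0.0
                F[i, classes.index(l)] = 1.0
    return np.argmax(F, axis=1)

# chain graph 0-1-2-3-4 with the two ends labelled 0 and 1
W = np.eye(5, k=1) + np.eye(5, k=-1)
result = propagate(W, [0, -1, -1, -1, 1])
print(result)   # nodes inherit the label of the nearer labelled end
```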

  8. Classical reconstruction of interference patterns of position-wave-vector-entangled photon pairs by the time-reversal method

    NASA Astrophysics Data System (ADS)

    Ogawa, Kazuhisa; Kobayashi, Hirokazu; Tomita, Akihisa

    2018-02-01

    The quantum interference of entangled photons forms a key phenomenon underlying various quantum-optical technologies. It is known that the quantum interference patterns of entangled photon pairs can be reconstructed classically by the time-reversal method; however, the time-reversal method has been applied only to time-frequency-entangled two-photon systems in previous experiments. Here, we apply the time-reversal method to the position-wave-vector-entangled two-photon systems: the two-photon Young interferometer and the two-photon beam focusing system. We experimentally demonstrate that the time-reversed systems classically reconstruct the same interference patterns as the position-wave-vector-entangled two-photon systems.

  9. Multipath interference test method for distributed amplifiers

    NASA Astrophysics Data System (ADS)

    Okada, Takahiro; Aida, Kazuo

    2005-12-01

    A method for testing distributed amplifiers is presented; the multipath interference (MPI) is detected as a beat spectrum between the multipath signal and the direct signal using a binary frequency-shift keying (FSK) test signal. The lightwave source is a DFB-LD directly modulated by a pulse stream passed through an equalizer, emitting an FSK signal with a frequency deviation of about 430 MHz at a repetition rate of 80-100 kHz. The receiver consists of a photodiode and an electrical spectrum analyzer (ESA). The baseband power spectrum peak appearing at the FSK frequency deviation can be converted to an amount of MPI using a calibration chart. The test method improves the minimum detectable MPI to as low as -70 dB, compared to -50 dB for the conventional test method. The detailed design and performance of the proposed method are discussed, including the MPI simulator for the calibration procedure, computer simulations evaluating the error caused by the FSK repetition rate and the length of the fiber under test, and experiments on single-mode fibers and a distributed Raman amplifier.
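
    The detection principle, a weak delayed replica beating against the direct signal at the FSK frequency deviation, can be sketched numerically. All values below are illustrative, not from the paper, and the FSK modulation is reduced to a fixed carrier offset between the two paths.

```python
import numpy as np

# toy two-path field: a direct carrier plus a weak replica whose carrier is
# offset by the FSK frequency deviation (values illustrative)
fs, df = 10e9, 430e6                 # sample rate and FSK deviation, Hz
t = np.arange(4096) / fs
mpi = 1e-4                           # -40 dB multipath-to-signal power ratio
field = 1.0 + np.sqrt(mpi) * np.exp(2j * np.pi * df * t)
power = np.abs(field) ** 2           # photodiode output: the beat lives here

spec = np.abs(np.fft.rfft(power - power.mean()))
freqs = np.fft.rfftfreq(len(power), 1.0 / fs)
peak = freqs[np.argmax(spec)]
print(peak)                          # spectral peak near the 430 MHz deviation
```

The height of that beat peak scales with the square root of the MPI, which is what the calibration chart converts into an MPI figure.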

  10. Classification of Children Intelligence with Fuzzy Logic Method

    NASA Astrophysics Data System (ADS)

    Syahminan; ika Hidayati, Permata

    2018-04-01

    A child's type of intelligence is an important thing for parents to know early on. Typing can be done by grouping the dominant characteristics of each type of intelligence. To make it easier for parents to determine their child's type of intelligence and how to respond to it, a classification system was created that groups children's intelligence using the fuzzy logic method to determine the degree of each intelligence type. From the analysis we conclude that, with this fuzzy-logic-based classification system, determining a child's type of intelligence can be done more easily and yields more accurate conclusions than manual tests.
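
    A minimal sketch of fuzzy classification with triangular membership functions follows; the score scale, labels, and membership breakpoints are invented for illustration and are not from the paper.

```python
def tri(x, a, b, c):
    # triangular membership function rising from a, peaking at b, falling to c
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def intelligence_degree(score):
    # hypothetical score in [0, 100] mapped to fuzzy degrees of three labels
    return {
        "low": tri(score, -1, 0, 50),
        "medium": tri(score, 25, 50, 75),
        "high": tri(score, 50, 100, 101),
    }

degrees = intelligence_degree(60)
print(max(degrees, key=degrees.get))   # prints "medium"
```

Each child belongs to every label to some degree; the dominant label is simply the one with the highest membership.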

  11. Numerical simulation of transonic compressor under circumferential inlet distortion and rotor/stator interference using harmonic balance method

    NASA Astrophysics Data System (ADS)

    Wang, Ziwei; Jiang, Xiong; Chen, Ti; Hao, Yan; Qiu, Min

    2018-05-01

    Simulating the unsteady flow in a compressor under circumferential inlet distortion and rotor/stator interference would require a full-annulus grid with a dual-time method. This process is time consuming and needs a large amount of computational resources. The harmonic balance method simulates the unsteady flow in the compressor on a single-passage grid with a series of steady simulations, which greatly increases computational efficiency in comparison with the dual-time method. However, most simulations with the harmonic balance method are conducted on flow under either circumferential inlet distortion or rotor/stator interference alone. Based on an in-house CFD code, the harmonic balance method is applied here to simulate the flow in NASA Stage 35 under both circumferential inlet distortion and rotor/stator interference. As the unsteady flow is influenced by two different unsteady disturbances, the computation becomes unstable; this instability can be avoided by coupling the harmonic balance method with an optimizing algorithm. The computational result of the harmonic balance method is compared with the result of a full-annulus simulation, showing that the harmonic balance method simulates the flow under circumferential inlet distortion and rotor/stator interference as precisely as the full-annulus simulation, with a speed-up of about 8 times.
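
    The core of the harmonic balance method is a spectral time-derivative operator that couples the set of steady snapshots. A minimal sketch for a scalar periodic signal is below (the CFD coupling and the optimizing algorithm are omitted): the operator is exact for any signal resolved by the retained harmonics.

```python
import numpy as np

def hb_derivative(K, omega=1.0):
    # spectral time-derivative operator of the harmonic balance method:
    # transform N = 2K+1 time samples to harmonics, multiply each harmonic k
    # by i*k*omega, transform back -- collapsed into one real N x N matrix
    N = 2 * K + 1
    k = np.fft.fftfreq(N, 1.0 / N)          # harmonic indices 0, 1, ..., -1
    E = np.fft.fft(np.eye(N), axis=0)       # DFT matrix
    Einv = np.fft.ifft(np.eye(N), axis=0)   # inverse DFT matrix
    return (Einv @ np.diag(1j * k * omega) @ E).real

K = 3
N = 2 * K + 1
t = 2 * np.pi * np.arange(N) / N
D = hb_derivative(K)
print(np.allclose(D @ np.sin(t), np.cos(t)))   # prints True
```

Applying `D` to the vector of snapshot solutions replaces the physical time derivative, which is what lets the unsteady problem be solved as coupled steady ones.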

  12. Virtual Sensor of Surface Electromyography in a New Extensive Fault-Tolerant Classification System

    PubMed Central

    Balbinot, Alexandre

    2018-01-01

    Few prosthetic control systems in the scientific literature use pattern recognition algorithms that adapt to the changes that occur in the myoelectric signal over time, and, frequently, such systems are not natural and intuitive. These are some of the several challenges facing myoelectric prostheses for everyday use. The concept of the virtual sensor, whose fundamental objective is to estimate unavailable measures from other available measures, is already used in other fields of research. Applied to surface electromyography, the virtual sensor technique can help to minimize these problems, which are typically related to degradation of the myoelectric signal that usually decreases the classification accuracy of movements characterized by computational intelligence systems. This paper presents a virtual sensor in a new extensive fault-tolerant classification system designed to maintain classification accuracy after the occurrence of the following contaminants: ECG interference, electrode displacement, movement artifacts, power line interference, and saturation. The Time-Varying Autoregressive Moving Average (TVARMA) and Time-Varying Kalman filter (TVK) models are compared to define the more robust model for the virtual sensor. Movement classification results are presented comparing the usual classification techniques with the method of degraded-signal replacement and classifier retraining. The experimental results were evaluated for these five noise types in 16 surface electromyography (sEMG) channel degradation case studies. Without classifier retraining, the proposed system recovered between 4% and 38% of mean classification accuracy for electrode displacement, movement artifacts, and saturation noise. The best mean classification accuracy across all signal contaminants and channel combinations evaluated was obtained with the retraining method, replacing the degraded channel by the TVARMA virtual sensor model. This method
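
    The virtual-sensor idea of estimating an unavailable channel from the available ones can be sketched with a plain linear regression. This is a stand-in for the TVARMA/TVK models, not the paper's implementation; the signals and mixing weights are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)

# synthetic sEMG: the target channel is a mixture of three healthy channels
ch = rng.normal(size=(3, 2000))
target = 0.5 * ch[0] - 0.3 * ch[1] + 0.2 * ch[2] + 0.05 * rng.normal(size=2000)

# fit the virtual sensor on clean data: estimate the target channel from the
# remaining channels (a linear stand-in for the TVARMA/TVK models)
w, *_ = np.linalg.lstsq(ch.T, target, rcond=None)

# when the real channel is later degraded, substitute the virtual estimate
estimate = ch.T @ w
print(np.corrcoef(estimate, target)[0, 1])   # high correlation with the target
```

In the paper's system, a contaminant detector would trigger this substitution so the movement classifier keeps receiving a plausible channel.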

  13. An unbalanced spectra classification method based on entropy

    NASA Astrophysics Data System (ADS)

    Liu, Zhong-bao; Zhao, Wen-juan

    2017-05-01

    How to distinguish the minority spectra from the majority of the spectra is quite important in astronomy. In view of this, an unbalanced spectra classification method based on entropy (USCM) is proposed in this paper to deal with the unbalanced spectra classification problem. USCM greatly improves the performance of traditional classifiers in distinguishing the minority spectra, as it takes the data distribution into consideration in the process of classification. However, its time complexity is exponential in the training size, and therefore it can only deal with small- and medium-scale classification problems. How to solve the large-scale classification problem is quite important to USCM. It can be shown mathematically that the dual form of USCM is equivalent to the minimum enclosing ball (MEB) problem; the core vector machine (CVM) is therefore introduced, and USCM based on CVM is proposed to deal with the large-scale classification problem. Several comparative experiments on the 4 subclasses of K-type spectra, 3 subclasses of F-type spectra, and 3 subclasses of G-type spectra from the Sloan Digital Sky Survey (SDSS) verify that USCM and USCM based on CVM perform better than kNN (k-nearest neighbour) and SVM (support vector machine) in rare-spectra mining on small- and medium-scale datasets and on large-scale datasets, respectively.
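
    The entropy of the label distribution, which drops as the class imbalance grows, can be computed as follows. This is a generic Shannon entropy of the labels; the paper's exact entropy formulation may differ.

```python
import numpy as np

def class_entropy(labels):
    # Shannon entropy (bits) of the label distribution;
    # 1.0 for a balanced binary problem, lower as imbalance grows
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

balanced = [0] * 50 + [1] * 50
skewed = [0] * 95 + [1] * 5      # minority class: 5% of the sample
print(class_entropy(balanced), class_entropy(skewed))
```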

  14. Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System.

    PubMed

    Li, Hongqiang; Yuan, Danyang; Wang, Youxi; Cui, Dianyin; Cao, Lu

    2016-10-20

    Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias.
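
    The wavelet-threshold pre-processing step can be sketched with a one-level Haar transform and soft thresholding. This is a minimal stand-in for the paper's improved wavelet threshold method; the signal, noise level, and threshold are synthetic and illustrative.

```python
import numpy as np

def haar_denoise(x, thresh):
    # one-level Haar DWT (x must have even length), soft-threshold the detail
    # coefficients, then reconstruct
    a = (x[0::2] + x[1::2]) / np.sqrt(2)                   # approximation band
    d = (x[0::2] - x[1::2]) / np.sqrt(2)                   # detail band
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)   # soft threshold
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 512)
clean = np.sin(2 * np.pi * 5 * t)           # slow "ECG-like" component
noisy = clean + 0.2 * rng.normal(size=512)  # additive noise interference
denoised = haar_denoise(noisy, thresh=0.3)
print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))
```

The actual system would follow this with multi-domain feature extraction and the GA-tuned SVM classifier.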

  15. Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System

    PubMed Central

    Li, Hongqiang; Yuan, Danyang; Wang, Youxi; Cui, Dianyin; Cao, Lu

    2016-01-01

    Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias. PMID:27775596

  16. Neural network approaches versus statistical methods in classification of multisource remote sensing data

    NASA Technical Reports Server (NTRS)

    Benediktsson, Jon A.; Swain, Philip H.; Ersoy, Okan K.

    1990-01-01

    Neural network learning procedures and statistical classification methods are applied and compared empirically in the classification of multisource remote sensing and geographic data. Statistical multisource classification by means of a method based on Bayesian classification theory is also investigated and modified. The modifications permit control of the influence of the data sources involved in the classification process: reliability measures are introduced to rank the quality of the data sources, and the data sources are then weighted according to these rankings in the statistical multisource classification. Four data sources are used in experiments: Landsat MSS data and three forms of topographic data (elevation, slope, and aspect). Experimental results show that the two approaches have unique advantages and disadvantages in this classification application.
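
    The reliability-weighted combination of sources can be sketched as follows; the per-source class scores and reliability weights below are invented for illustration, not taken from the paper.

```python
import numpy as np

def weighted_consensus(log_likelihoods, reliabilities):
    # log_likelihoods: one (n_classes,) array per data source; reliabilities
    # scale each source's contribution to the joint decision, in the spirit of
    # the modified statistical multisource classifier
    total = sum(w * ll for w, ll in zip(reliabilities, log_likelihoods))
    return int(np.argmax(total))

# three sources scoring three classes (e.g. MSS, elevation, slope)
mss = np.log(np.array([0.6, 0.3, 0.1]))
elevation = np.log(np.array([0.2, 0.5, 0.3]))
slope = np.log(np.array([0.3, 0.4, 0.3]))
print(weighted_consensus([mss, elevation, slope], [1.0, 0.5, 0.5]))  # prints 0
```

Down-weighting the topographic sources lets the more reliable spectral source dominate the decision.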

  17. Classification of Liss IV Imagery Using Decision Tree Methods

    NASA Astrophysics Data System (ADS)

    Verma, Amit Kumar; Garg, P. K.; Prasad, K. S. Hari; Dadhwal, V. K.

    2016-06-01

Image classification is a compulsory step in any remote sensing research. Classification uses the spectral information represented by the digital numbers in one or more spectral bands and attempts to classify each individual pixel based on this spectral information. Crop classification is the main concern of remote sensing applications for developing a sustainable agriculture system. Vegetation indices computed from satellite images give a good indication of the presence of vegetation; they describe the greenness, density and health of vegetation. Texture is also an important characteristic used to identify objects or regions of interest in an image. This paper illustrates the use of the decision tree method to classify land into crop land and non-crop land and to classify different crops. We evaluate the possibility of crop classification using an integrated approach based on texture properties and different vegetation indices for single-date LISS IV sensor data with 5.8 m high spatial resolution. Eleven vegetation indices (NDVI, DVI, GEMI, GNDVI, MSAVI2, NDWI, NG, NR, NNIR, OSAVI and VI green) have been generated using the green, red and NIR bands, and the image is then classified using the decision tree method. The other approach integrates texture features (mean, variance, kurtosis and skewness) with these vegetation indices. A comparison between the two methods indicates that including textural features with vegetation indices can be effectively implemented to produce classified maps with 8.33% higher accuracy for Indian satellite IRS-P6, LISS IV sensor images.
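The two ingredients the abstract combines can be sketched as follows. This is a hypothetical illustration, not the paper's trained model: a vegetation index (NDVI) computed per pixel, and a hand-written set of threshold rules standing in for a learned decision tree over the index and a texture feature. The thresholds and class names are assumptions.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel."""
    return (nir - red) / (nir + red) if (nir + red) != 0 else 0.0

def classify_pixel(nir, red, texture_variance):
    """Threshold rules mimicking a decision tree over NDVI and a texture feature."""
    v = ndvi(nir, red)
    if v < 0.2:
        return "non-crop"          # bare soil, water, built-up
    if texture_variance > 50.0:
        return "orchard/mixed"     # vegetated but texturally rough
    return "crop"                  # smooth, strongly vegetated

print(classify_pixel(nir=0.6, red=0.1, texture_variance=10.0))
```

A real decision tree would learn these split points from training samples; the point here is only that texture adds a second axis the spectral index alone cannot provide.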

  18. Proactive AP Selection Method Considering the Radio Interference Environment

    NASA Astrophysics Data System (ADS)

    Taenaka, Yuzo; Kashihara, Shigeru; Tsukamoto, Kazuya; Yamaguchi, Suguru; Oie, Yuji

    In the near future, wireless local area networks (WLANs) will overlap to provide continuous coverage over a wide area. In such ubiquitous WLANs, a mobile node (MN) moving freely between multiple access points (APs) requires not only permanent access to the Internet but also continuous communication quality during handover. In order to satisfy these requirements, an MN needs to (1) select an AP with better performance and (2) execute a handover seamlessly. To satisfy requirement (2), we proposed a seamless handover method in a previous study. Moreover, in order to achieve (1), the Received Signal Strength Indicator (RSSI) is usually employed to measure wireless link quality in a WLAN system. However, in a real environment, especially if APs are densely situated, it is difficult to always select an AP with better performance based on only the RSSI. This is because the RSSI alone cannot detect the degradation of communication quality due to radio interference. Moreover, it is important that AP selection is completed only on an MN, because we can assume that, in ubiquitous WLANs, various organizations or operators will manage APs. Hence, we cannot modify the APs for AP selection. To overcome these difficulties, in the present paper, we propose and implement a proactive AP selection method considering wireless link condition based on the number of frame retransmissions in addition to the RSSI. In the evaluation, we show that the proposed AP selection method can appropriately select an AP with good wireless link quality, i.e., high RSSI and low radio interference.
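The selection criterion argued for above can be sketched as a scoring function. The names and weighting here are illustrative assumptions, not the paper's algorithm: candidate APs are ranked by combining RSSI with the frame retransmission rate as a proxy for radio interference, which RSSI alone cannot capture.

```python
def ap_score(rssi_dbm, retransmissions, frames_sent):
    """Higher is better: strong signal, few retransmitted frames."""
    retry_rate = retransmissions / frames_sent if frames_sent else 1.0
    return rssi_dbm - 100.0 * retry_rate  # penalize interference-heavy links

def select_ap(candidates):
    """candidates: dict of AP name -> (rssi_dbm, retransmissions, frames_sent)."""
    return max(candidates, key=lambda ap: ap_score(*candidates[ap]))

aps = {
    "ap1": (-40.0, 40, 100),  # strong RSSI but heavy interference
    "ap2": (-55.0, 2, 100),   # weaker RSSI, clean channel
}
print(select_ap(aps))
```

Under this scoring, the nearer but interference-heavy ap1 loses to the cleaner ap2, matching the paper's motivating scenario of densely situated APs.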

  19. Statistical methods and neural network approaches for classification of data from multiple sources

    NASA Technical Reports Server (NTRS)

    Benediktsson, Jon Atli; Swain, Philip H.

    1990-01-01

    Statistical methods for classification of data from multiple data sources are investigated and compared to neural network models. A problem with using conventional multivariate statistical approaches for classification of data of multiple types is in general that a multivariate distribution cannot be assumed for the classes in the data sources. Another common problem with statistical classification methods is that the data sources are not equally reliable. This means that the data sources need to be weighted according to their reliability but most statistical classification methods do not have a mechanism for this. This research focuses on statistical methods which can overcome these problems: a method of statistical multisource analysis and consensus theory. Reliability measures for weighting the data sources in these methods are suggested and investigated. Secondly, this research focuses on neural network models. The neural networks are distribution free since no prior knowledge of the statistical distribution of the data is needed. This is an obvious advantage over most statistical classification methods. The neural networks also automatically take care of the problem involving how much weight each data source should have. On the other hand, their training process is iterative and can take a very long time. Methods to speed up the training procedure are introduced and investigated. Experimental results of classification using both neural network models and statistical methods are given, and the approaches are compared based on these results.

  20. Integrating the ECG power-line interference removal methods with rule-based system.

    PubMed

    Kumaravel, N; Senthil, A; Sridhar, K S; Nithiyanandam, N

    1995-01-01

The power-line frequency interference in electrocardiographic signals is eliminated to enhance the signal characteristics for diagnosis. The power-line frequency normally varies +/- 1.5 Hz from its standard value of 50 Hz. In the present work, the performances of the linear FIR filter, Wave digital filter (WDF) and adaptive filter are studied for power-line frequency variations from 48.5 to 51.5 Hz in steps of 0.5 Hz. The advantage of the LMS adaptive filter over fixed-frequency filters in removing power-line interference, even when the interference frequency varies by +/- 1.5 Hz from its nominal value of 50 Hz, is clearly demonstrated. A novel method of integrating a rule-based system approach with the linear FIR filter, and also with the Wave digital filter, is proposed. The performances of the rule-based FIR filter and the rule-based Wave digital filter are compared with the LMS adaptive filter.
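The LMS adaptive canceller's advantage can be sketched as follows. This is a minimal illustration with assumed parameters (step size, amplitudes, phases), not the paper's filter: two reference taps, a sine and a cosine at the nominal mains frequency, adapt to track a power-line interferer whose amplitude and phase are unknown. For clarity the underlying ECG is taken as zero, so the cleaned output should decay toward zero as the weights converge.

```python
import math

fs, f0, mu = 1000.0, 50.0, 0.01    # sample rate (Hz), mains frequency, LMS step
w = [0.0, 0.0]                     # adaptive weights
errors = []
for n in range(4000):
    t = n / fs
    ref = [math.sin(2 * math.pi * f0 * t), math.cos(2 * math.pi * f0 * t)]
    interference = 0.8 * math.sin(2 * math.pi * f0 * t + 0.7)
    d = interference               # observed signal = ECG (here 0) + interference
    y = w[0] * ref[0] + w[1] * ref[1]  # filter's estimate of the interference
    e = d - y                      # cleaned output
    w = [w[i] + 2 * mu * e * ref[i] for i in range(2)]
    errors.append(e)

residual = sum(abs(e) for e in errors[-200:]) / 200  # near zero after convergence
```

Because the weights re-adapt continuously, the same loop tracks a mains frequency that drifts within the +/- 1.5 Hz band, which is what fixed-frequency FIR or wave digital notch filters cannot do.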

  1. A new interferential multispectral image compression algorithm based on adaptive classification and curve-fitting

    NASA Astrophysics Data System (ADS)

    Wang, Ke-Yan; Li, Yun-Song; Liu, Kai; Wu, Cheng-Ke

    2008-08-01

    A novel compression algorithm for interferential multispectral images based on adaptive classification and curve-fitting is proposed. The image is first partitioned adaptively into major-interference region and minor-interference region. Different approximating functions are then constructed for two kinds of regions respectively. For the major interference region, some typical interferential curves are selected to predict other curves. These typical curves are then processed by curve-fitting method. For the minor interference region, the data of each interferential curve are independently approximated. Finally the approximating errors of two regions are entropy coded. The experimental results show that, compared with JPEG2000, the proposed algorithm not only decreases the average output bit-rate by about 0.2 bit/pixel for lossless compression, but also improves the reconstructed images and reduces the spectral distortion greatly, especially at high bit-rate for lossy compression.
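The curve-fitting idea can be sketched numerically. The choice of a least-squares quadratic is an assumption for illustration (the paper constructs different approximating functions per region): each interferential curve is fitted with a low-order polynomial, so only the coefficients plus the small residuals need to be entropy coded.

```python
def fit_quadratic(ys):
    """Least-squares fit y ~ a + b*x + c*x^2 over x = 0..n-1 via normal equations."""
    n = len(ys)
    xs = list(range(n))
    s = [sum(x ** k for x in xs) for k in range(5)]          # power sums
    t = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    m = [[s[0], s[1], s[2], t[0]],
         [s[1], s[2], s[3], t[1]],
         [s[2], s[3], s[4], t[2]]]
    # Gauss-Jordan elimination on the 3x4 augmented system.
    for i in range(3):
        piv = m[i][i]
        m[i] = [v / piv for v in m[i]]
        for j in range(3):
            if j != i:
                m[j] = [vj - m[j][i] * vi for vj, vi in zip(m[j], m[i])]
    return m[0][3], m[1][3], m[2][3]

ys = [float(1 + 2 * x + 3 * x * x) for x in range(8)]  # exactly quadratic data
a, b, c = fit_quadratic(ys)
residuals = [y - (a + b * x + c * x * x) for x, y in enumerate(ys)]
```

For a well-approximated (major-interference) curve the residuals are tiny, which is where the compression gain over coding the raw samples comes from.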

  2. A new hierarchical method for inter-patient heartbeat classification using random projections and RR intervals

    PubMed Central

    2014-01-01

    Background The inter-patient classification schema and the Association for the Advancement of Medical Instrumentation (AAMI) standards are important to the construction and evaluation of automated heartbeat classification systems. The majority of previously proposed methods that take the above two aspects into consideration use the same features and classification method to classify different classes of heartbeats. The performance of the classification system is often unsatisfactory with respect to the ventricular ectopic beat (VEB) and supraventricular ectopic beat (SVEB). Methods Based on the different characteristics of VEB and SVEB, a novel hierarchical heartbeat classification system was constructed. This was done in order to improve the classification performance of these two classes of heartbeats by using different features and classification methods. First, random projection and support vector machine (SVM) ensemble were used to detect VEB. Then, the ratio of the RR interval was compared to a predetermined threshold to detect SVEB. The optimal parameters for the classification models were selected on the training set and used in the independent testing set to assess the final performance of the classification system. Meanwhile, the effect of different lead configurations on the classification results was evaluated. Results Results showed that the performance of this classification system was notably superior to that of other methods. The VEB detection sensitivity was 93.9% with a positive predictive value of 90.9%, and the SVEB detection sensitivity was 91.1% with a positive predictive value of 42.2%. In addition, this classification process was relatively fast. Conclusions A hierarchical heartbeat classification system was proposed based on the inter-patient data division to detect VEB and SVEB. It demonstrated better classification performance than existing methods. 
It can be regarded as a promising system for detecting VEB and SVEB of unknown patients in
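The second-layer rule described in the abstract can be sketched as a simple premature-beat test. The 0.85 threshold and 8-beat history window are illustrative assumptions, not the paper's tuned values: a beat whose RR interval is markedly shorter than the local running mean is flagged as a possible supraventricular ectopic beat (SVEB).

```python
def rr_ratio_flags(rr_intervals, threshold=0.85):
    """Return True for beats whose RR interval / running mean < threshold."""
    flags = []
    for i, rr in enumerate(rr_intervals):
        history = rr_intervals[max(0, i - 8):i] or [rr]
        mean_rr = sum(history) / len(history)
        flags.append(rr / mean_rr < threshold)
    return flags

rr = [0.80, 0.82, 0.81, 0.55, 0.79, 0.80]  # seconds; the 0.55 s beat is premature
print(rr_ratio_flags(rr))
```

In the full system this rule only runs on beats the first-layer SVM ensemble has not already labelled as ventricular ectopic, which is what makes the hierarchy effective.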

  3. New wideband radar target classification method based on neural learning and modified Euclidean metric

    NASA Astrophysics Data System (ADS)

    Jiang, Yicheng; Cheng, Ping; Ou, Yangkui

    2001-09-01

A new method for target classification of high-range-resolution radar is proposed. It uses neural learning to obtain invariant subclass features of training range profiles. A modified Euclidean metric based on the Box-Cox transformation technique is investigated for improving nearest-neighbour target classification. Classification experiments using real radar data of three different aircraft have demonstrated that the classification error can be reduced by 8% if the method proposed in this paper is chosen instead of the conventional method. The results of this paper show that, by choosing an optimized metric, it is indeed possible to reduce the classification error without increasing the number of samples.
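A Box-Cox-modified Euclidean metric can be sketched as follows. The fixed lambda here is an assumption (the paper optimizes the metric): each positive feature is passed through the Box-Cox transform before the Euclidean distance is taken, and classification is by nearest neighbour over labelled templates.

```python
import math

def box_cox(x, lam):
    """Box-Cox transform for x > 0; reduces to log(x) as lam -> 0."""
    return (x ** lam - 1.0) / lam if lam != 0 else math.log(x)

def bc_distance(u, v, lam=0.5):
    return math.sqrt(sum((box_cox(a, lam) - box_cox(b, lam)) ** 2
                         for a, b in zip(u, v)))

def nearest_neighbour(query, templates, lam=0.5):
    """templates: list of (label, feature_vector) with positive features."""
    return min(templates, key=lambda t: bc_distance(query, t[1], lam))[0]

templates = [("aircraft_A", [1.0, 4.0, 9.0]), ("aircraft_B", [9.0, 4.0, 1.0])]
print(nearest_neighbour([1.2, 3.8, 8.5], templates))
```

The transform compresses large feature values, so the metric is less dominated by a few high-amplitude range cells than the plain Euclidean distance.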

  4. A novel design measuring method based on linearly polarized laser interference

    NASA Astrophysics Data System (ADS)

    Cao, Yanbo; Ai, Hua; Zhao, Nan

    2013-09-01

The interferometric method is widely used in precision measurement, including measurement of the surface quality of large-aperture mirrors. Laser interference technology has developed rapidly as laser sources have become more mature and reliable. We adopted a laser diode as the source because of its short coherence length; the optical path difference of the system is as short as several wavelengths, and the power of a laser diode is sufficient for measurement while remaining safe to the human eye. A 673 nm linearly polarized laser was selected, and we constructed a novel form of interferometric system that we call a 'Closed Loop'. It comprises polarizing optical components, such as a polarizing prism and quartz wave plates, which split the light from the source into a measuring beam and a reference beam, both of which are reflected by the mirror under test. After the two beams are transformed into circular polarizations spinning in opposite directions, we introduce polarized-light synchronous phase-shift interference technology to obtain the detection fringes. This transfers the phase shifting from the time domain to the spatial domain, so we do not need a precisely controlled shift of the optical path difference, which would introduce disturbances from air currents and vibration. We obtained interference fringes from four well-aligned CCD cameras, with the fringes shifted into four phases of 0, π/2, π, and 3π/2. After obtaining the images from the CCD cameras, we align the interference fringes pixel to pixel across the different cameras and synthesize the rough morphology; after removing systematic error, we can calculate the surface accuracy of the mirror under test. This novel detection method can be applied to measuring optical system aberrations, and it could be developed into a portable structural interferometer for use in a variety of measurement settings.

  5. An AIS-Based E-mail Classification Method

    NASA Astrophysics Data System (ADS)

    Qing, Jinjian; Mao, Ruilong; Bie, Rongfang; Gao, Xiao-Zhi

This paper proposes a new e-mail classification method based on the Artificial Immune System (AIS), which is endowed with good diversity and self-adaptive ability through immune learning, immune memory, and immune recognition. In our method, the features of spam and non-spam extracted from the training sets are combined together, and the number of false positives (non-spam messages that are incorrectly classified as spam) can be reduced. The experimental results demonstrate that this method is effective in reducing the false-positive rate.

  6. Physical activity classification with dynamic discriminative methods.

    PubMed

    Ray, Evan L; Sasaki, Jeffer E; Freedson, Patty S; Staudenmayer, John

    2018-06-19

    A person's physical activity has important health implications, so it is important to be able to measure aspects of physical activity objectively. One approach to doing that is to use data from an accelerometer to classify physical activity according to activity type (e.g., lying down, sitting, standing, or walking) or intensity (e.g., sedentary, light, moderate, or vigorous). This can be formulated as a labeled classification problem, where the model relates a feature vector summarizing the accelerometer signal in a window of time to the activity type or intensity in that window. These data exhibit two key characteristics: (1) the activity classes in different time windows are not independent, and (2) the accelerometer features have moderately high dimension and follow complex distributions. Through a simulation study and applications to three datasets, we demonstrate that a model's classification performance is related to how it addresses these aspects of the data. Dynamic methods that account for temporal dependence achieve better performance than static methods that do not. Generative methods that explicitly model the distribution of the accelerometer signal features do not perform as well as methods that take a discriminative approach to establishing the relationship between the accelerometer signal and the activity class. Specifically, Conditional Random Fields consistently have better performance than commonly employed methods that ignore temporal dependence or attempt to model the accelerometer features. © 2018, The International Biometric Society.

  7. Resolving Semantic Interference during Word Production Requires Central Attention

    ERIC Educational Resources Information Center

    Kleinman, Daniel

    2013-01-01

    The semantic picture-word interference task has been used to diagnose how speakers resolve competition while selecting words for production. The attentional demands of this resolution process were assessed in 2 dual-task experiments (tone classification followed by picture naming). In Experiment 1, when pictures and distractor words were presented…

  8. LASER BIOLOGY: Peculiarities of studying an isolated neuron by the method of laser interference microscopy

    NASA Astrophysics Data System (ADS)

    Yusipovich, Alexander I.; Novikov, Sergey M.; Kazakova, Tatiana A.; Erokhova, Liudmila A.; Brazhe, Nadezda A.; Lazarev, Grigory L.; Maksimov, Georgy V.

    2006-09-01

    Actual aspects of using a new method of laser interference microscopy (LIM) for studying nerve cells are discussed. The peculiarities of the LIM display of neurons are demonstrated by the example of isolated neurons of a pond snail Lymnaea stagnalis. A comparative analysis of the images of the cell and subcellular structures of a neuron obtained by the methods of interference microscopy, optical transmission microscopy, and confocal microscopy is performed. Various aspects of the application of LIM for studying the lateral dimensions and internal structure of the cytoplasm and organelles of a neuron in cytology and cell physiology are discussed.

  9. A Two-Layer Method for Sedentary Behaviors Classification Using Smartphone and Bluetooth Beacons.

    PubMed

    Cerón, Jesús D; López, Diego M; Hofmann, Christian

    2017-01-01

Among the factors that shape the health of populations, a person's lifestyle is the most important one. This work focuses on the characterization and prevention of sedentary lifestyles. A sedentary behavior is defined as "any waking behavior characterized by an energy expenditure of 1.5 METs (Metabolic Equivalent) or less while in a sitting or reclining posture". The objective is to propose a method for sedentary behavior classification using a smartphone and Bluetooth beacons, considering different types of classification models: personal, hybrid, or impersonal. Following the CRISP-DM methodology, a method based on a two-layer approach for the classification of sedentary behaviors is proposed. Using data collected from a smartphone's accelerometer, gyroscope, and barometer, the first layer classifies between performing a sedentary behavior and not. The second layer classifies the specific sedentary activity performed using only the smartphone's accelerometer and barometer data, but adds indoor location data obtained from Bluetooth Low Energy (BLE) beacons. To improve the precision of the classification, both layers implemented the Random Forest algorithm and the personal model. This study presents the first available method for the automatic classification of specific sedentary behaviors. The layered classification approach has the potential to reduce the processing, memory, and energy consumption of the mobile devices and wearables used.
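The two-layer structure can be sketched as below. This is a hypothetical illustration: simple threshold and lookup rules stand in for the paper's trained Random Forest models, and all feature names, cut-offs, and beacon-to-room mappings are assumptions.

```python
def layer1_is_sedentary(accel_magnitude_var):
    """Layer 1: low accelerometer variance suggests a sitting/reclining posture."""
    return accel_magnitude_var < 0.05

def layer2_specific_behavior(nearest_beacon):
    """Layer 2: BLE beacon proximity localizes the specific sedentary activity."""
    room_to_behavior = {"desk": "working at desk",
                        "couch": "watching TV",
                        "dining": "eating"}
    return room_to_behavior.get(nearest_beacon, "other sedentary")

def classify(accel_magnitude_var, nearest_beacon):
    if not layer1_is_sedentary(accel_magnitude_var):
        return "non-sedentary"
    return layer2_specific_behavior(nearest_beacon)

print(classify(0.01, "couch"))
print(classify(0.40, "couch"))
```

The design point the layering illustrates: the cheap first layer gates the second, so location lookups and finer-grained classification only run when a sedentary bout is already detected.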

  10. A Real-Time Infrared Ultra-Spectral Signature Classification Method via Spatial Pyramid Matching

    PubMed Central

    Mei, Xiaoguang; Ma, Yong; Li, Chang; Fan, Fan; Huang, Jun; Ma, Jiayi

    2015-01-01

    The state-of-the-art ultra-spectral sensor technology brings new hope for high precision applications due to its high spectral resolution. However, it also comes with new challenges, such as the high data dimension and noise problems. In this paper, we propose a real-time method for infrared ultra-spectral signature classification via spatial pyramid matching (SPM), which includes two aspects. First, we introduce an infrared ultra-spectral signature similarity measure method via SPM, which is the foundation of the matching-based classification method. Second, we propose the classification method with reference spectral libraries, which utilizes the SPM-based similarity for the real-time infrared ultra-spectral signature classification with robustness performance. Specifically, instead of matching with each spectrum in the spectral library, our method is based on feature matching, which includes a feature library-generating phase. We calculate the SPM-based similarity between the feature of the spectrum and that of each spectrum of the reference feature library, then take the class index of the corresponding spectrum having the maximum similarity as the final result. Experimental comparisons on two publicly-available datasets demonstrate that the proposed method effectively improves the real-time classification performance and robustness to noise. PMID:26205263

  11. Millimetre Level Accuracy GNSS Positioning with the Blind Adaptive Beamforming Method in Interference Environments.

    PubMed

    Daneshmand, Saeed; Marathe, Thyagaraja; Lachapelle, Gérard

    2016-10-31

    The use of antenna arrays in Global Navigation Satellite System (GNSS) applications is gaining significant attention due to its superior capability to suppress both narrowband and wideband interference. However, the phase distortions resulting from array processing may limit the applicability of these methods for high precision applications using carrier phase based positioning techniques. This paper studies the phase distortions occurring with the adaptive blind beamforming method in which satellite angle of arrival (AoA) information is not employed in the optimization problem. To cater to non-stationary interference scenarios, the array weights of the adaptive beamformer are continuously updated. The effects of these continuous updates on the tracking parameters of a GNSS receiver are analyzed. The second part of this paper focuses on reducing the phase distortions during the blind beamforming process in order to allow the receiver to perform carrier phase based positioning by applying a constraint on the structure of the array configuration and by compensating the array uncertainties. Limitations of the previous methods are studied and a new method is proposed that keeps the simplicity of the blind beamformer structure and, at the same time, reduces tracking degradations while achieving millimetre level positioning accuracy in interference environments. To verify the applicability of the proposed method and analyze the degradations, array signals corresponding to the GPS L1 band are generated using a combination of hardware and software simulators. Furthermore, the amount of degradation and performance of the proposed method under different conditions are evaluated based on Monte Carlo simulations.

  12. Evaluation of air quality zone classification methods based on ambient air concentration exposure.

    PubMed

    Freeman, Brian; McBean, Ed; Gharabaghi, Bahram; Thé, Jesse

    2017-05-01

Air quality zones are used by regulatory authorities to implement ambient air standards in order to protect human health. Air quality measurements at discrete air monitoring stations are critical tools to determine whether an air quality zone complies with local air quality standards or is noncompliant. This study presents a novel approach for evaluation of air quality zone classification methods by breaking the concentration distribution of a pollutant measured at an air monitoring station into compliance and exceedance probability density functions (PDFs) and then using Monte Carlo analysis with the Central Limit Theorem to estimate long-term exposure. The purpose of this paper is to compare the risk associated with selecting one ambient air classification approach over another by testing the possible exposure an individual living within a zone may face. The chronic daily intake (CDI) is utilized to compare different pollutant exposures over the classification duration of 3 years between two classification methods. Historical data collected from air monitoring stations in Kuwait are used to build representative models of 1-hr NO2 and 8-hr O3 within a zone that meets the compliance requirements of each method. The first method, the "3 Strike" method, is a conservative approach based on a winner-take-all approach common with most compliance classification methods, while the second, the 99% Rule method, allows for more robust analyses and incorporates long-term trends. A Monte Carlo analysis is used to model the CDI for each pollutant and each method with the zone at a single station and with multiple stations. The model assumes that the zone is already in compliance with air quality standards over the 3 years under the different classification methodologies.
The model shows that while the CDI of the two methods differs by 2.7% over the exposure period for the single station case, the large number of samples taken over the duration period impacts the sensitivity
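The Monte Carlo exposure estimate described above can be sketched generically. All parameter values here (mixture weights, distribution parameters, intake formula inputs) are standard risk-assessment-style assumptions for illustration, not the paper's calibrated Kuwait inputs: concentrations are drawn from a mixture of a "compliance" and an "exceedance" distribution, then averaged over the 3-year classification period to form a chronic daily intake.

```python
import random

random.seed(42)

def draw_concentration(p_exceed=0.01):
    """Mixture model: mostly compliant levels, occasional exceedances (ppb)."""
    if random.random() < p_exceed:
        return random.gauss(120.0, 15.0)   # exceedance PDF
    return random.gauss(40.0, 10.0)        # compliance PDF

def simulate_cdi(n_days=3 * 365, inhalation_rate=20.0, body_weight=70.0):
    """CDI ~ mean concentration x inhalation rate / body weight."""
    mean_c = sum(draw_concentration() for _ in range(n_days)) / n_days
    return mean_c * inhalation_rate / body_weight

cdi = simulate_cdi()
```

Comparing two classification methods then amounts to running this simulation with the exceedance probability each method would permit and comparing the resulting CDI distributions.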

  13. New KF-PP-SVM classification method for EEG in brain-computer interfaces.

    PubMed

    Yang, Banghua; Han, Zhijun; Zan, Peng; Wang, Qian

    2014-01-01

Classification methods are a crucial direction in the current study of brain-computer interfaces (BCIs). To improve the classification accuracy for electroencephalogram (EEG) signals, a novel KF-PP-SVM (kernel Fisher, posterior probability, and support vector machine) classification method is developed. Its detailed process entails the use of common spatial patterns to obtain features, based on which the within-class scatter is calculated. The scatter is then added into the kernel function of a radial basis function to construct a new kernel function. This new kernel is integrated into the SVM to obtain a new classification model. Finally, the output of the SVM is calculated based on posterior probability and the final recognition result is obtained. To evaluate the effectiveness of the proposed KF-PP-SVM method, EEG data collected in the laboratory are processed with four different classification schemes (KF-PP-SVM, KF-SVM, PP-SVM, and SVM). The results showed that the overall average improvements arising from the use of the KF-PP-SVM scheme over the KF-SVM, PP-SVM and SVM schemes are 2.49%, 5.83% and 6.49%, respectively.

  14. A novel fruit shape classification method based on multi-scale analysis

    NASA Astrophysics Data System (ADS)

    Gui, Jiangsheng; Ying, Yibin; Rao, Xiuqin

    2005-11-01

Shape is one of the major concerns in automated inspection and sorting of fruits, and it remains a difficult problem. In this research, we propose the multi-scale energy distribution (MSED) for object shape description; the relationship between an object's shape and the energy distribution of its boundary at multiple scales is explored for shape extraction. MSED offers not only the main energy components, which represent primary shape information at the lower scales, but also subordinate energy components, which represent local shape information at higher differential scales. Thus, it provides a natural tool for multi-resolution representation and can be used as a feature for shape classification. We address the three main processing steps in MSED-based shape classification: 1) image preprocessing and citrus shape extraction, 2) shape resampling and shape feature normalization, and 3) energy decomposition by wavelet and classification by a BP neural network. Here, shape resampling extracts 256 boundary pixels from a curve that approximates the original boundary using a cubic spline, in order to obtain uniform raw data. A probability function was defined, and an effective method to select a start point was given through maximal expectation, which overcame the inconvenience of traditional methods and achieved rotation invariance. The experimental results on normal citrus and seriously abnormal fruit show a classification rate above 91.2%. The global correct classification rate is 89.77%, and our method is more effective than the traditional method. The global result can meet the requirements of fruit grading.

  15. Impact of missing data imputation methods on gene expression clustering and classification.

    PubMed

    de Souto, Marcilio C P; Jaskowiak, Pablo A; Costa, Ivan G

    2015-02-26

    Several missing value imputation methods for gene expression data have been proposed in the literature. In the past few years, researchers have been putting a great deal of effort into presenting systematic evaluations of the different imputation algorithms. Initially, most algorithms were assessed with an emphasis on the accuracy of the imputation, using metrics such as the root mean squared error. However, it has become clear that the success of the estimation of the expression value should be evaluated in more practical terms as well. One can consider, for example, the ability of the method to preserve the significant genes in the dataset, or its discriminative/predictive power for classification/clustering purposes. We performed a broad analysis of the impact of five well-known missing value imputation methods on three clustering and four classification methods, in the context of 12 cancer gene expression datasets. We employed a statistical framework, for the first time in this field, to assess whether different imputation methods improve the performance of the clustering/classification methods. Our results suggest that the imputation methods evaluated have a minor impact on the classification and downstream clustering analyses. Simple methods such as replacing the missing values by mean or the median values performed as well as more complex strategies. The datasets analyzed in this study are available at http://costalab.org/Imputation/ .
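The "simple methods" the study found competitive can be sketched in a few lines: replace each gene's missing values with the observed mean or median of that gene across samples. Representing missing entries as `None` is an implementation choice for this sketch.

```python
import statistics

def impute(row, method="mean"):
    """Fill missing entries (None) in one gene's expression row."""
    observed = [x for x in row if x is not None]
    fill = statistics.mean(observed) if method == "mean" else statistics.median(observed)
    return [fill if x is None else x for x in row]

expression = [2.0, None, 4.0, 6.0]
print(impute(expression, "mean"))    # fills the gap with 4.0
print(impute(expression, "median"))  # also 4.0 for this row
```

The study's finding is precisely that such per-gene summaries often match far more elaborate imputation strategies in their effect on downstream clustering and classification.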

  16. Calculation of two dimensional vortex/surface interference using panel methods

    NASA Technical Reports Server (NTRS)

    Maskew, B.

    1980-01-01

    The application of panel methods to the calculation of vortex/surface interference characteristics in two dimensional flow was studied over a range of situations starting with the simple case of a vortex above a plane and proceeding to the case of vortex separation from a prescribed point on a thick section. Low order and high order panel methods were examined, but the main factor influencing the accuracy of the solution was the distance between control stations in relation to the height of the vortex above the surface. Improvements over the basic solutions were demonstrated using a technique based on subpanels and an applied doublet distribution.

  17. Comparison of transect sampling and object-oriented image classification methods of urbanizing catchments

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Tenenbaum, D. E.

    2009-12-01

    The process of urbanization has major effects on both human and natural systems. In order to monitor these changes and better understand how urban ecological systems work, urban spatial structure and its variation need to be quantified first at a fine scale. Because the land-use and land-cover (LULC) in urbanizing areas is highly heterogeneous, the classification of urbanizing environments is among the most challenging tasks in remote sensing. Although a pixel-based method is a common way to do classification, the results are not good enough for many research objectives which require more accurate classification data at fine scales. Transect sampling and object-oriented classification methods are more appropriate for urbanizing areas. Tenenbaum used a transect sampling method, implemented with a computer-based facility within a widely available commercial GIS, in the Glyndon Catchment and the Upper Baismans Run Catchment, Baltimore, Maryland. It was a two-tiered classification system, including a primary level (which includes 7 classes) and a secondary level (which includes 37 categories), and statistical information on LULC was collected. W. Zhou applied an object-oriented method at the parcel level in Gwynn’s Falls Watershed, which includes the two previously mentioned catchments, and six classes were extracted. The two urbanizing catchments are located in greater Baltimore, Maryland and drain into Chesapeake Bay. In this research, the two different methods are compared for 6 classes (woody, herbaceous, water, ground, pavement and structure). The comparison method uses the segments in the transect method to extract LULC information from the results of the object-oriented method. Classification results were compared in order to evaluate the difference between the two methods. The overall proportions of LULC classes from the two studies show that there is overestimation of structures in the object-oriented method. For the other five classes, the results from the two methods are

  18. A minimum spanning forest based classification method for dedicated breast CT images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pike, Robert; Sechopoulos, Ioannis; Fei, Baowei, E-mail: bfei@emory.edu

    Purpose: To develop and test an automated algorithm to classify different types of tissue in dedicated breast CT images. Methods: Images of a single breast of five different patients were acquired with a dedicated breast CT clinical prototype. The breast CT images were processed by a multiscale bilateral filter to reduce noise while keeping edge information and were corrected to overcome cupping artifacts. As skin and glandular tissue have similar CT values on breast CT images, morphologic processing is used to identify the skin based on its position information. A support vector machine (SVM) is trained and the resulting model used to create a pixelwise classification map of fat and glandular tissue. By combining the results of the skin mask with the SVM results, the breast tissue is classified as skin, fat, and glandular tissue. This map is then used to identify markers for a minimum spanning forest that is grown to segment the image using spatial and intensity information. To evaluate the authors’ classification method, they use DICE overlap ratios to compare the results of the automated classification to those obtained by manual segmentation on five patient images. Results: Comparison between the automatic and the manual segmentation shows that the minimum spanning forest based classification method was able to successfully classify dedicated breast CT images with average DICE ratios of 96.9%, 89.8%, and 89.5% for fat, glandular, and skin tissue, respectively. Conclusions: A 2D minimum spanning forest based classification method was proposed and evaluated for classifying the fat, skin, and glandular tissue in dedicated breast CT images. The classification method can be used for dense breast tissue quantification, radiation dose assessment, and other applications in breast imaging.
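    The DICE overlap ratio used in the evaluation above is simple to reproduce. The following is a minimal sketch (not the authors' code), treating each tissue class as a set of pixel coordinates:

```python
def dice_overlap(auto_mask, manual_mask):
    """DICE overlap ratio between two segmentations, each given as a
    collection of pixel coordinates: 2*|A & B| / (|A| + |B|)."""
    a, b = set(auto_mask), set(manual_mask)
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

# Toy 'fat tissue' masks: automatic and manual agree on 3 of 4 pixels each.
auto_px   = [(0, 0), (0, 1), (1, 0), (1, 1)]
manual_px = [(0, 0), (0, 1), (1, 0), (2, 2)]
score = dice_overlap(auto_px, manual_px)  # 2*3/(4+4) = 0.75
```

    A ratio of 1.0 means the automatic and manual segmentations coincide exactly; the paper's 89-97% averages indicate close but not perfect agreement.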

  19. A Method of Spatial Mapping and Reclassification for High-Spatial-Resolution Remote Sensing Image Classification

    PubMed Central

    Wang, Guizhou; Liu, Jianbo; He, Guojin

    2013-01-01

    This paper presents a new classification method for high-spatial-resolution remote sensing images based on a strategic mechanism of spatial mapping and reclassification. The proposed method includes four steps. First, the multispectral image is classified by a traditional pixel-based classification method (support vector machine). Second, the panchromatic image is subdivided by watershed segmentation. Third, the pixel-based multispectral image classification result is mapped to the panchromatic segmentation result based on a spatial mapping mechanism and the area dominant principle. During the mapping process, an area proportion threshold is set, and the regional property is defined as unclassified if the maximum area proportion does not surpass the threshold. Finally, unclassified regions are reclassified based on spectral information using the minimum distance to mean algorithm. Experimental results show that the classification method for high-spatial-resolution remote sensing images based on the spatial mapping mechanism and reclassification strategy can make use of both panchromatic and multispectral information, integrate the pixel- and object-based classification methods, and improve classification accuracy. PMID:24453808
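    The spatial mapping step with the area-dominant principle can be sketched as follows; this is a hypothetical minimal implementation, not the authors' code, with made-up class names:

```python
from collections import Counter

def map_segment_labels(segment_ids, pixel_labels, threshold=0.5):
    """Map a pixel-based classification onto watershed segments by the
    area-dominant principle: a segment takes its majority pixel class
    unless that class's area proportion does not surpass the threshold,
    in which case the segment is left for reclassification."""
    by_segment = {}
    for seg, lab in zip(segment_ids, pixel_labels):
        by_segment.setdefault(seg, []).append(lab)
    mapped = {}
    for seg, labs in by_segment.items():
        label, count = Counter(labs).most_common(1)[0]
        mapped[seg] = label if count / len(labs) > threshold else "unclassified"
    return mapped

# Segment 1 is 75% water; segment 2 has no dominant class.
segs   = [1, 1, 1, 1, 2, 2, 2, 2]
pixels = ["water", "water", "water", "soil", "soil", "grass", "water", "tree"]
result = map_segment_labels(segs, pixels, threshold=0.5)
```

    Segments that come out "unclassified" here are the ones the paper's final stage reassigns by minimum distance to the class means in spectral space.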

  20. Research on Remote Sensing Geological Information Extraction Based on Object Oriented Classification

    NASA Astrophysics Data System (ADS)

    Gao, Hui

    2018-04-01

    Northern Tibet belongs to the sub-cold arid climate zone of the plateau. It is rarely visited, and geological working conditions are very poor; however, stratum exposures are good and human interference is minimal. Therefore, research on the automatic classification and extraction of remote sensing geological information there has typical significance and good application prospects. Based on object-oriented classification in northern Tibet, using Worldview2 high-resolution remote sensing data combined with tectonic information and image enhancement, the lithological spectral features, shape features, spatial locations, and topological relations of various geological information were mined. By setting thresholds within a hierarchical classification, eight kinds of geological information were classified and extracted. Comparison with existing geological maps shows that the overall accuracy reached 87.8561%, indicating that the object-oriented classification method is effective and feasible for this study area and provides a new idea for the automatic extraction of remote sensing geological information.

  1. Methods for assessing wall interference in the 2- by 2-foot adaptive-wall wind tunnel

    NASA Technical Reports Server (NTRS)

    Schairer, E. T.

    1986-01-01

    Discussed are two methods for assessing two-dimensional wall interference in the adaptive-wall test section of the NASA Ames 2 x 2-Foot Transonic Wind Tunnel: (1) a method for predicting free-air conditions near the walls of the test section (adaptive-wall method); and (2) a method for estimating wall-induced velocities near the model (correction method). Both methods are based on measurements of either one or two components of flow velocity near the walls of the test section. Each method is demonstrated using simulated wind tunnel data and is compared with other methods of the same type. The two-component adaptive-wall and correction methods were found preferable to the corresponding one-component methods because: (1) they are more sensitive to, and give a more complete description of, wall interference; (2) they require measurements at fewer locations; (3) they can be used to establish free-stream conditions; and (4) they are independent of a description of the model and constants of integration.

  2. Toward optimal feature and time segment selection by divergence method for EEG signals classification.

    PubMed

    Wang, Jie; Feng, Zuren; Lu, Na; Luo, Jing

    2018-06-01

    Feature selection plays an important role in EEG-based motor imagery pattern classification. It is a process that aims to select an optimal feature subset from the original set, with two significant advantages: lowering the computational burden so as to speed up the learning procedure, and removing redundant and irrelevant features so as to improve classification performance. Therefore, feature selection is widely employed in the classification of EEG signals in practical brain-computer interface systems. In this paper, we present a novel statistical model that selects the optimal feature subset based on the Kullback-Leibler divergence measure and automatically selects the optimal subject-specific time segment. The proposed method comprises four successive stages: broad frequency band filtering and common spatial pattern enhancement as preprocessing; feature extraction by autoregressive model and log-variance; Kullback-Leibler divergence based optimal feature and time segment selection; and linear discriminant analysis classification. More importantly, this paper provides a potential framework for combining other feature extraction models and classification algorithms with the proposed method for EEG signal classification. Experiments on single-trial EEG signals from two public competition datasets not only demonstrate that the proposed method is effective in selecting discriminative features and time segments, but also show that it yields relatively better classification results in comparison with other competitive methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
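    The divergence-based scoring idea can be illustrated with a small sketch. This is an assumption-laden simplification rather than the paper's implementation: each feature is modelled as a univariate Gaussian per class, and features are ranked by the symmetrised Kullback-Leibler divergence between the two class distributions.

```python
import math

def gauss_kl(m1, v1, m2, v2):
    """KL divergence D( N(m1,v1) || N(m2,v2) ) for univariate Gaussians."""
    return 0.5 * (math.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

def feature_scores(trials_a, trials_b):
    """Symmetrised KL divergence per feature between two classes of trials;
    higher scores mark more discriminative features."""
    scores = []
    for i in range(len(trials_a[0])):
        xa = [t[i] for t in trials_a]
        xb = [t[i] for t in trials_b]
        ma, mb = sum(xa) / len(xa), sum(xb) / len(xb)
        va = sum((x - ma) ** 2 for x in xa) / len(xa)
        vb = sum((x - mb) ** 2 for x in xb) / len(xb)
        scores.append(gauss_kl(ma, va, mb, vb) + gauss_kl(mb, vb, ma, va))
    return scores

# Feature 0 separates the two imagery classes; feature 1 is identical noise.
class_a = [[1.0, 0.3], [1.2, 0.5], [0.9, 0.4]]
class_b = [[3.0, 0.3], [3.1, 0.5], [2.8, 0.4]]
scores = feature_scores(class_a, class_b)
```

    Selecting the top-scoring features (and, analogously, the time segment whose features score highest) is the intuition behind the paper's selection stage.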

  3. A method of assigning socio-economic status classification to British Armed Forces personnel.

    PubMed

    Yoong, S Y; Miles, D; McKinney, P A; Smith, I J; Spencer, N J

    1999-10-01

    The objective of this paper was to develop and evaluate a socio-economic status classification method for British Armed Forces personnel. Two study groups comprising civilian and Armed Forces families were identified from livebirths delivered between 1 January and 30 June 1996 within the Northallerton Health District, which includes Catterick Garrison and RAF Leeming. The participants were the parents of babies delivered at a District General Hospital, comprising 436 civilian and 162 Armed Forces families. A new classification method was successfully used to assign the Registrar General's social classification to Armed Forces personnel. Comparison of the two study groups showed a significant difference in social class distribution (p = 0.0001). This study has devised a new method for classifying occupations within the Armed Forces into categories of social class, thus permitting comparison with the Registrar General's classification.

  4. Using methods from the data mining and machine learning literature for disease classification and prediction: A case study examining classification of heart failure sub-types

    PubMed Central

    Austin, Peter C.; Tu, Jack V.; Ho, Jennifer E.; Levy, Daniel; Lee, Douglas S.

    2014-01-01

    Objective Physicians classify patients into those with or without a specific disease. Furthermore, there is often interest in classifying patients according to disease etiology or subtype. Classification trees are frequently used to classify patients according to the presence or absence of a disease. However, classification trees can suffer from limited accuracy. In the data-mining and machine learning literature, alternate classification schemes have been developed. These include bootstrap aggregation (bagging), boosting, random forests, and support vector machines. Study design and Setting We compared the performance of these classification methods with those of conventional classification trees to classify patients with heart failure according to the following sub-types: heart failure with preserved ejection fraction (HFPEF) vs. heart failure with reduced ejection fraction (HFREF). We also compared the ability of these methods to predict the probability of the presence of HFPEF with that of conventional logistic regression. Results We found that modern, flexible tree-based methods from the data mining literature offer substantial improvement in prediction and classification of heart failure sub-type compared to conventional classification and regression trees. However, conventional logistic regression had superior performance for predicting the probability of the presence of HFPEF compared to the methods proposed in the data mining literature. Conclusion The use of tree-based methods offers superior performance over conventional classification and regression trees for predicting and classifying heart failure subtypes in a population-based sample of patients from Ontario. However, these methods do not offer substantial improvements over logistic regression for predicting the presence of HFPEF. PMID:23384592
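    Bootstrap aggregation (bagging), one of the ensemble schemes compared above, can be illustrated with a minimal sketch. This is not the study's implementation: depth-1 "stumps" stand in for full classification trees, and the toy data loosely mimic a single ejection-fraction-like cutoff separating two sub-types.

```python
import random

def stump_fit(X, y):
    """Best single-feature threshold split (a depth-1 classification tree)."""
    best = None
    for j in range(len(X[0])):
        for t in sorted(set(row[j] for row in X)):
            for left, right in ((0, 1), (1, 0)):
                pred = [left if row[j] <= t else right for row in X]
                err = sum(p != yi for p, yi in zip(pred, y))
                if best is None or err < best[0]:
                    best = (err, j, t, left, right)
    return best[1:]

def stump_predict(model, row):
    j, t, left, right = model
    return left if row[j] <= t else right

def bagging_fit(X, y, n_trees=25, seed=0):
    """Bagging: fit each stump on a bootstrap resample of the training set."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in X]
        models.append(stump_fit([X[i] for i in idx], [y[i] for i in idx]))
    return models

def bagging_predict(models, row):
    """Majority vote over the bootstrap-trained stumps."""
    votes = sum(stump_predict(m, row) for m in models)
    return 1 if votes * 2 >= len(models) else 0

# Toy data: class 1 iff feature 0 is large; feature 1 is irrelevant noise.
X = [[0.30, 1.0], [0.32, 0.5], [0.35, 2.0], [0.60, 1.5], [0.70, 0.2], [0.65, 1.0]]
y = [0, 0, 0, 1, 1, 1]
ensemble = bagging_fit(X, y)
```

    Boosting and random forests refine the same template: the former reweights the training data between rounds, the latter also subsamples candidate features at each split.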

  5. Estimation of Lithological Classification in Taipei Basin: A Bayesian Maximum Entropy Method

    NASA Astrophysics Data System (ADS)

    Wu, Meng-Ting; Lin, Yuan-Chien; Yu, Hwa-Lung

    2015-04-01

    In environmental and other scientific applications, a certain understanding of the geological lithological composition is required. Because of the restrictions of real conditions, only a limited amount of data can be acquired. To determine the lithological distribution in a study area, many spatial statistical methods are used to estimate the lithological composition at unsampled points or grids. This study applied the Bayesian Maximum Entropy (BME) method, an emerging method in geological spatiotemporal statistics. The BME method can identify the spatiotemporal correlation of the data and combine not only hard data but also soft data to improve estimation. Lithological classification data are discrete categorical data; therefore, this research applied categorical BME to establish a complete three-dimensional lithological estimation model. The limited hard data from cores, together with soft data generated from geological dating data and virtual wells, were applied to estimate the three-dimensional lithological classification in the Taipei Basin. Keywords: Categorical Bayesian Maximum Entropy method, Lithological Classification, Hydrogeological Setting

  6. Ensemble Methods for Classification of Physical Activities from Wrist Accelerometry.

    PubMed

    Chowdhury, Alok Kumar; Tjondronegoro, Dian; Chandran, Vinod; Trost, Stewart G

    2017-09-01

    To investigate whether ensemble learning algorithms improve physical activity recognition accuracy compared with single classifier algorithms, and to compare the classification accuracy achieved by three conventional ensemble machine learning methods (bagging, boosting, random forest) and a custom ensemble model comprising four algorithms commonly used for activity recognition (binary decision tree, k nearest neighbor, support vector machine, and neural network). The study used three independent data sets that included wrist-worn accelerometer data. For each data set, a four-step classification framework consisting of data preprocessing, feature extraction, normalization and feature selection, and classifier training and testing was implemented. For the custom ensemble, decisions from the single classifiers were aggregated using three decision fusion methods: weighted majority vote, naïve Bayes combination, and behavior knowledge space combination. Classifiers were cross-validated using leave-one-subject-out cross-validation and compared on the basis of average F1 scores. In all three data sets, ensemble learning methods consistently outperformed the individual classifiers. Among the conventional ensemble methods, random forest models provided consistently high activity recognition accuracy; however, the custom ensemble model using weighted majority voting demonstrated the highest classification accuracy in two of the three data sets. Combining multiple individual classifiers using conventional or custom ensemble learning methods can improve activity recognition accuracy from wrist-worn accelerometer data.
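    The weighted majority vote fusion named above has a very small core. A sketch (illustrative labels and weights, not the study's values):

```python
def weighted_majority_vote(predictions, weights):
    """Fuse single-classifier decisions: each classifier's vote counts with
    a weight (e.g., a validation F1 score); the label with the highest
    total weight wins."""
    totals = {}
    for label, w in zip(predictions, weights):
        totals[label] = totals.get(label, 0.0) + w
    return max(totals, key=totals.get)

# Four classifiers (tree, kNN, SVM, neural net) with performance-based weights:
preds   = ["walking", "running", "walking", "sitting"]
weights = [0.70, 0.90, 0.60, 0.80]
activity = weighted_majority_vote(preds, weights)
# "walking" wins: 0.70 + 0.60 = 1.30 beats 0.90 and 0.80
```

    Note that the strongest single classifier (weight 0.90) is outvoted by two weaker ones that agree, which is exactly the behaviour that lets an ensemble beat its best member.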

  7. Processing of Antenna-Array Signals on the Basis of the Interference Model Including a Rank-Deficient Correlation Matrix

    NASA Astrophysics Data System (ADS)

    Rodionov, A. A.; Turchin, V. I.

    2017-06-01

    We propose a new method of signal processing in antenna arrays, called Maximum-Likelihood Signal Classification. The proposed method is based on a model in which the interference includes a component with a rank-deficient correlation matrix. Using numerical simulation, we show that the proposed method yields a variance of the estimated arrival angle of the plane wave that is close to the Cramér-Rao lower bound, and is more efficient than the well-known MUSIC method. It is also shown that the proposed technique can be efficiently used for estimating the time dependence of the useful signal.

  8. Laser interference patterning methods: Possibilities for high-throughput fabrication of periodic surface patterns

    NASA Astrophysics Data System (ADS)

    Lasagni, Andrés Fabián

    2017-06-01

    Fabrication of two- and three-dimensional (2D and 3D) structures in the micro- and nano-range adds a new degree of freedom to the design of materials, allowing desired material properties to be tailored and thus a superior functionality to be obtained. Such complex designs are only possible using novel fabrication techniques with high resolution, even in the nanoscale range. Starting from a simple concept, transferring the shape of an interference pattern directly to the surface of a material, laser interferometric processing methods have been continuously developed. These methods enable the fabrication of repetitive periodic arrays and microstructures by irradiating the sample surface with coherent beams of light. This article describes the capabilities of laser interference lithographic methods for the treatment of both photoresists and solid materials. Theoretical calculations show the intensity distributions that can be realized by changing the number of interfering laser beams and their polarization, intensity, and phase. Finally, different processing systems and configurations are described, demonstrating the possibility of fast and precise tailoring of material surface microstructures and topographies on industrially relevant scales, as well as several application cases for both methods.
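    For the simplest two-beam case, the fringe intensity and line spacing follow from the standard expressions. A sketch with illustrative numbers (the wavelength and angle below are assumptions, not values from the article):

```python
import math

def two_beam_pattern(x, wavelength, half_angle, i0=1.0):
    """Two coherent plane waves of equal intensity i0, incident at
    +/- half_angle from the surface normal, interfere with spatial period
    period = wavelength / (2 * sin(half_angle)) and fringe intensity
    I(x) = 2 * i0 * (1 + cos(2*pi*x / period))."""
    period = wavelength / (2.0 * math.sin(half_angle))
    return 2.0 * i0 * (1.0 + math.cos(2.0 * math.pi * x / period)), period

# Example: 266 nm beams at a 10 degree half-angle give a line spacing of
# roughly 0.77 micrometres; intensity peaks at 4*i0 and drops to zero.
i_max, period = two_beam_pattern(0.0, 266e-9, math.radians(10))
i_min, _ = two_beam_pattern(period / 2.0, 266e-9, math.radians(10))
```

    Increasing the half-angle shrinks the period toward the diffraction limit of about half the wavelength, which is why interference patterning reaches sub-micrometre features; adding more beams or changing polarisation and phase yields the richer 2D/3D intensity maps the article discusses.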

  9. Pedigree data analysis with crossover interference.

    PubMed Central

    Browning, Sharon

    2003-01-01

    We propose a new method for calculating probabilities for pedigree genetic data that incorporates crossover interference using the chi-square models. Applications include relationship inference, genetic map construction, and linkage analysis. The method is based on importance sampling of unobserved inheritance patterns conditional on the observed genotype data and takes advantage of fast algorithms for no-interference models while using reweighting to allow for interference. We show that the method is effective for arbitrarily many markers with small pedigrees. PMID:12930760
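    The reweighting idea (sample under a fast no-interference proposal, then reweight toward the interference model) can be sketched generically; the toy distributions below merely stand in for the chi-square model likelihoods, which are far more involved in practice:

```python
import random

def importance_estimate(h, target_pmf, proposal_pmf, draw, n=20000, seed=1):
    """Self-normalised importance sampling: estimate E_target[h(X)] by
    drawing X from a fast proposal (the no-interference model) and
    reweighting each draw by target_pmf(x) / proposal_pmf(x)."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        x = draw(rng)
        w = target_pmf(x) / proposal_pmf(x)
        num += w * h(x)
        den += w
    return num / den

# Toy stand-ins for the no-interference and interference models: simple
# distributions over a crossover count in {0, 1, 2, 3}.
proposal_p = [0.4, 0.3, 0.2, 0.1]   # easy to sample from
target_p   = [0.1, 0.2, 0.3, 0.4]   # the model we actually care about
est = importance_estimate(
    h=lambda k: k,
    target_pmf=lambda k: target_p[k],
    proposal_pmf=lambda k: proposal_p[k],
    draw=lambda rng: rng.choices(range(4), weights=proposal_p)[0],
)
# est approximates E_target[K] = 1*0.2 + 2*0.3 + 3*0.4 = 2.0
```

    The efficiency gain in the paper comes from exactly this division of labour: the fast peeling/Lander-Green machinery handles the no-interference proposal, and only the cheap weights involve the interference model.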

  10. Discriminant forest classification method and system

    DOEpatents

    Chen, Barry Y.; Hanley, William G.; Lemmond, Tracy D.; Hiller, Lawrence J.; Knapp, David A.; Mugge, Marshall J.

    2012-11-06

    A hybrid machine learning methodology and system for classification that combines classical random forest (RF) methodology with discriminant analysis (DA) techniques to provide enhanced classification capability. A DA technique which uses feature measurements of an object to predict its class membership, such as linear discriminant analysis (LDA) or Andersen-Bahadur linear discriminant technique (AB), is used to split the data at each node in each of its classification trees to train and grow the trees and the forest. When training is finished, a set of n DA-based decision trees of a discriminant forest is produced for use in predicting the classification of new samples of unknown class.
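    A discriminant-analysis node split of the kind described can be sketched for two classes of 2-D feature vectors. This is a minimal LDA split for illustration, not the patented system:

```python
def lda_split(class0, class1):
    """Fisher/LDA split at a tree node for two classes of 2-D samples:
    project onto w = Sw^-1 (mu1 - mu0), where Sw is the pooled
    within-class scatter, and threshold at the midpoint of the
    projected class means."""
    def mean(xs):
        return [sum(col) / len(xs) for col in zip(*xs)]
    def add_scatter(s, xs, m):
        for x in xs:
            d = [x[0] - m[0], x[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
    m0, m1 = mean(class0), mean(class1)
    sw = [[0.0, 0.0], [0.0, 0.0]]
    add_scatter(sw, class0, m0)
    add_scatter(sw, class1, m1)
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    dm = [m1[0] - m0[0], m1[1] - m0[1]]
    w = [inv[0][0] * dm[0] + inv[0][1] * dm[1],
         inv[1][0] * dm[0] + inv[1][1] * dm[1]]
    t = sum(w[i] * (m0[i] + m1[i]) / 2.0 for i in range(2))
    return w, t  # route x left if w[0]*x[0] + w[1]*x[1] <= t, else right

class0 = [[1.0, 2.0], [1.5, 1.8], [0.8, 2.2], [1.2, 2.1]]
class1 = [[3.0, 0.5], [3.4, 0.8], [2.9, 0.3], [3.2, 0.6]]
w, t = lda_split(class0, class1)
side = lambda x: 0 if w[0] * x[0] + w[1] * x[1] <= t else 1
```

    Replacing the axis-aligned threshold of a classical random-forest split with such an oblique, learned direction is what distinguishes the discriminant forest described above.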

  11. Comprehensive vulnerability assessment method for nodes considering anti-interference ability and influence

    NASA Astrophysics Data System (ADS)

    LUO, Jianchun; WANG, Yunyu; YANG, Jun; RAN, hong; PENG, Xiaodong; HUANG, Ming; FENG, Hao; LIU, Meijun

    2018-03-01

    The vulnerability assessment of power grids is of great significance in current research. Power systems face many kinds of uncertainty factors, and the disturbances they cause have become one of the main factors restricting the safe operation of the power grid. To address this problem, considering both the anti-interference ability of the system when it is disturbed and the effect on the system when a node is out of operation, a set of indices reflecting the anti-interference ability and the influence of nodes is established. On this basis, a new comprehensive vulnerability assessment method for nodes is put forward, using super-efficiency data envelopment analysis for scientific integration of the indices. Finally, simulation results on the IEEE 30-bus system indicate that the proposed model is rational and valid.

  12. Sparsity-Based Representation for Classification Algorithms and Comparison Results for Transient Acoustic Signals

    DTIC Science & Technology

    2016-05-01

    Addresses classification of transient acoustic signals under large but correlated noise and signal interference (i.e., low-rank interference); another contribution is the implementation of deep learning techniques. Keywords: sparse representation, low rank, deep learning. Report sections include: Classification of Acoustic Transients; Joint Sparse Representation with Low-Rank Interference; Simultaneous Group-and-Joint Sparse Representation. (POC: Tung-Duong Tran-Luu. Approved for public release; distribution unlimited.)

  13. A Comprehensive Study of Retinal Vessel Classification Methods in Fundus Images

    PubMed Central

    Miri, Maliheh; Amini, Zahra; Rabbani, Hossein; Kafieh, Raheleh

    2017-01-01

    Nowadays, it is evident that there is a relationship between changes in the retinal vessel structure and diseases such as diabetes, hypertension, stroke, and other cardiovascular diseases in adults, as well as retinopathy of prematurity in infants. Retinal fundus images provide non-invasive visualization of the retinal vessel structure. Applying image processing techniques to digital color fundus photographs and analyzing their vasculature is a reliable approach for early diagnosis of the aforementioned diseases. A reduction in the arteriolar–venular ratio of the retina is one of the primary signs of hypertension, diabetes, and cardiovascular diseases, and it can be calculated by analyzing fundus images. To achieve a precise measurement of this parameter and meaningful diagnostic results, accurate classification of arteries and veins is necessary. Classification of vessels in fundus images faces several challenges. In this paper, a comprehensive study of the proposed methods for classification of arteries and veins in fundus images is presented. Considering that these methods are evaluated on different datasets and use different evaluation criteria, a fair comparison of their performance is not possible; therefore, we evaluate the classification methods from a modeling perspective. This analysis reveals that most of the proposed approaches have focused on statistical and geometric models in the spatial domain, while transform-domain models have received less attention. This suggests the possibility of using transform models, especially data-adaptive ones, for modeling fundus images in future classification approaches. PMID:28553578

  14. HEp-2 cell image classification method based on very deep convolutional networks with small datasets

    NASA Astrophysics Data System (ADS)

    Lu, Mengchi; Gao, Long; Guo, Xifeng; Liu, Qiang; Yin, Jianping

    2017-07-01

    Classification of Human Epithelial-2 (HEp-2) cell image staining patterns is widely used to identify autoimmune diseases via the anti-nuclear antibody (ANA) test in the Indirect Immunofluorescence (IIF) protocol. Because the manual test is time consuming, subjective, and labor intensive, image-based Computer Aided Diagnosis (CAD) systems for HEp-2 cell classification are being developed. However, recently proposed methods mostly rely on manual feature extraction and achieve low accuracy. Besides, the available benchmark datasets are small in scale, which is not well suited to deep learning methods; this directly limits cell classification accuracy even after data augmentation. To address these issues, this paper presents a high-accuracy automatic HEp-2 cell classification method for small datasets, utilizing very deep convolutional networks (VGGNet). Specifically, the proposed method consists of three main phases, namely image preprocessing, feature extraction, and classification. Moreover, an improved VGGNet is presented to address the challenges of small-scale datasets. Experimental results over two benchmark datasets demonstrate that the proposed method achieves superior accuracy compared with existing methods.

  15. Evaluation of different distortion correction methods and interpolation techniques for an automated classification of celiac disease

    PubMed Central

    Gadermayr, M.; Liedlgruber, M.; Uhl, A.; Vécsei, A.

    2013-01-01

    Due to the optics used in endoscopes, a typical degradation observed in endoscopic images is barrel-type distortion. In this work we investigate the impact of methods used to correct such distortions on the classification accuracy in the context of automated celiac disease classification. For this purpose we compare various distortion correction methods and apply them to endoscopic images, which are subsequently classified. Since the interpolation used in such methods is also assumed to influence the resulting classification accuracies, we also investigate different interpolation methods and their impact on classification performance. In order to make solid statements about the benefit of distortion correction, we use various feature extraction methods to obtain features for the classification. Our experiments show that it is not possible to make a clear statement about the usefulness of distortion correction methods in the context of automated diagnosis of celiac disease. This is mainly because an eventual benefit of distortion correction depends highly on the feature extraction method used for the classification. PMID:23981585
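    Barrel distortion is commonly corrected with a radial polynomial model. A one-parameter sketch follows; the coefficient k1, the image centre, and the pixel values are illustrative, and real endoscope correction uses calibrated, more elaborate models:

```python
def correct_barrel(point, center, k1):
    """One-parameter radial distortion correction about the image centre:
    the corrected radius is r * (1 + k1 * r**2), with k1 > 0 pushing
    periphery pixels (squeezed inward by barrel distortion) back outward."""
    dx, dy = point[0] - center[0], point[1] - center[1]
    scale = 1.0 + k1 * (dx * dx + dy * dy)
    return (center[0] + dx * scale, center[1] + dy * scale)

center = (128.0, 128.0)
# dx = 100, r^2 = 10000, scale = 1.2: the point moves from x=228 to x=248.
corrected = correct_barrel((228.0, 128.0), center, k1=2e-5)
```

    Resampling a whole image under such a mapping lands on non-integer source coordinates, which is exactly where the interpolation choices (nearest neighbour, bilinear, bicubic) examined in the paper enter.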

  16. Evaluation of normalization methods for cDNA microarray data by k-NN classification

    PubMed Central

    Wu, Wei; Xing, Eric P; Myers, Connie; Mian, I Saira; Bissell, Mina J

    2005-01-01

    Background Non-biological factors give rise to unwanted variations in cDNA microarray data. There are many normalization methods designed to remove such variations. However, to date there have been few published systematic evaluations of these techniques for removing variations arising from dye biases in the context of downstream, higher-order analytical tasks such as classification. Results Ten location normalization methods that adjust spatial- and/or intensity-dependent dye biases, and three scale methods that adjust scale differences were applied, individually and in combination, to five distinct, published, cancer biology-related cDNA microarray data sets. Leave-one-out cross-validation (LOOCV) classification error was employed as the quantitative end-point for assessing the effectiveness of a normalization method. In particular, a known classifier, k-nearest neighbor (k-NN), was estimated from data normalized using a given technique, and the LOOCV error rate of the ensuing model was computed. We found that k-NN classifiers are sensitive to dye biases in the data. Using NONRM and GMEDIAN as baseline methods, our results show that single-bias-removal techniques which remove either spatial-dependent dye bias (referred later as spatial effect) or intensity-dependent dye bias (referred later as intensity effect) moderately reduce LOOCV classification errors; whereas double-bias-removal techniques which remove both spatial- and intensity effect reduce LOOCV classification errors even further. Of the 41 different strategies examined, three two-step processes, IGLOESS-SLFILTERW7, ISTSPLINE-SLLOESS and IGLOESS-SLLOESS, all of which removed intensity effect globally and spatial effect locally, appear to reduce LOOCV classification errors most consistently and effectively across all data sets. We also found that the investigated scale normalization methods do not reduce LOOCV classification error. 
Conclusion Using LOOCV error of k-NNs as the evaluation criterion, three

  18. Evaluation of AMOEBA: a spectral-spatial classification method

    USGS Publications Warehouse

    Jenson, Susan K.; Loveland, Thomas R.; Bryant, J.

    1982-01-01

    Multispectral remotely sensed images have been treated as arbitrary multivariate spectral data for purposes of clustering and classification. However, the spatial properties of image data can also be exploited. AMOEBA is a clustering and classification method based on a spatially derived model for image data. In an evaluation test, Landsat data were classified with both AMOEBA and a widely used spectral classifier. The test showed that irrigated crop types can be classified as accurately with the AMOEBA method as with the generally used spectral method ISOCLS; the AMOEBA method, however, requires less computer time.

  19. A thyroid nodule classification method based on TI-RADS

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Yang, Yang; Peng, Bo; Chen, Qin

    2017-07-01

    Thyroid Imaging Reporting and Data System (TI-RADS) is a valuable tool for differentiating benign from malignant thyroid nodules. In the clinic, doctors use TI-RADS classes to grade the degree of malignancy of a nodule, and as a classification standard it can guide the ultrasound examiner toward more accurate and reliable examination of thyroid nodules. In this paper, we aim to classify thyroid nodules with the help of TI-RADS. To this end, four ultrasound signs of thyroid nodules, i.e., cystic/solid composition, echo pattern, boundary features, and calcification, are extracted and converted into feature vectors. Then a semi-supervised fuzzy C-means ensemble (SS-FCME) model is applied to obtain the classification results. The experimental results demonstrate that the proposed method can help doctors diagnose thyroid nodules effectively.

  20. Application of Classification Methods for Forecasting Mid-Term Power Load Patterns

    NASA Astrophysics Data System (ADS)

    Piao, Minghao; Lee, Heon Gyu; Park, Jin Hyoung; Ryu, Keun Ho

    An automated methodology based on data mining techniques is presented for the prediction of customer load patterns in long-duration load profiles. The proposed approach consists of three stages: (i) data preprocessing: noise and outliers are removed and continuous attribute-valued features are transformed to discrete values; (ii) cluster analysis: k-means clustering is used to create load pattern classes and the representative load profile for each class; and (iii) classification: several supervised learning methods are evaluated in order to select a suitable prediction method. Following the proposed methodology, power load measured by an AMR (automatic meter reading) system, as well as customer indexes, were used as inputs for clustering, whose output was the set of representative load profiles (classes). To evaluate load-pattern forecasting, the classification methods were applied to a set of high-voltage customers of the Korean power system, with class labels derived from clustering and other features used as inputs to train the classifiers. Lastly, the results of our experiments are presented.
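    Stage (ii), k-means clustering of load profiles into classes with representative profiles, can be sketched as follows. The deterministic seeding and the two-reading toy "profiles" are simplifications for brevity, not the authors' implementation:

```python
def kmeans_profiles(profiles, k=2, iters=20):
    """Plain k-means over load profiles (lists of readings): returns a
    class label per profile and the representative (mean) profile of
    each class. Seeding uses the first k profiles for determinism."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    centers = [list(p) for p in profiles[:k]]
    labels = [0] * len(profiles)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: dist2(p, centers[c]))
                  for p in profiles]
        for c in range(k):
            members = [p for p, lab in zip(profiles, labels) if lab == c]
            if members:
                centers[c] = [sum(v) / len(members) for v in zip(*members)]
    return labels, centers

# Toy two-reading profiles: three night-heavy and three day-heavy customers.
loads = [[5.0, 1.0], [6.0, 2.0], [5.5, 1.5],
         [1.0, 7.0], [2.0, 6.0], [1.5, 6.5]]
labels, reps = kmeans_profiles(loads, k=2)
```

    The returned labels become the class targets for stage (iii), and the representative profiles (cluster means) are what the methodology reports per customer class.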

  1. Rapid method for protein quantitation by Bradford assay after elimination of the interference of polysorbate 80.

    PubMed

    Cheng, Yongfeng; Wei, Haiming; Sun, Rui; Tian, Zhigang; Zheng, Xiaodong

    2016-02-01

    The Bradford assay is one of the most common methods for measuring protein concentrations. However, some pharmaceutical excipients, such as detergents, interfere with the Bradford assay even at low concentrations. Protein precipitation can be used to overcome sample incompatibility with protein quantitation, but the protein recovery rate after acetone precipitation is only about 70%. In this study, we found that sucrose not only increased the rate of protein recovery after 1 h of acetone precipitation, but also did not interfere with the Bradford assay. We therefore developed a method for rapid protein quantitation in protein drugs even when they contain interfering substances. Copyright © 2015 Elsevier Inc. All rights reserved.
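
    The quantitation step of any Bradford workflow is a linear standard curve: fit absorbance against known standard concentrations, then back-calculate the unknown. The concentrations and absorbances below are illustrative numbers, not data from the paper.

```python
# Hypothetical Bradford standard curve: ordinary least-squares line fit,
# then inversion to read an unknown concentration off the curve.
def linfit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx        # slope, intercept

conc = [0.0, 0.2, 0.4, 0.6, 0.8]         # mg/mL BSA standards (illustrative)
a595 = [0.05, 0.25, 0.45, 0.65, 0.85]    # absorbance at 595 nm (illustrative)
slope, icept = linfit(conc, a595)
unknown = (0.55 - icept) / slope          # back-calculated concentration, mg/mL
```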

  2. Interference correction by extracting the information of interference dominant regions: Application to near-infrared spectra

    NASA Astrophysics Data System (ADS)

    Bi, Yiming; Tang, Liang; Shan, Peng; Xie, Qiong; Hu, Yong; Peng, Silong; Tan, Jie; Li, Changwen

    2014-08-01

    Interference such as baseline drift and light scattering can degrade model predictability in multivariate analysis of near-infrared (NIR) spectra. Such interference can usually be represented by an additive and a multiplicative factor. To eliminate these interferences, correction parameters need to be estimated from the spectra. However, the spectra are often a mixture of physical light-scattering effects and chemical light-absorbance effects, which makes parameter estimation difficult. Herein, a novel algorithm was proposed to automatically find a spectral region in which the chemical absorbance of interest and the noise are both low, that is, an interference dominant region (IDR). Based on the definition of the IDR, a two-step method was proposed to find the optimal IDR and the corresponding correction parameters estimated from it. Finally, the correction was applied to the full spectral range using the previously obtained parameters for the calibration set and test set, respectively. The method can be applied to multi-target systems, with one IDR suitable for all targeted analytes. Tested on two benchmark data sets of near-infrared spectra, the proposed method provided considerable improvement over full-spectrum estimation methods and was comparable with other state-of-the-art methods.
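
    A minimal sketch of the additive/multiplicative correction, assuming the IDR is already known (finding the optimal IDR is the paper's contribution and is not reproduced here). Within the IDR the chemical signal is negligible, so regressing the measured spectrum on a reference spectrum there estimates the offset a and gain b; the whole spectrum is then corrected as (x - a) / b. The spectra and IDR indices are toy values.

```python
# Estimate additive (a) and multiplicative (b) parameters from the IDR only,
# then correct the full spectrum with them.
def fit_ab(x, ref, idr):
    xs = [ref[i] for i in idr]
    ys = [x[i] for i in idr]
    n = len(idr)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((u - mx) * (v - my) for u, v in zip(xs, ys)) / \
        sum((u - mx) ** 2 for u in xs)
    return my - b * mx, b                  # additive a, multiplicative b

ref = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]       # reference (e.g. mean) spectrum
true = [1.0, 2.0, 3.0, 9.0, 5.0, 6.0]      # analyte band at index 3
meas = [0.5 + 2.0 * v for v in true]       # distorted by a=0.5, b=2.0
a, b = fit_ab(meas, ref, idr=[0, 1, 2, 4, 5])   # IDR excludes the analyte band
corrected = [(v - a) / b for v in meas]
```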

  3. ARSENIC DETERMINATION BY THE SILVER DIETHYLDITHIOCARBAMATE METHOD AND THE ELIMINATION OF METAL ION INTERFERENCE

    EPA Science Inventory

    The interference of metals with the determination of arsenic by the silver diethyldithiocarbamate (SDDC) Method was investigated. Low recoveries of arsenic are obtained when cobalt, chromium, molybdenum, nitrate, nickel or phosphate are at concentrations of 7 mg/l or above (indiv...

  4. A Bayesian taxonomic classification method for 16S rRNA gene sequences with improved species-level accuracy.

    PubMed

    Gao, Xiang; Lin, Huaiying; Revanna, Kashi; Dong, Qunfeng

    2017-05-10

    Species-level classification of 16S rRNA gene sequences remains a serious challenge for microbiome researchers, because existing taxonomic classification tools for 16S rRNA gene sequences either do not provide species-level classification or produce unreliable results. The unreliable results are due to limitations of the existing methods, which either lack solid probabilistic criteria for evaluating the confidence of their taxonomic assignments, or use nucleotide k-mer frequency as a proxy for sequence similarity. We have developed a method that shows significantly improved species-level classification results over existing methods. Our method calculates true sequence similarity between query sequences and database hits using pairwise sequence alignment. Taxonomic classifications are assigned from the species to the phylum level based on the lowest common ancestors of multiple database hits for each query sequence, and classification reliability is further evaluated by bootstrap confidence scores. The novelty of our method is that the contribution of each database hit to the taxonomic assignment of the query sequence is weighted by a Bayesian posterior probability based upon the degree of sequence similarity of the database hit to the query sequence. Our method does not need any training datasets specific to different taxonomic groups. Instead, only a reference database is required for alignment with the query sequences, making our method easily applicable to different regions of the 16S rRNA gene or other phylogenetic marker genes. Reliable species-level classification for 16S rRNA or other phylogenetic marker genes is critical for microbiome research. Our software shows significantly higher classification accuracy than the existing tools, and we provide probabilistic confidence scores to evaluate the reliability of our taxonomic assignments based on multiple database matches to query sequences.
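
    The weighted-vote idea can be sketched as follows: each database hit contributes to the query's assignment with a weight that grows with its alignment similarity. The softmax-style weighting, the scale constant, and the toy hits are illustrative assumptions, not the paper's exact posterior formula.

```python
# Hedged sketch: similarity-weighted taxonomic voting over database hits.
import math
from collections import defaultdict

def assign_taxon(hits, scale=20.0):
    """hits: list of (taxon, pairwise identity in [0, 1]); returns best taxon
    and its normalized weight (a stand-in for a confidence score)."""
    weights = defaultdict(float)
    z = sum(math.exp(scale * s) for _, s in hits)
    for taxon, s in hits:
        weights[taxon] += math.exp(scale * s) / z
    best = max(weights, key=weights.get)
    return best, weights[best]

hits = [("E. coli", 0.99), ("E. coli", 0.98), ("S. enterica", 0.93)]
taxon, conf = assign_taxon(hits)
```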

  5. A new method of evaluating the side wall interference effect on airfoil angle of attack by suction from the side walls

    NASA Technical Reports Server (NTRS)

    Sawada, H.; Sakakibara, S.; Sato, M.; Kanda, H.; Karasawa, T.

    1984-01-01

    A quantitative method for evaluating the effect of suction through a suction plate on the side walls is explained. Wind tunnel tests show that the wall interference is essentially described by the sum of the wall interference for two-dimensional flow and the interference of the side walls.

  6. Novel method of detecting movement of the interference fringes using one-dimensional PSD.

    PubMed

    Wang, Qi; Xia, Ji; Liu, Xu; Zhao, Yong

    2015-06-02

    In this paper, a method of using a one-dimensional position-sensitive detector (PSD) in place of a charge-coupled device (CCD) to measure the movement of interference fringes is presented, and its feasibility is demonstrated with an experimental setup based on the principle of centroid detection. First, the centroid position of the interference fringes in a fiber Mach-Zehnder (M-Z) interferometer is derived in theory, showing that it offers higher resolution and sensitivity. According to the physical characteristics and principles of the PSD, a simulation of the interference fringes' phase difference in fiber M-Z interferometers and of the PSD output is carried out. Comparing the simulation results with the relationship between phase differences and centroid positions in fiber M-Z interferometers leads to the conclusion that the PSD output for the interference fringes is still the centroid position. Based on extensive measurements, the best resolution achieved by the system is 5.15625 μm. Finally, the detection system is evaluated through setup error analysis and an ultra-narrow-band filter structure. The filter structure is configured with a one-dimensional photonic crystal containing positive- and negative-refraction material, which can eliminate background light in the PSD detection experiment. This detection system has a simple structure, good stability, and high precision, and easily performs remote measurements, which makes it potentially useful in small-deformation tests of materials, refractivity measurements of optical media, and optical wavefront detection.
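
    The centroid principle behind the PSD readout can be sketched numerically: the intensity-weighted centroid of a fringe profile shifts as the interferometer phase changes. The cosine fringe model, pixel grid, and period below are illustrative assumptions.

```python
# Centroid of a simulated fringe intensity profile vs. interferometer phase.
import math

def centroid(intensity, positions):
    return sum(i * p for i, p in zip(intensity, positions)) / sum(intensity)

def fringe(positions, phase, period=10.0):
    # 1 + cos(...) keeps intensity nonnegative over a single fringe period.
    return [1.0 + math.cos(2 * math.pi * p / period - phase) for p in positions]

pos = [i * 0.1 for i in range(100)]            # 10 mm window, 0.1 mm pixels
c0 = centroid(fringe(pos, 0.0), pos)           # centroid at zero phase
c1 = centroid(fringe(pos, math.pi / 4), pos)   # centroid after a phase shift
```

A phase shift moves the centroid, which is what the 1-D PSD reads out directly.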

  7. Methods of classification for women undergoing induction of labour: a systematic review and novel classification system.

    PubMed

    Nippita, T A; Khambalia, A Z; Seeho, S K; Trevena, J A; Patterson, J A; Ford, J B; Morris, J M; Roberts, C L

    2015-09-01

    A lack of reproducible methods for classifying women having an induction of labour (IOL) has led to controversies regarding IOL and related maternal and perinatal health outcomes. To evaluate articles that classify IOL and to develop a novel IOL classification system. Electronic searches using CINAHL, EMBASE, WEB of KNOWLEDGE, and reference lists. Two reviewers independently assessed studies that classified women having an IOL. For the systematic review, data were extracted on study characteristics, quality, and results. Pre-specified criteria were used for evaluation. A multidisciplinary collaboration developed a new classification system using a clinically logical model and stakeholder feedback, demonstrating applicability in a population cohort of 909 702 maternities in New South Wales, Australia, over the period 2002-2011. All seven studies included in the systematic review categorised women according to the presence or absence of varying medical indications for IOL. Evaluation identified uncertainties or deficiencies across all studies, related to the criteria of total inclusivity, reproducibility, clinical utility, implementability, and data availability. A classification system of ten groups was developed based on parity, previous caesarean, gestational age, number, and presentation of the fetus. Nulliparous and parous women at full term were the largest groups (21.2 and 24.5%, respectively), and accounted for the highest proportion of all IOL (20.7 and 21.5%, respectively). Current methods of classifying women undertaking IOL based on medical indications are inadequate. We propose a classification system that has the attributes of simplicity and clarity, uses information that is readily and reliably collected, and enables the standard characterisation of populations of women having an IOL across and within jurisdictions. © 2015 Royal College of Obstetricians and Gynaecologists.

  8. An enhanced data visualization method for diesel engine malfunction classification using multi-sensor signals.

    PubMed

    Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan

    2015-10-21

    The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method t-distributed stochastic neighbor embedding (t-SNE) provides an effective way to visualize such complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization and should therefore be eliminated a priori. This paper proposes a feature-subset-score-based t-SNE (FSS-t-SNE) data visualization method to deal with high-dimensional data collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion, and the high-dimensional data are then visualized in two-dimensional space. On UCI benchmark datasets, FSS-t-SNE effectively improves classification accuracy. An experiment was performed with a large marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine.

  9. An Enhanced Data Visualization Method for Diesel Engine Malfunction Classification Using Multi-Sensor Signals

    PubMed Central

    Li, Yiqing; Wang, Yu; Zi, Yanyang; Zhang, Mingquan

    2015-01-01

    The various multi-sensor signal features from a diesel engine constitute a complex high-dimensional dataset. The non-linear dimensionality reduction method t-distributed stochastic neighbor embedding (t-SNE) provides an effective way to visualize such complex high-dimensional data. However, irrelevant features can deteriorate the performance of data visualization and should therefore be eliminated a priori. This paper proposes a feature-subset-score-based t-SNE (FSS-t-SNE) data visualization method to deal with high-dimensional data collected from multi-sensor signals. In this method, the optimal feature subset is constructed by a feature subset score criterion, and the high-dimensional data are then visualized in two-dimensional space. On UCI benchmark datasets, FSS-t-SNE effectively improves classification accuracy. An experiment was performed with a large marine diesel engine to validate the proposed method for diesel engine malfunction classification. Multi-sensor signals were collected by a cylinder vibration sensor and a cylinder pressure sensor. Compared with other conventional data visualization methods, the proposed method shows good visualization performance and high classification accuracy in multi-malfunction classification of a diesel engine. PMID:26506347

  10. [Galaxy/quasar classification based on nearest neighbor method].

    PubMed

    Li, Xiang-Ru; Lu, Yu; Zhou, Jian-Ming; Wang, Yong-Jun

    2011-09-01

    With the wide application of high-quality CCDs in celestial spectral imaging and the implementation of many large sky survey programs (e.g., the Sloan Digital Sky Survey (SDSS), the Two-degree-Field Galaxy Redshift Survey (2dF), the Spectroscopic Survey Telescope (SST), the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) program, and the Large Synoptic Survey Telescope (LSST) program), celestial observational data are pouring in like torrential rain. To utilize them effectively and fully, research on automated processing methods for celestial data is imperative. In the present work, we investigated how to recognize galaxies and quasars from spectra based on the nearest neighbor method. Galaxies and quasars are extragalactic objects; because they are far from Earth, their spectra are usually contaminated by various kinds of noise. Recognizing these two types of spectra is therefore a typical problem in automatic spectral classification. Furthermore, the nearest neighbor method is one of the most typical, classic, and mature algorithms in pattern recognition and data mining, and it is often used as a benchmark when developing novel algorithms. Regarding applicability in practice, it is shown that the recognition ratio of the nearest neighbor method (NN) is comparable to the best results reported in the literature based on more complicated methods, and the advantage of NN is that it does not need to be trained, which is useful for incremental learning and parallel computation in processing massive spectral data. In conclusion, the results of this work are helpful for galaxy and quasar spectral classification.
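
    The benchmark classifier used here is simple enough to sketch in a few lines: a 1-NN rule assigns the query the label of its closest labelled example, with no training step. The three-bin "spectra" below are toy vectors, not survey data.

```python
# Minimal 1-nearest-neighbour classifier (no training phase required).
def nn_classify(query, labelled):
    """labelled: list of (feature_vector, label); returns nearest label."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labelled, key=lambda item: d2(query, item[0]))[1]

train = [([1.0, 0.2, 0.1], "galaxy"), ([0.1, 0.3, 1.0], "quasar")]
label = nn_classify([0.9, 0.25, 0.2], train)
```

Because there is no model to fit, new labelled spectra can be appended to `train` at any time, which is the incremental-learning property the abstract highlights.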

  11. Study on Interference Suppression Algorithms for Electronic Noses: A Review

    PubMed Central

    Liang, Zhifang; Zhang, Ci; Sun, Hao; Liu, Tao

    2018-01-01

    Electronic noses (e-noses) are composed of an appropriate pattern recognition system and a gas sensor array with a certain degree of specificity and broad-spectrum characteristics. The gas sensors have the shortcoming of being highly sensitive to interferences, which affects the detection of target gases: when interferences are present, the performance of the e-nose deteriorates. It is therefore urgent to study interference suppression techniques for e-noses. This paper summarizes the sources of interference and reviews the advances made in recent years in interference suppression for e-noses. According to the factors that cause it, interference can be classified into two types: interference caused by changes in operating conditions and interference caused by hardware failures. The existing suppression methods are summarized and analyzed from these two aspects. Since the interferences affecting e-noses are uncertain and unstable, some nonlinear methods are found to be effective for interference suppression, such as methods based on transfer learning, adaptive methods, etc. PMID:29649152

  12. Classification of Partial Discharge Signals by Combining Adaptive Local Iterative Filtering and Entropy Features

    PubMed Central

    Morison, Gordon; Boreham, Philip

    2018-01-01

    Electromagnetic Interference (EMI) is a technique for capturing Partial Discharge (PD) signals in High-Voltage (HV) power plant apparatus. EMI signals can be non-stationary, which makes their analysis difficult, particularly for pattern recognition applications. This paper elaborates upon a previously developed software condition-monitoring model for improved classification of EMI events based on time-frequency signal decomposition and entropy features. The idea of the proposed method is to map multiple discharge-source signals captured by EMI and labelled by experts, including PD, from the time domain to a feature space, which aids the interpretation of subsequent fault information. Here, instead of using only one permutation entropy measure, a more robust measure, called Dispersion Entropy (DE), is added to the feature vector. Multi-Class Support Vector Machine (MCSVM) methods are utilized for classification of the different discharge sources. Results show an improved classification accuracy compared to previously proposed methods, supporting the development of an expert-knowledge-based intelligent system. Since the method is demonstrated to be successful with real field data, it offers the prospect of real-world application for EMI condition monitoring. PMID:29385030
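
    As a concrete example of the entropy features in the vector, the following computes normalized permutation entropy, the measure to which dispersion entropy was added (the dispersion entropy formula itself is not reproduced here). The embedding order, delay, and test signals are illustrative choices.

```python
# Normalized permutation entropy of a 1-D signal: count ordinal patterns of
# length `order`, then take the Shannon entropy of their distribution.
import math
from collections import Counter

def permutation_entropy(signal, order=3, delay=1):
    patterns = Counter()
    n = len(signal) - (order - 1) * delay
    for i in range(n):
        window = signal[i:i + order * delay:delay]
        # Ordinal pattern: indices of the window sorted by value.
        patterns[tuple(sorted(range(order), key=window.__getitem__))] += 1
    probs = [c / n for c in patterns.values()]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(math.factorial(order))   # normalized to [0, 1]

monotone = list(range(100))                      # fully predictable signal
noisy = [(i * 7919) % 101 for i in range(100)]   # pseudo-random signal
h_flat = permutation_entropy(monotone)
h_rand = permutation_entropy(noisy)
```

A predictable signal yields entropy 0, while an irregular one yields a high value, which is what makes such measures useful discriminative features.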

  13. Computer-aided diagnosis system: a Bayesian hybrid classification method.

    PubMed

    Calle-Alonso, F; Pérez, C J; Arias-Nicolás, J P; Martín, J

    2013-10-01

    A novel method to classify multi-class biomedical objects is presented. The method is based on a hybrid approach which combines pairwise comparison, Bayesian regression and the k-nearest neighbor technique. It can be applied in a fully automatic way or in a relevance feedback framework. In the latter case, the information obtained from both an expert and the automatic classification is iteratively used to improve the results until a certain accuracy level is achieved; the learning process is then finished and new classifications can be performed automatically. The method has been applied in two biomedical contexts by following the same cross-validation schemes as in the original studies. The first refers to cancer diagnosis, leading to an accuracy of 77.35% versus the 66.37% originally obtained. The second considers the diagnosis of pathologies of the vertebral column, where the original method achieves accuracies ranging from 76.5% to 96.7%, and from 82.3% to 97.1%, in two different cross-validation schemes. Even with no supervision, the proposed method reaches 96.71% and 97.32% in these two cases; using a supervised framework, the achieved accuracy is 97.74%. Furthermore, all abnormal cases were correctly classified. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  14. Real-time Java simulations of multiple interference dielectric filters

    NASA Astrophysics Data System (ADS)

    Kireev, Alexandre N.; Martin, Olivier J. F.

    2008-12-01

    An interactive Java applet for real-time simulation and visualization of the transmittance properties of multiple interference dielectric filters is presented. The most commonly used interference filters as well as state-of-the-art ones are embedded in this platform-independent applet, which can serve research and education purposes. The Transmittance applet can be freely downloaded from the site http://cpc.cs.qub.ac.uk.

    Program summary:
    Program title: Transmittance
    Catalogue identifier: AEBQ_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBQ_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 5778
    No. of bytes in distributed program, including test data, etc.: 90 474
    Distribution format: tar.gz
    Programming language: Java
    Computer: Developed on PC-Pentium platform
    Operating system: Any Java-enabled OS. Applet was tested on Windows ME, XP, Sun Solaris, Mac OS
    RAM: Variable
    Classification: 18
    Nature of problem: Sophisticated wavelength-selective multiple interference filters can include tens or even hundreds of dielectric layers. The spectral response of such a stack is not obvious. On the other hand, there is a strong demand from application designers and students to get a quick insight into the properties of a given filter.
    Solution method: A Java applet was developed for the computation and the visualization of the transmittance of multilayer interference filters. It is simple to use and the embedded filter library can serve educational purposes. Also, its ability to handle complex structures will be appreciated as a useful research and development tool.
    Running time: Real-time simulations
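
    The standard way to compute what such an applet displays is the transfer-matrix method: multiply the characteristic matrices of the layers and read the transmittance off the result. The sketch below (in Python rather than the applet's Java) uses a textbook quarter-wave antireflection coating at normal incidence, which is an illustrative case, not one of the applet's built-in filters.

```python
# Normal-incidence transmittance of a dielectric stack via transfer matrices.
import cmath, math

def transmittance(layers, n_in=1.0, n_sub=1.5, wavelength=550.0):
    """layers: list of (refractive_index, thickness_nm), in propagation order."""
    m11, m12, m21, m22 = 1.0, 0.0, 0.0, 1.0    # start from the identity matrix
    for n, d in layers:
        delta = 2 * math.pi * n * d / wavelength    # phase thickness
        c, s = cmath.cos(delta), cmath.sin(delta)
        a11, a12, a21, a22 = c, 1j * s / n, 1j * n * s, c
        m11, m12, m21, m22 = (m11 * a11 + m12 * a21, m11 * a12 + m12 * a22,
                              m21 * a11 + m22 * a21, m21 * a12 + m22 * a22)
    t = 2 * n_in / (n_in * m11 + n_in * m12 * n_sub + m21 + m22 * n_sub)
    return (n_sub / n_in) * abs(t) ** 2

bare = transmittance([])                              # uncoated glass: T = 0.96
n_ar = math.sqrt(1.5)
coated = transmittance([(n_ar, 550.0 / (4 * n_ar))])  # quarter-wave AR layer
```

At the design wavelength the quarter-wave layer of index sqrt(n_sub) brings the transmittance to 1; stacking many high/low-index pairs the same way produces the mirror and filter responses the applet plots.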

  15. Machine learning methods can replace 3D profile method in classification of amyloidogenic hexapeptides.

    PubMed

    Stanislawski, Jerzy; Kotulska, Malgorzata; Unold, Olgierd

    2013-01-17

    Amyloids are proteins capable of forming fibrils. Many of them underlie serious diseases, such as Alzheimer's disease, and the number of amyloid-associated diseases is constantly increasing. Recent studies indicate that amyloidogenic properties can be associated with short segments of amino acids, which transform the structure when exposed. A few hundred such peptides have been found experimentally, but experimental testing of all possible amino acid combinations is currently not feasible. Instead, they can be predicted by computational methods. The 3D profile method, a physicochemistry-based approach, has generated the largest such dataset, ZipperDB; however, it is computationally very demanding. Here, we show that dataset generation can be accelerated. Two methods to increase the classification efficiency of amyloidogenic candidates are presented and tested: simplified 3D profile generation and machine learning methods. We generated a new dataset of hexapeptides using a more economical 3D profile algorithm, which showed very good classification overlap with ZipperDB (93.5%). The new part of our dataset contains 1779 segments, with 204 classified as amyloidogenic. The dataset of 6-residue sequences with their binary classification, based on the energy of the segment, was used to train machine learning methods, with a separate set of sequences from ZipperDB serving as a test set. The most effective methods were the Alternating Decision Tree and the Multilayer Perceptron: both obtained an area under the ROC curve of 0.96, accuracy of 91%, a true positive rate of ca. 78%, and a true negative rate of 95%. A few other machine learning methods also achieved good performance. The computational time was reduced from 18-20 CPU-hours (full 3D profile) to 0.5 CPU-hours (simplified 3D profile) to seconds (machine learning). We showed that the simplified profile generation method does not introduce an error with regard to the original method, while increasing the computational efficiency. Our new dataset

  16. Site Classification using Multichannel Channel Analysis of Surface Wave (MASW) method on Soft and Hard Ground

    NASA Astrophysics Data System (ADS)

    Ashraf, M. A. M.; Kumar, N. S.; Yusoh, R.; Hazreek, Z. A. M.; Aziman, M.

    2018-04-01

    Site classification utilizing the average shear wave velocity up to 30 meters depth (Vs(30)) is a typical parameter. Numerous geophysical methods have been proposed for estimating shear wave velocity using an assortment of testing configurations, processing methods, and inversion algorithms. The Multichannel Analysis of Surface Waves (MASW) method is practiced by numerous specialists and professionals in geotechnical engineering for local site characterization and classification. This study aims to determine the site classification on soft and hard ground using the MASW method. The subsurface classification was made utilizing the National Earthquake Hazards Reduction Program (NEHRP) and International Building Code (IBC) classifications. Two sites were chosen for acquiring the shear wave velocity: one in the state of Pulau Pinang for soft soil and one in Perlis for hard rock. Results suggest that the MASW technique can be used to map the spatial distribution of shear wave velocity (Vs(30)) in soil and rock in order to characterize sites.
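
    The Vs(30) parameter behind the classification is the time-averaged shear-wave velocity over the top 30 m: 30 m divided by the vertical travel time through the layers. The layer models below are illustrative; the NEHRP class boundaries (180, 360, 760, 1500 m/s) are the standard ones.

```python
# Vs30 from a layered velocity model, then a NEHRP site-class lookup.
def vs30(layers):
    """layers: list of (thickness_m, vs_m_per_s), top-down, covering >= 30 m."""
    depth, travel_time = 0.0, 0.0
    for h, vs in layers:
        use = min(h, 30.0 - depth)       # only the part within the top 30 m counts
        travel_time += use / vs
        depth += use
        if depth >= 30.0:
            break
    return 30.0 / travel_time

def nehrp_class(v):
    # NEHRP boundaries: E < 180 <= D < 360 <= C < 760 <= B < 1500 <= A (m/s)
    for label, lo in (("A", 1500.0), ("B", 760.0), ("C", 360.0), ("D", 180.0)):
        if v >= lo:
            return label
    return "E"

soft = vs30([(10.0, 150.0), (25.0, 300.0)])   # illustrative soft-soil profile
hard = vs30([(40.0, 1800.0)])                 # illustrative hard-rock profile
```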

  17. Urban Image Classification: Per-Pixel Classifiers, Sub-Pixel Analysis, Object-Based Image Analysis, and Geospatial Methods. 10; Chapter

    NASA Technical Reports Server (NTRS)

    Myint, Soe W.; Mesev, Victor; Quattrochi, Dale; Wentz, Elizabeth A.

    2013-01-01

    Remote sensing methods used to generate base maps to analyze the urban environment rely predominantly on digital sensor data from space-borne platforms. This is due in part to new sources of high spatial resolution data covering the globe, a variety of multispectral and multitemporal sources, sophisticated statistical and geospatial methods, and compatibility with GIS data sources and methods. The goal of this chapter is to review the four groups of classification methods for digital sensor data from space-borne platforms: per-pixel, sub-pixel, object-based (spatial-based), and geospatial methods. Per-pixel methods are widely used methods that classify pixels into distinct categories based solely on the spectral and ancillary information within that pixel. They are used for everything from simple calculations of environmental indices (e.g., NDVI) to sophisticated expert systems that assign urban land covers. Researchers recognize, however, that even with the smallest pixel size the spectral information within a pixel is really a combination of multiple urban surfaces. Sub-pixel classification methods therefore aim to statistically quantify the mixture of surfaces to improve overall classification accuracy. While within-pixel variations exist, there is also significant evidence that groups of nearby pixels have similar spectral information and therefore belong to the same classification category. Object-oriented methods have emerged that group pixels prior to classification based on spectral similarity and spatial proximity. Classification accuracy using object-based methods shows significant success and promise for numerous urban applications. Like the object-oriented methods that recognize the importance of spatial proximity, geospatial methods for urban mapping also utilize neighboring pixels in the classification process. The primary difference, though, is that geostatistical methods (e.g., spatial autocorrelation methods) are utilized during both the pre- and post-classification
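
    Per-pixel classification in its simplest form is exactly the NDVI example the chapter mentions: compute the index from the red and near-infrared bands of one pixel and threshold it. The reflectance values and the 0.3 vegetation threshold are illustrative choices.

```python
# NDVI-based per-pixel labelling: each pixel is classified from its own bands.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

def label_pixel(nir, red, threshold=0.3):
    return "vegetation" if ndvi(nir, red) > threshold else "built-up/bare"

park = label_pixel(nir=0.50, red=0.08)   # healthy vegetation reflects NIR strongly
roof = label_pixel(nir=0.30, red=0.25)   # built surfaces have near-zero NDVI
```

The limitation the chapter goes on to describe is visible here: a pixel mixing roof and tree canopy gets a single intermediate NDVI, which is what sub-pixel methods try to unmix.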

  18. Flexible and inflexible task sets: asymmetric interference when switching between emotional expression, sex, and age classification of perceived faces.

    PubMed

    Schuch, Stefanie; Werheid, Katja; Koch, Iring

    2012-01-01

    The present study investigated whether the processing characteristics of categorizing emotional facial expressions are different from those of categorizing facial age and sex information. Given that emotions change rapidly, it was hypothesized that processing facial expressions involves a more flexible task set that causes less between-task interference than the task sets involved in processing age or sex of a face. Participants switched between three tasks: categorizing a face as looking happy or angry (emotion task), young or old (age task), and male or female (sex task). Interference between tasks was measured by global interference and response interference. Both measures revealed patterns of asymmetric interference. Global between-task interference was reduced when a task was mixed with the emotion task. Response interference, as measured by congruency effects, was larger for the emotion task than for the nonemotional tasks. The results support the idea that processing emotional facial expression constitutes a more flexible task set that causes less interference (i.e., task-set "inertia") than processing the age or sex of a face.

  19. Discriminability effect on Garner interference: evidence from recognition of facial identity and expression

    PubMed Central

    Wang, Yamin; Fu, Xiaolan; Johnston, Robert A.; Yan, Zheng

    2013-01-01

    Using Garner's speeded classification task, existing studies have demonstrated an asymmetric interference in the recognition of facial identity and facial expression: it seems that expression is hard to interfere with identity recognition. However, the discriminability of identity and expression, a potential confounding variable, had not been carefully examined in existing studies. In the current work, we manipulated the discriminability of identity and expression by matching facial shape (long or round) for identity and matching the mouth (opened or closed) for facial expression. Garner interference was found either from identity to expression (Experiment 1) or from expression to identity (Experiment 2). Interference was also found in both directions (Experiment 3) or in neither direction (Experiment 4). The results support the view that Garner interference tends to occur under conditions of low discriminability of the relevant dimension, regardless of facial property. Our findings indicate that Garner interference is not necessarily related to interdependent processing in the recognition of facial identity and expression, and suggest that discriminability, as a mediating factor, should be carefully controlled in future research. PMID:24391609

  20. Application of texture analysis method for mammogram density classification

    NASA Astrophysics Data System (ADS)

    Nithya, R.; Santhi, B.

    2017-07-01

    Mammographic density is considered a major risk factor for developing breast cancer. This paper proposes an automated approach to classify breast tissue types in digital mammogram. The main objective of the proposed Computer-Aided Diagnosis (CAD) system is to investigate various feature extraction methods and classifiers to improve the diagnostic accuracy in mammogram density classification. Texture analysis methods are used to extract the features from the mammogram. Texture features are extracted by using histogram, Gray Level Co-Occurrence Matrix (GLCM), Gray Level Run Length Matrix (GLRLM), Gray Level Difference Matrix (GLDM), Local Binary Pattern (LBP), Entropy, Discrete Wavelet Transform (DWT), Wavelet Packet Transform (WPT), Gabor transform and trace transform. These extracted features are selected using Analysis of Variance (ANOVA). The features selected by ANOVA are fed into the classifiers to characterize the mammogram into two-class (fatty/dense) and three-class (fatty/glandular/dense) breast density classification. This work has been carried out by using the mini-Mammographic Image Analysis Society (MIAS) database. Five classifiers are employed namely, Artificial Neural Network (ANN), Linear Discriminant Analysis (LDA), Naive Bayes (NB), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM). Experimental results show that ANN provides better performance than LDA, NB, KNN and SVM classifiers. The proposed methodology has achieved 97.5% accuracy for three-class and 99.37% for two-class density classification.
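
    Of the texture extractors listed, the grey-level co-occurrence matrix (GLCM) is the most classical; as an illustrative sketch, the following builds a GLCM for the horizontal-neighbour offset and computes two standard Haralick features. The two-level 4x4 patches are toy data, not MIAS images.

```python
# GLCM for offset (0, 1) plus two classic texture features.
def glcm(image, levels):
    m = [[0] * levels for _ in range(levels)]
    total = 0
    for row in image:
        for a, b in zip(row, row[1:]):   # each pixel paired with its right neighbour
            m[a][b] += 1
            total += 1
    return [[c / total for c in row] for row in m]   # normalized co-occurrences

def contrast(p):
    n = len(p)
    return sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))

def energy(p):
    return sum(c * c for row in p for c in row)

flat = [[1, 1, 1, 1]] * 4       # homogeneous patch: zero contrast, energy 1
stripes = [[0, 1, 0, 1]] * 4    # alternating patch: high contrast
```

Feature vectors of such statistics, computed over several offsets, are what the ANOVA step then filters before classification.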

  1. An Evaluation of Feature Learning Methods for High Resolution Image Classification

    NASA Astrophysics Data System (ADS)

    Tokarczyk, P.; Montoya, J.; Schindler, K.

    2012-07-01

    Automatic image classification is one of the fundamental problems of remote sensing research. The classification problem is even more challenging in high-resolution images of urban areas, where the objects are small and heterogeneous. Two questions arise, namely which features to extract from the raw sensor data to capture the local radiometry and image structure at each pixel or segment, and which classification method to apply to the feature vectors. While classifiers are nowadays well understood, selecting the right features remains a largely empirical process. Here we concentrate on the features. Several methods are evaluated which allow one to learn suitable features from unlabelled image data by analysing the image statistics. In a comparative study, we evaluate unsupervised feature learning with different linear and non-linear learning methods, including principal component analysis (PCA) and deep belief networks (DBN). We also compare these automatically learned features with popular choices of ad-hoc features including raw intensity values, standard combinations like the NDVI, a few PCA channels, and texture filters. The comparison is done in a unified framework using the same images, the target classes, reference data and a Random Forest classifier.
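
    The simplest of the feature-learning baselines compared, PCA, can be sketched without any library: centre the pixel-neighbourhood vectors and extract the dominant eigenvector of their covariance by power iteration. The 2-D toy data and iteration count are illustrative assumptions.

```python
# First principal component of centred data via power iteration.
def first_pc(rows, iters=200):
    dim = len(rows[0])
    means = [sum(r[j] for r in rows) / len(rows) for j in range(dim)]
    x = [[r[j] - means[j] for j in range(dim)] for r in rows]
    v = [1.0] * dim
    for _ in range(iters):
        # One power-iteration step: w = (X^T X) v, then renormalize.
        proj = [sum(r[j] * v[j] for j in range(dim)) for r in x]
        w = [sum(p * r[j] for p, r in zip(proj, x)) for j in range(dim)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return v

# Strongly correlated 2-D data: the first PC should lie near the diagonal.
rows = [[1.0, 1.1], [2.0, 1.9], [3.0, 3.2], [4.0, 3.8]]
pc = first_pc(rows)
```

Projecting each neighbourhood vector onto the leading components yields the "few PCA channels" used as learned features in the comparison.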

  2. Interference method for obtaining the potential flow past an arbitrary cascade of airfoils

    NASA Technical Reports Server (NTRS)

    Katzoff, S; Finn, Robert S; Laurence, James C

    1947-01-01

    A procedure is presented for obtaining the pressure distribution on an arbitrary airfoil section in cascade in a two-dimensional, incompressible, and nonviscous flow. The method considers directly the influence on a given airfoil of the rest of the cascade and evaluates this interference by an iterative process, which appeared to converge rapidly in the cases tried (about unit solidity, stagger angles of 0 degree and 45 degrees). Two variations of the basic interference calculations are described. One, which is accurate enough for most purposes, involves the substitution of sources, sinks, and vortices for the interfering airfoils; the other, which may be desirable for the final approximation, involves a contour integration. The computations are simplified by the use of a chart presented by Betz in a related paper. Illustrated examples are included.

  3. Data Processing And Machine Learning Methods For Multi-Modal Operator State Classification Systems

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan A.

    2015-01-01

    This document is intended as an introduction to a set of common signal processing learning methods that may be used in the software portion of a functional crew state monitoring system. This includes overviews of both the theory of the methods involved, as well as examples of implementation. Practical considerations are discussed for implementing modular, flexible, and scalable processing and classification software for a multi-modal, multi-channel monitoring system. Example source code is also given for all of the discussed processing and classification methods.

  4. Evaluation of different classification methods for the diagnosis of schizophrenia based on functional near-infrared spectroscopy.

    PubMed

    Li, Zhaohua; Wang, Yuduo; Quan, Wenxiang; Wu, Tongning; Lv, Bin

    2015-02-15

    Based on near-infrared spectroscopy (NIRS), converging evidence has shown that patients with schizophrenia exhibit abnormal functional activity in the prefrontal cortex during a verbal fluency task (VFT). Some studies have therefore attempted to employ NIRS measurements to differentiate schizophrenia patients from healthy controls using different classification methods. However, no systematic evaluation has compared their respective classification performances on the same study population. In this study, we evaluated the classification performance of four methods (linear discriminant analysis, k-nearest neighbors, Gaussian process classifier, and support vector machines) for NIRS-aided schizophrenia diagnosis. We recruited a large sample of 120 schizophrenia patients and 120 healthy controls and measured the hemoglobin response in the prefrontal cortex during the VFT using a multichannel NIRS system. Features for classification were extracted from three types of NIRS data in each channel. We then performed principal component analysis (PCA) for feature selection prior to comparing the different classification methods. We achieved a maximum accuracy of 85.83% and an overall mean accuracy of 83.37% using PCA-based feature selection on oxygenated hemoglobin signals and a support vector machine classifier. This is the first comprehensive evaluation of different classification methods for the diagnosis of schizophrenia based on different types of NIRS signals. Our results suggest that, with the appropriate classification method, NIRS has the potential to be an effective objective biomarker for the diagnosis of schizophrenia.
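A minimal sketch of one of the four compared methods, k-nearest neighbors, evaluated with leave-one-out accuracy on invented two-feature data (the study itself used PCA-selected features from 240 participants):

```python
def knn_predict(train_X, train_y, x, k=3):
    """Majority vote among the k nearest training samples."""
    dists = sorted((sum((a - b) ** 2 for a, b in zip(row, x)), y)
                   for row, y in zip(train_X, train_y))
    votes = [y for _, y in dists[:k]]
    return max(set(votes), key=votes.count)

def loo_accuracy(X, y, k=3):
    """Leave-one-out cross-validated accuracy."""
    correct = 0
    for i in range(len(X)):
        pred = knn_predict(X[:i] + X[i + 1:], y[:i] + y[i + 1:], X[i], k)
        correct += pred == y[i]
    return correct / len(X)

# Toy two-class data: 0 = healthy control, 1 = patient.
X = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.25], [0.9, 1.0], [1.1, 0.9], [1.0, 1.1]]
y = [0, 0, 0, 1, 1, 1]
acc = loo_accuracy(X, y, k=3)
```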

  5. Implicit and Explicit Number-Space Associations Differentially Relate to Interference Control in Young Adults With ADHD

    PubMed Central

    Georges, Carrie; Hoffmann, Danielle; Schiltz, Christine

    2018-01-01

    Behavioral evidence for the link between numerical and spatial representations comes from the spatial-numerical association of response codes (SNARC) effect, consisting of faster reaction times to small/large numbers with the left/right hand, respectively. The SNARC effect is, however, characterized by considerable intra- and inter-individual variability. It depends not only on the explicit or implicit nature of the numerical task, but also relates to interference control. To determine whether the prevalence of the latter relation in the elderly could be ascribed to younger individuals' ceiling performances on executive control tasks, we determined whether the SNARC effect related to Stroop and/or Flanker effects in 26 young adults with ADHD. We observed a divergent pattern of correlation depending on the type of numerical task used to assess the SNARC effect and the type of interference control measure involved in number-space associations. Namely, stronger number-space associations during parity judgments involving implicit magnitude processing related to weaker interference control in the Stroop but not the Flanker task. Conversely, stronger number-space associations during explicit magnitude classifications tended to be associated with better interference control in the Flanker but not the Stroop paradigm. The association of stronger parity and magnitude SNARC effects with weaker and better interference control, respectively, indicates that different mechanisms underlie these relations. Activation of the magnitude-associated spatial code is irrelevant to, and potentially interferes with, parity judgments, but in contrast assists explicit magnitude classifications. Altogether, the present study confirms the contribution of interference control to number-space associations in young adults. It suggests that magnitude-associated spatial codes in implicit and explicit tasks are monitored by different interference control mechanisms, thereby explaining task-related intra

  6. Railroad Classification Yard Technology Manual. Volume I : Yard Design Methods

    DOT National Transportation Integrated Search

    1981-02-01

    This volume documents the procedures and methods associated with the design of railroad classification yards. Subjects include: site location, economic analysis, yard capacity analysis, design of flat yards, overall configuration of hump yards, hump ...

  7. Teaching/Learning Methods and Students' Classification of Food Items

    ERIC Educational Resources Information Center

    Hamilton-Ekeke, Joy-Telu; Thomas, Malcolm

    2011-01-01

    Purpose: This study aims to investigate the effectiveness of a teaching method (TLS (Teaching/Learning Sequence)) based on a social constructivist paradigm on students' conceptualisation of classification of food. Design/methodology/approach: The study compared the TLS model developed by the researcher based on the social constructivist paradigm…

  8. A FBG pulse wave demodulation method based on PCF modal interference filter

    NASA Astrophysics Data System (ADS)

    Zhang, Cheng; Xu, Shan; Shen, Ziqi; Zhao, Junfa; Miao, Changyun; Bai, Hua

    2016-10-01

    Fiber optic sensors embedded in textiles are a new direction in smart wearable technology research. The pulse signal generated by the heartbeat contains vast amounts of physio-pathological information about the cardiovascular system. Therefore, research on textile-based fiber optic sensors that can detect the pulse wave has far-reaching implications for the early discovery and timely treatment of cardiovascular diseases. A novel wavelength demodulation method based on a photonic crystal fiber (PCF) modal interference filter is proposed for developing an FBG pulse wave sensing system embedded in smart clothing. The mechanism of PCF modal interference and the principle of wavelength demodulation based on an in-line Mach-Zehnder interferometer (in-line MZI) are analyzed theoretically. The fabricated PCF modal interferometer has the advantages of good repeatability and a low temperature sensitivity of 3.5 pm/°C from 25°C to 60°C. The designed demodulation system achieves linear demodulation over a range of 2 nm, with a wavelength resolution of 2.2 pm and a wavelength sensitivity of 0.055 nm⁻¹. Experimental results indicate that the pulse wave is well detected by this demodulation method, in accordance with a commercial demodulation instrument (SM130), and more sensitively than with a traditional piezoelectric pulse sensor. This demodulation method provides an important reference for research on smart clothing based on fiber grating sensors embedded in textiles and accelerates the development of wearable fiber optic sensor technology.

  9. Screening tests for hazard classification of complex waste materials - Selection of methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weltens, R., E-mail: reinhilde.weltens@vito.be; Vanermen, G.; Tirez, K.

    In this study we describe the development of an alternative methodology for hazard characterization of waste materials. Such an alternative methodology for hazard assessment of complex waste materials is urgently needed, because the lack of a validated instrument leads to arbitrary hazard classification of such complex waste materials. False classification can lead to human and environmental health risks and also has important financial consequences for the waste owner. The Hazardous Waste Directive (HWD) describes the methodology for hazard classification of waste materials. For mirror entries, the HWD classification is based upon the hazardous properties (H1-15) of the waste, which can be assessed from the hazardous properties of individual identified waste compounds or - if not all compounds are identified - from the results of hazard assessment tests performed on the waste material itself. For the latter, the HWD recommends toxicity tests that were initially designed for risk assessment of chemicals in consumer products (pharmaceuticals, cosmetics, biocides, food, etc.). These tests (often using mammals) are neither designed for nor suitable for the hazard characterization of waste materials. With the present study we want to contribute to the development of an alternative and transparent test strategy for hazard assessment of complex wastes that is in line with the HWD principles for waste classification. It is necessary to address this important shortcoming in hazardous waste classification and to demonstrate that alternative methods are available that can be used for hazard assessment of waste materials. Next, by describing the pros and cons of the available methods, and by identifying the needs for additional or further development of test methods, we hope to stimulate research efforts and development in this direction. In this paper we describe promising techniques and argue for the test selection for the pilot study that we have performed on different

  10. Controlling a human-computer interface system with a novel classification method that uses electrooculography signals.

    PubMed

    Wu, Shang-Lin; Liao, Lun-De; Lu, Shao-Wei; Jiang, Wei-Ling; Chen, Shi-An; Lin, Chin-Teng

    2013-08-01

    Electrooculography (EOG) signals can be used to control human-computer interface (HCI) systems, if properly classified. The ability to measure and process these signals may help HCI users to overcome many of the physical limitations and inconveniences in daily life. However, there are currently no effective multidirectional classification methods for monitoring eye movements. Here, we describe a classification method used in a wireless EOG-based HCI device for detecting eye movements in eight directions. This device includes wireless EOG signal acquisition components, wet electrodes and an EOG signal classification algorithm. The EOG classification algorithm is based on extracting features from the electrical signals corresponding to eight directions of eye movement (up, down, left, right, up-left, down-left, up-right, and down-right) and blinking. The recognition and processing of these eight different features were achieved in real-life conditions, demonstrating that this device can reliably measure the features of EOG signals. This system and its classification procedure provide an effective method for identifying eye movements. Additionally, it may be applied to study eye functions in real-life conditions in the near future.
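As a hedged sketch (not the authors' algorithm), eight-direction eye-movement classification can be illustrated by thresholding the horizontal and vertical EOG channel deflections; the amplitudes and threshold below are invented:

```python
def classify_direction(h, v, thresh=50.0):
    """h, v: peak horizontal/vertical EOG amplitudes (microvolts)."""
    horiz = "right" if h > thresh else "left" if h < -thresh else ""
    vert = "up" if v > thresh else "down" if v < -thresh else ""
    if horiz and vert:
        return vert + "-" + horiz      # e.g. "up-left", "down-right"
    return horiz or vert or "blink/none"
```

A real system would first extract robust amplitude features from the filtered signals and treat blinks as a separate signature rather than the absence of deflection.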

  11. Image Classification Workflow Using Machine Learning Methods

    NASA Astrophysics Data System (ADS)

    Christoffersen, M. S.; Roser, M.; Valadez-Vergara, R.; Fernández-Vega, J. A.; Pierce, S. A.; Arora, R.

    2016-12-01

    Recent increases in the availability and quality of remote sensing datasets have fueled an increasing number of scientifically significant discoveries based on land use classification and land use change analysis. However, much of the software made to work with remote sensing data products, specifically multispectral images, is commercial and often prohibitively expensive. The free-to-use solutions that are currently available come bundled as small parts of much larger programs that are very susceptible to bugs and difficult to install and configure. What is needed is a compact, easy-to-use set of tools to perform land use analysis on multispectral images. To address this need, we have developed software using the Python programming language with the sole function of land use classification and land use change analysis. We chose Python to develop our software because it is relatively readable, has a large body of relevant third-party libraries such as GDAL and Spectral Python, and is free to install and use on Windows, Linux, and Macintosh operating systems. To test our classification software, we performed a K-means unsupervised classification, a Gaussian Maximum Likelihood supervised classification, and a Mahalanobis distance based supervised classification. The images used for testing were three Landsat rasters of Austin, Texas with a spatial resolution of 60 meters for the years 1984 and 1999, and 30 meters for the year 2015. The testing dataset was easily downloaded using the Earth Explorer application produced by the USGS. The software should be able to perform classification on any set of multispectral rasters with little to no modification. Our software offers the ease of land use classification provided by commercial packages without an expensive license.
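A simplified sketch of the Mahalanobis distance based supervised classifier the workflow tests, for two-band pixels. The class samples are invented; a real run would train on Landsat polygons with full per-class covariance matrices:

```python
def mean_cov(samples):
    """Per-class mean and 2x2 sample covariance."""
    n = len(samples)
    m = [sum(s[i] for s in samples) / n for i in (0, 1)]
    c = [[sum((s[i] - m[i]) * (s[j] - m[j]) for s in samples) / (n - 1)
          for j in (0, 1)] for i in (0, 1)]
    return m, c

def inv2(c):
    """Inverse of a 2x2 matrix."""
    det = c[0][0] * c[1][1] - c[0][1] * c[1][0]
    return [[c[1][1] / det, -c[0][1] / det],
            [-c[1][0] / det, c[0][0] / det]]

def mahalanobis_sq(x, m, cinv):
    """Squared Mahalanobis distance from pixel x to class mean m."""
    d = [x[0] - m[0], x[1] - m[1]]
    return (d[0] * (cinv[0][0] * d[0] + cinv[0][1] * d[1])
            + d[1] * (cinv[1][0] * d[0] + cinv[1][1] * d[1]))

# Invented training pixels (two spectral bands) for two land-use classes.
training = {
    "water": [[10, 40], [12, 42], [11, 39], [13, 41]],
    "urban": [[80, 90], [82, 88], [79, 92], [81, 91]],
}
models = {}
for name, samples in training.items():
    m, c = mean_cov(samples)
    models[name] = (m, inv2(c))

def classify_pixel(pixel):
    """Assign the class whose mean is nearest in Mahalanobis distance."""
    return min(models, key=lambda k: mahalanobis_sq(pixel, *models[k]))
```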

  12. Parallel interference cancellation for CDMA applications

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Simon, Marvin K. (Inventor); Raphaeli, Dan (Inventor)

    1997-01-01

    The present invention provides a method of decoding a spread spectrum composite signal comprising plural user signals that have been spread with plural respective codes. Each coded signal is despread, averaged to produce a signal value, analyzed to produce a tentative decision, respread, and summed with the other respread signals to produce combined interference signals. The method comprises scaling the combined interference signals with a weighting factor to produce a scaled combined interference signal, scaling the composite signal with the weighting factor to produce a scaled composite signal, scaling the signal value by the complement of the weighting factor to produce a leakage signal, and combining the scaled composite signal, the scaled combined interference signal, and the leakage signal to produce an estimate of the respective user signal.
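A toy numeric sketch of weighted parallel interference cancellation for two synchronous BPSK users. The codes, bits, and weighting factor are invented, and the patent's leakage term (scaled by the weighting factor's complement) is omitted for brevity; this only illustrates the despread/respread/subtract mechanics:

```python
def despread(signal, code):
    """Correlate the chip sequence against one user's spreading code."""
    return sum(s * c for s, c in zip(signal, code)) / len(code)

def pic_stage(composite, codes, prev_bits, w):
    """One PIC stage: subtract weighted estimated interference per user."""
    refined = []
    for k, code in enumerate(codes):
        interference = [sum(prev_bits[j] * codes[j][i]
                            for j in range(len(codes)) if j != k)
                        for i in range(len(code))]
        cleaned = [s - w * iv for s, iv in zip(composite, interference)]
        refined.append(1 if despread(cleaned, code) >= 0 else -1)
    return refined

codes = [[1, 1, 1, 1], [1, 1, -1, 1]]   # deliberately non-orthogonal
bits = [1, -1]                          # transmitted BPSK symbols
composite = [sum(b * c[i] for b, c in zip(bits, codes)) for i in range(4)]
tentative = [1 if despread(composite, c) >= 0 else -1 for c in codes]
final = pic_stage(composite, codes, tentative, w=0.7)
```

Partial weighting (w < 1) hedges against wrong tentative decisions: only a fraction of the estimated interference is removed at early stages.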

  13. Ethical Perspectives on RNA Interference Therapeutics

    PubMed Central

    Ebbesen, Mette; Jensen, Thomas G.; Andersen, Svend; Pedersen, Finn Skou

    2008-01-01

    RNA interference is a mechanism for controlling normal gene expression which has recently begun to be employed as a potential therapeutic agent for a wide range of disorders, including cancer, infectious diseases and metabolic disorders. Clinical trials with RNA interference have begun. However, challenges such as off-target effects, toxicity and safe delivery methods have to be overcome before RNA interference can be considered as a conventional drug. So, if RNA interference is to be used therapeutically, we should perform a risk-benefit analysis. It is ethically relevant to perform a risk-benefit analysis since ethical obligations about not inflicting harm and promoting good are generally accepted. But the ethical issues in RNA interference therapeutics not only include a risk-benefit analysis, but also considerations about respecting the autonomy of the patient and considerations about justice with regard to the inclusion criteria for participation in clinical trials and health care allocation. RNA interference is considered a new and promising therapeutic approach, but the ethical issues of this method have not been greatly discussed, so this article analyses these issues using the bioethical theory of principles of the American bioethicists, Tom L. Beauchamp and James F. Childress. PMID:18612370

  14. CBM Resources/reserves classification and evaluation based on PRMS rules

    NASA Astrophysics Data System (ADS)

    Fa, Guifang; Yuan, Ruie; Wang, Zuoqian; Lan, Jun; Zhao, Jian; Xia, Mingjun; Cai, Dechao; Yi, Yanjing

    2018-02-01

    This paper introduces a set of definitions and classification requirements for coalbed methane (CBM) resources/reserves, based on the Petroleum Resources Management System (PRMS). The basic CBM classification criteria for 1P, 2P, and 3P reserves and contingent resources are put forward from the following aspects: ownership, project maturity, drilling requirements, testing requirements, economic requirements, infrastructure and market, timing of production and development, and so on. The volumetric method is used to evaluate the OGIP, focusing on the analysis of key parameters and the principles of parameter selection, such as net thickness, ash and water content, coal rank and composition, coal density, cleat volume and saturation, and adsorbed gas content. A dynamic method is used to assess the reserves and recovery efficiency. Since differences in rock and fluid properties, displacement mechanism, completion and operating practices, and wellbore type result in different production curve characteristics, the factors affecting production behavior, the dewatering period, pressure build-up, and interference effects were analyzed. The conclusions and results of this paper can serve as important references for the reasonable assessment of CBM resources/reserves.

  15. Performance Analysis of Classification Methods for Indoor Localization in Vlc Networks

    NASA Astrophysics Data System (ADS)

    Sánchez-Rodríguez, D.; Alonso-González, I.; Sánchez-Medina, J.; Ley-Bosch, C.; Díaz-Vilariño, L.

    2017-09-01

    Indoor localization has gained considerable attention over the past decade because of the emergence of numerous location-aware services. Research works have been proposed to solve this problem using wireless networks. Nevertheless, there is still much room for improvement in the quality of the proposed classification models. In recent years, the emergence of Visible Light Communication (VLC) has brought a brand new approach to high-quality indoor positioning. Among its advantages, this new technology is immune to electromagnetic interference and has a smaller variance of received signal power compared to RF-based technologies. In this paper, a performance analysis of seventeen machine learning classifiers for indoor localization in VLC networks is carried out. The analysis is accomplished in terms of accuracy, average distance error, computational cost, training size, precision, and recall measurements. Results show that most of the classifiers achieve an accuracy above 90%. The best tested classifier yielded a 99.0% accuracy, with an average distance error of 0.3 centimetres.
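A small sketch of the evaluation measurements named above: overall accuracy plus per-class precision and recall computed from predicted versus true location labels (the toy labels are invented):

```python
def metrics(y_true, y_pred, positive):
    """Accuracy, and precision/recall treating `positive` as the target class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return acc, precision, recall

# Toy labels for two indoor zones.
acc, precision, recall = metrics(["zone1", "zone1", "zone2", "zone2"],
                                 ["zone1", "zone2", "zone2", "zone2"],
                                 positive="zone2")
```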

  16. Multi-label spacecraft electrical signal classification method based on DBN and random forest

    PubMed Central

    Li, Ke; Yu, Nan; Li, Pengfei; Song, Shimin; Wu, Yalei; Li, Yang; Liu, Meng

    2017-01-01

    Spacecraft electrical signal characteristic data contain a large amount of high-dimensional features, have a high degree of computational complexity, and suffer from low identification rates, which causes great difficulty in fault diagnosis of spacecraft electronic load systems. This paper proposes a feature extraction method based on deep belief networks (DBN) and a classification method based on the random forest (RF) algorithm. The proposed approach mainly employs a multi-layer neural network to reduce the dimension of the original data before classification is applied. First, wavelet denoising is used to pre-process the data. Second, the deep belief network is used to reduce the feature dimension and improve the classification rate for the electrical characteristics data. Finally, the random forest algorithm is used to classify the data and compare it with other algorithms. The experimental results show that, compared with other algorithms, the proposed method shows excellent performance in terms of accuracy, computational efficiency, and stability in addressing spacecraft electrical signal data. PMID:28486479

  17. Multi-label spacecraft electrical signal classification method based on DBN and random forest.

    PubMed

    Li, Ke; Yu, Nan; Li, Pengfei; Song, Shimin; Wu, Yalei; Li, Yang; Liu, Meng

    2017-01-01

    Spacecraft electrical signal characteristic data contain a large amount of high-dimensional features, have a high degree of computational complexity, and suffer from low identification rates, which causes great difficulty in fault diagnosis of spacecraft electronic load systems. This paper proposes a feature extraction method based on deep belief networks (DBN) and a classification method based on the random forest (RF) algorithm. The proposed approach mainly employs a multi-layer neural network to reduce the dimension of the original data before classification is applied. First, wavelet denoising is used to pre-process the data. Second, the deep belief network is used to reduce the feature dimension and improve the classification rate for the electrical characteristics data. Finally, the random forest algorithm is used to classify the data and compare it with other algorithms. The experimental results show that, compared with other algorithms, the proposed method shows excellent performance in terms of accuracy, computational efficiency, and stability in addressing spacecraft electrical signal data.

  18. Interference Method for Obtaining the Potential Flow Past an Arbitrary Cascade of Airfoils

    DTIC Science & Technology

    1947-05-01

    NACA Technical Note No. 1252: Interference Method for Obtaining the Potential Flow Past an Arbitrary Cascade of Airfoils. (The remainder of the scanned abstract is not legible.)

  19. A kernel function method for computing steady and oscillatory supersonic aerodynamics with interference.

    NASA Technical Reports Server (NTRS)

    Cunningham, A. M., Jr.

    1973-01-01

    The method presented uses a collocation technique with the nonplanar kernel function to solve supersonic lifting surface problems with and without interference. A set of pressure functions are developed based on conical flow theory solutions which account for discontinuities in the supersonic pressure distributions. These functions permit faster solution convergence than is possible with conventional supersonic pressure functions. An improper integral of a 3/2 power singularity along the Mach hyperbola of the nonplanar supersonic kernel function is described and treated. The method is compared with other theories and experiment for a variety of cases.

  20. An Automatic User-Adapted Physical Activity Classification Method Using Smartphones.

    PubMed

    Li, Pengfei; Wang, Yu; Tian, Yu; Zhou, Tian-Shu; Li, Jing-Song

    2017-03-01

    In recent years, an increasing number of people have become concerned about their health. Most chronic diseases are related to lifestyle, and daily activity records can be used as an important indicator of health. Specifically, using advanced technology to automatically monitor actual activities can effectively prevent and manage chronic diseases. The data used in this paper were obtained from acceleration sensors and gyroscopes integrated in smartphones. We designed an efficient Adaboost-Stump classifier running on a smartphone to classify five common activities: cycling, running, sitting, standing, and walking, and achieved a satisfactory classification accuracy of 98%. We also designed an online learning method in which the classification model is continuously trained with actual data; the model parameters then become increasingly fitted to the specific user, which allows the classification accuracy to reach 95% under different use environments. In addition, this paper utilized the OpenCL framework to parallelize the program, which enhances computing efficiency approximately ninefold.
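A hedged sketch of the Adaboost-Stump idea on a single invented feature (e.g. mean acceleration magnitude), reduced to two classes; the authors' implementation runs on multi-channel accelerometer/gyroscope features and five activities:

```python
import math

def best_stump(X, y, w):
    """Threshold stump (error, t, sign) minimizing weighted error on 1-D data."""
    best = None
    for t in sorted(set(X)):
        for sign in (1, -1):
            preds = [sign if x > t else -sign for x in X]
            err = sum(wi for wi, p, yi in zip(w, preds, y) if p != yi)
            if best is None or err < best[0]:
                best = (err, t, sign)
    return best

def adaboost(X, y, rounds=5):
    n = len(X)
    w = [1.0 / n] * n
    model = []
    for _ in range(rounds):
        err, t, sign = best_stump(X, y, w)
        alpha = 0.5 * math.log((1 - err + 1e-10) / (err + 1e-10))
        model.append((alpha, t, sign))
        # Re-weight: misclassified samples gain weight, correct ones lose it.
        w = [wi * math.exp(-alpha * yi * (sign if x > t else -sign))
             for wi, x, yi in zip(w, X, y)]
        total = sum(w)
        w = [wi / total for wi in w]
    return model

def predict(model, x):
    score = sum(a * (sign if x > t else -sign) for a, t, sign in model)
    return 1 if score >= 0 else -1

# Toy data: low magnitudes = sitting (-1), high = running (+1).
X = [0.1, 0.2, 0.3, 1.1, 1.3, 1.5]
y = [-1, -1, -1, 1, 1, 1]
model = adaboost(X, y)
preds = [predict(model, x) for x in X]
```

Stumps keep per-round cost tiny, which is why the ensemble suits on-device training in the online learning scheme described above.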

  1. A Discriminant Distance Based Composite Vector Selection Method for Odor Classification

    PubMed Central

    Choi, Sang-Il; Jeong, Gu-Min

    2014-01-01

    We present a composite vector selection method for an effective electronic nose system that performs well even in noisy environments. Each composite vector generated from an electronic nose data sample is evaluated by computing its discriminant distance. By quantitatively measuring the amount of discriminative information in each composite vector, composite vectors containing informative variables can be distinguished, and the final composite features for odor classification are extracted from the selected composite vectors. Using only the informative composite vectors also helps extract better composite features than using all of the generated composite vectors. Experimental results with different volatile organic compound data show that the proposed system has good classification performance even in a noisy environment compared to other methods. PMID:24747735

  2. An efficient classification method based on principal component and sparse representation.

    PubMed

    Zhai, Lin; Fu, Shujun; Zhang, Caiming; Liu, Yunxian; Wang, Lu; Liu, Guohua; Yang, Mingqiang

    2016-01-01

    As an important application of optical imaging, palmprint recognition is affected by many unfavorable factors. An effective fusion of blockwise bi-directional two-dimensional principal component analysis and grouping sparse classification is presented. Dimension reduction and normalization are implemented by the blockwise bi-directional two-dimensional principal component analysis of palmprint images to extract feature matrices, which are assembled into an overcomplete dictionary for sparse classification. A subspace orthogonal matching pursuit algorithm is designed to solve the grouping sparse representation. Finally, the classification result is obtained by comparing the residuals between the test and reconstructed images. Experiments carried out on a palmprint database show that this method has better robustness against position and illumination changes in palmprint images and achieves a higher palmprint recognition rate.
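A hedged sketch of residual-based sparse classification: each class keeps a small dictionary of normalized training vectors, a test vector is coded by its best single atom per class, and the class with the smallest reconstruction residual wins. A one-atom matching-pursuit step stands in for the paper's grouping subspace orthogonal matching pursuit, and the vectors are toy stand-ins for 2D-PCA palmprint feature matrices:

```python
def normalize(v):
    n = sum(x * x for x in v) ** 0.5
    return [x / n for x in v]

def class_residual(x, atoms):
    """Smallest residual norm after projecting x onto any one atom."""
    best = None
    for a in map(normalize, atoms):
        coef = sum(xi * ai for xi, ai in zip(x, a))
        r = sum((xi - coef * ai) ** 2 for xi, ai in zip(x, a)) ** 0.5
        best = r if best is None or r < best else best
    return best

# Invented per-class dictionaries of training feature vectors.
dictionary = {
    "palm_A": [[1.0, 0.1, 0.0], [0.9, 0.2, 0.1]],
    "palm_B": [[0.0, 1.0, 0.9], [0.1, 0.9, 1.0]],
}

def classify(x):
    """Assign the class whose dictionary reconstructs x with least residual."""
    return min(dictionary, key=lambda c: class_residual(x, dictionary[c]))
```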

  3. Localizing semantic interference from distractor sounds in picture naming: A dual-task study.

    PubMed

    Mädebach, Andreas; Kieseler, Marie-Luise; Jescheniak, Jörg D

    2017-10-13

    In this study we explored the locus of semantic interference in a novel picture-sound interference task in which participants name pictures while ignoring environmental distractor sounds. In a previous study using this task (Mädebach, Wöhner, Kieseler, & Jescheniak, in Journal of Experimental Psychology: Human Perception and Performance, 43, 1629-1646, 2017), we showed that semantically related distractor sounds (e.g., a BARKING dog) interfere with a picture-naming response (e.g., "horse") more strongly than unrelated distractor sounds do (e.g., a DRUMMING drum). In the experiment reported here, we employed the psychological refractory period (PRP) approach to explore the locus of this effect. We combined a geometric form classification task (square vs. circle; Task 1) with the picture-sound interference task (Task 2). The stimulus onset asynchrony (SOA) between the tasks was systematically varied (0 vs. 500 ms). There were three central findings. First, the semantic interference effect from distractor sounds was replicated. Second, picture naming (in Task 2) was slower with the short than with the long task SOA. Third, both effects were additive; that is, the semantic interference effects were of similar magnitude at both task SOAs. This suggests that the interference arises during response selection or later stages, not during early perceptual processing. This finding corroborates the theory that semantic interference from distractor sounds reflects a competitive selection mechanism in word production.

  4. A Retrospective Examination of Feline Leukemia Subgroup Characterization: Viral Interference Assays to Deep Sequencing.

    PubMed

    Chiu, Elliott S; Hoover, Edward A; VandeWoude, Sue

    2018-01-10

    Feline leukemia virus (FeLV) was the first feline retrovirus discovered, and is associated with multiple fatal disease syndromes in cats, including lymphoma. The original research conducted on FeLV employed classical virological techniques. As methods have evolved to allow FeLV genetic characterization, investigators have continued to unravel the molecular pathology associated with this fascinating agent. In this review, we discuss how FeLV classification, transmission, and disease-inducing potential have been defined sequentially by viral interference assays, Sanger sequencing, PCR, and next-generation sequencing. In particular, we highlight the influences of endogenous FeLV and host genetics that represent FeLV research opportunities on the near horizon.

  5. A Machine Learning-based Method for Question Type Classification in Biomedical Question Answering.

    PubMed

    Sarrouti, Mourad; Ouatik El Alaoui, Said

    2017-05-18

    Biomedical question type classification is one of the important components of an automatic biomedical question answering system. The performance of the latter depends directly on the performance of its biomedical question type classification system, which assigns a category to each question in order to determine the appropriate answer extraction algorithm. This study aims to automatically classify biomedical questions into one of four categories: (1) yes/no, (2) factoid, (3) list, and (4) summary. In this paper, we propose a biomedical question type classification method based on machine learning approaches to automatically assign a category to a biomedical question. First, we extract features from biomedical questions using the proposed handcrafted lexico-syntactic patterns. Then, we feed these features to machine-learning algorithms. Finally, the class label is predicted using the trained classifiers. Experimental evaluations performed on large standard annotated datasets of biomedical questions, provided by the BioASQ challenge, demonstrated that our method exhibits significantly improved performance compared to four baseline systems. The proposed method achieves a roughly 10-point increase over the best baseline in terms of accuracy. Moreover, the obtained results show that using the handcrafted lexico-syntactic patterns as features for a support vector machine (SVM) leads to the highest accuracy of 89.40%. The proposed method can automatically classify BioASQ questions into one of the four categories: yes/no, factoid, list, and summary. Furthermore, the results demonstrate that our method produced the best classification performance compared to the four baseline systems.
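A hedged sketch of the feature side: a few handcrafted lexico-syntactic patterns turned into binary features for a downstream classifier (the paper feeds such features to an SVM). The patterns and the rule-based stand-in below are invented examples, not the authors' pattern set:

```python
import re

PATTERNS = {
    "starts_with_aux": re.compile(r"^(is|are|does|do|can)\b", re.I),
    "wh_word": re.compile(r"^(what|which|who|where|when|how)\b", re.I),
    "asks_for_list": re.compile(r"\b(list|enumerate|name (all|the))\b", re.I),
    "asks_definition": re.compile(r"\bwhat (is|are)\b", re.I),
}

def features(question):
    """Binary feature vector: does each pattern fire on the question?"""
    return {name: int(bool(p.search(question))) for name, p in PATTERNS.items()}

def rule_classify(question):
    """Tiny rule-based stand-in for the trained classifier."""
    f = features(question)
    if f["starts_with_aux"]:
        return "yes/no"
    if f["asks_for_list"]:
        return "list"
    if f["asks_definition"]:
        return "summary"
    return "factoid"
```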

  6. A hierarchical classification method for finger knuckle print recognition

    NASA Astrophysics Data System (ADS)

    Kong, Tao; Yang, Gongping; Yang, Lu

    2014-12-01

Finger knuckle print has recently emerged as an effective biometric technique. In this paper, we propose a hierarchical classification method for finger knuckle print recognition, rooted in traditional score-level fusion methods. In the proposed method, we first take the Gabor feature as the basic feature for finger knuckle print recognition, and then define a new decision rule based on a predefined threshold. Finally, the secondary speeded-up robust feature (SURF) is applied to those users who cannot be recognized by the basic feature. Extensive experiments are performed to evaluate the proposed method, and the experimental results show that it achieves promising performance.
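    The two-stage decision rule can be sketched as below. The scalar match scores and the two thresholds are illustrative assumptions; the Gabor and SURF feature extraction and matching are stubbed out entirely.

    ```python
    # Hypothetical sketch of a hierarchical (two-stage) matching rule:
    # accept on the basic (Gabor) score when it is confident enough,
    # otherwise fall back to the secondary (SURF) score.

    def hierarchical_match(basic_score: float, minor_score: float,
                           basic_threshold: float = 0.8,
                           minor_threshold: float = 0.6) -> bool:
        if basic_score >= basic_threshold:
            return True                      # basic feature suffices
        return minor_score >= minor_threshold  # fall back to minor feature

    print(hierarchical_match(0.9, 0.1))   # True: basic feature alone decides
    print(hierarchical_match(0.5, 0.7))   # True: minor feature rescues
    print(hierarchical_match(0.5, 0.3))   # False
    ```

    The point of the hierarchy is efficiency: the cheaper basic feature resolves most users, and the more expensive secondary feature is computed only for the remainder.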

  7. Numerical techniques for high-throughput reflectance interference biosensing

    NASA Astrophysics Data System (ADS)

    Sevenler, Derin; Ünlü, M. Selim

    2016-06-01

We have developed a robust and rapid computational method for processing the raw spectral data collected from thin-film optical interference biosensors. We have applied this method to Interference Reflectance Imaging Sensor (IRIS) measurements and observed a 10,000-fold improvement in processing time, unlocking a variety of clinical and scientific applications. Interference biosensors have advantages over similar technologies in certain applications, for example, highly multiplexed measurements of molecular kinetics. However, processing raw IRIS data into useful measurements has been prohibitively time-consuming for high-throughput studies. Here we describe the implementation of a lookup table (LUT) technique that provides accurate results in far less time than naive methods. We also discuss an additional benefit: the LUT method can be used with a wider range of interference layer thicknesses and with experimental configurations that are incompatible with methods requiring a fit of the spectral response.
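    A minimal sketch of the LUT idea, under a toy two-beam interference model (not the actual IRIS optical model): the spectral response is precomputed once over the plausible thickness range, so per-pixel retrieval becomes a nearest-neighbour lookup instead of an iterative fit.

    ```python
    import math

    WAVELENGTHS = [500 + 5 * i for i in range(40)]  # nm, toy sampling grid

    def model_spectrum(thickness_nm, n_film=1.46):
        # Toy reflectance: cos^2 of the round-trip phase in the film.
        return [math.cos(2 * math.pi * n_film * thickness_nm / wl) ** 2
                for wl in WAVELENGTHS]

    # Precompute the LUT once over the thickness range of interest ...
    LUT = {t: model_spectrum(t) for t in range(0, 201)}  # 0-200 nm, 1 nm steps

    def lookup_thickness(measured):
        # ... then retrieval is a nearest-neighbour search, not a fit.
        def sse(spec):
            return sum((m - s) ** 2 for m, s in zip(measured, spec))
        return min(LUT, key=lambda t: sse(LUT[t]))

    print(lookup_thickness(model_spectrum(120)))  # → 120
    ```

    The speed-up comes from amortisation: the expensive forward model is evaluated once per table entry rather than once per fit iteration per pixel.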

  8. Testing Multivariate Adaptive Regression Splines (MARS) as a Method of Land Cover Classification of TERRA-ASTER Satellite Images.

    PubMed

    Quirós, Elia; Felicísimo, Angel M; Cuartero, Aurora

    2009-01-01

    This work proposes a new method to classify multi-spectral satellite images based on multivariate adaptive regression splines (MARS) and compares this classification system with the more common parallelepiped and maximum likelihood (ML) methods. We apply the classification methods to the land cover classification of a test zone located in southwestern Spain. The basis of the MARS method and its associated procedures are explained in detail, and the area under the ROC curve (AUC) is compared for the three methods. The results show that the MARS method provides better results than the parallelepiped method in all cases, and it provides better results than the maximum likelihood method in 13 cases out of 17. These results demonstrate that the MARS method can be used in isolation or in combination with other methods to improve the accuracy of soil cover classification. The improvement is statistically significant according to the Wilcoxon signed rank test.

  9. Land Covers Classification Based on Random Forest Method Using Features from Full-Waveform LIDAR Data

    NASA Astrophysics Data System (ADS)

    Ma, L.; Zhou, M.; Li, C.

    2017-09-01

In this study, a Random Forest (RF) based land cover classification method is presented to predict the types of land cover in the Miyun area. The returned full waveforms, acquired by a LiteMapper 5600 airborne LiDAR system, were processed through waveform filtering, waveform decomposition, and feature extraction. The commonly used features of distance, intensity, full width at half maximum (FWHM), skewness, and kurtosis were extracted. These waveform features were used as attributes of the training data for generating the RF prediction model, which was applied to predict the types of land cover in the Miyun area as trees, buildings, farmland, and ground. The classification results for these four types of land cover were evaluated against ground truth information acquired from CCD image data of the same region. The RF classification results were compared with those of an SVM method and showed better performance: the RF classification accuracy reached 89.73% and the Kappa coefficient was 0.8631.
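    The waveform feature extraction step can be sketched as below. The amplitude-weighted moment definitions and the discrete FWHM estimate are plausible choices for decomposed return pulses, not necessarily the exact formulas used in the paper.

    ```python
    import math

    def waveform_features(samples):
        """Amplitude-weighted moments and FWHM of one return pulse."""
        total = sum(samples)
        mean = sum(i * a for i, a in enumerate(samples)) / total
        var = sum(a * (i - mean) ** 2 for i, a in enumerate(samples)) / total
        sd = math.sqrt(var)
        skew = sum(a * ((i - mean) / sd) ** 3
                   for i, a in enumerate(samples)) / total
        kurt = sum(a * ((i - mean) / sd) ** 4
                   for i, a in enumerate(samples)) / total
        # Discrete FWHM: span of samples at or above half the peak amplitude.
        half = max(samples) / 2
        above = [i for i, a in enumerate(samples) if a >= half]
        return {"intensity": max(samples), "fwhm": above[-1] - above[0],
                "skewness": skew, "kurtosis": kurt}

    # A symmetric Gaussian-like pulse should have skewness ≈ 0.
    pulse = [math.exp(-((i - 10) ** 2) / 8.0) for i in range(21)]
    print(waveform_features(pulse))
    ```

    Vectors of such per-echo features are what the RF model consumes as training attributes.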

  10. [Spatial domain display for interference image dataset].

    PubMed

    Wang, Cai-Ling; Li, Yu-Shan; Liu, Xue-Bin; Hu, Bing-Liang; Jing, Juan-Juan; Wen, Jia

    2011-11-01

Visualization of imaging interferometer data is urgently required by users for image interpretation and information extraction. However, conventional research on visualization focuses only on spectral image datasets in the spectral domain, so the quick display of an interference spectral image dataset remains a bottleneck in interference image processing. The conventional approach to visualizing an interference dataset applies a classical spectral image display method after a Fourier transformation. In the present paper, the problem of quickly viewing interferometer images in the image domain is addressed and an algorithm is proposed that simplifies the matter. The Fourier transformation is an obstacle because its computation time is large, and the situation deteriorates further as the dataset size increases. The proposed algorithm, named interference weighted envelopes, frees the dataset from this transformation. The authors choose three interference weighted envelopes based, respectively, on the Fourier transformation, the features of the interference data, and the human visual system. A comparison of the proposed and conventional methods shows a large difference in display time.

  11. An inter-comparison of similarity-based methods for organisation and classification of groundwater hydrographs

    NASA Astrophysics Data System (ADS)

    Haaf, Ezra; Barthel, Roland

    2018-04-01

Classification and similarity-based methods, which have recently received major attention in the field of surface water hydrology, notably through the PUB (prediction in ungauged basins) initiative, have not yet been applied to groundwater systems. However, it can be hypothesised that the principle of "similar systems responding similarly to similar forcing" applies in subsurface hydrology as well. One fundamental prerequisite for testing this hypothesis, and eventually for applying the principle to make "predictions for ungauged groundwater systems", is an efficient method for quantifying the similarity of groundwater system responses, i.e. groundwater hydrographs. In this study, a large, spatially extensive, and geologically and geomorphologically diverse dataset from Southern Germany and Western Austria was used to test and compare a set of 32 grouping methods, which have previously only been used individually in local-scale studies. The resulting groupings are compared to a heuristic visual classification, which serves as a baseline. A performance ranking of these classification methods is carried out and differences in the homogeneity of the grouping results are shown, whereby selected groups are related to hydrogeological indices and geological descriptors. This exploratory empirical study shows that the choice of grouping method has a large impact on the distribution of objects within groups, as well as on the homogeneity of the patterns captured in groups. The study provides a comprehensive overview of a large number of grouping methods, which can guide researchers attempting similarity-based groundwater hydrograph classification.
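    One plausible similarity measure of the kind such studies compare is a correlation-based distance between two equally sampled hydrographs. The measure and the synthetic series below are illustrative assumptions, not one of the paper's 32 methods specifically.

    ```python
    import statistics

    def pearson(x, y):
        mx, my = statistics.fmean(x), statistics.fmean(y)
        num = sum((a - mx) * (b - my) for a, b in zip(x, y))
        den = (sum((a - mx) ** 2 for a in x)
               * sum((b - my) ** 2 for b in y)) ** 0.5
        return num / den

    def hydrograph_distance(x, y):
        # 0 = identical response shape, 2 = perfectly opposite response.
        return 1.0 - pearson(x, y)

    well_a = [1.0, 2.0, 3.0, 2.0, 1.0]
    well_b = [2.0, 4.0, 6.0, 4.0, 2.0]   # same shape, different amplitude
    print(hydrograph_distance(well_a, well_b))  # ≈ 0.0
    ```

    A correlation-based distance deliberately ignores amplitude, grouping wells by the shape of their response to forcing rather than by absolute head fluctuations; other grouping methods trade that choice differently.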

  12. FastICA peel-off for ECG interference removal from surface EMG.

    PubMed

    Chen, Maoqi; Zhang, Xu; Chen, Xiang; Zhu, Mingxing; Li, Guanglin; Zhou, Ping

    2016-06-13

Multi-channel recordings of surface electromyographic (EMG) signals are very likely to be contaminated by electrocardiographic (ECG) interference, specifically when the surface electrodes are placed on muscles close to the heart. A novel fast independent component analysis (FastICA) based peel-off method is presented to remove ECG interference contaminating multi-channel surface EMG signals. Although demonstrating spatial variability in waveform shape, the ECG interference in different channels shares the same firing instants. Using the firing information estimated by FastICA, the ECG interference can be separated from the surface EMG by a "peel-off" procedure. The performance of the method was quantified with synthetic signals constructed by combining a series of experimentally recorded "clean" surface EMG and "pure" ECG interference. It was demonstrated that the new method can remove ECG interference efficiently with little distortion to surface EMG amplitude and frequency. The proposed method was also validated using experimental surface EMG signals contaminated by ECG interference. The proposed FastICA peel-off method can be used as a new and practical solution for eliminating ECG interference from multi-channel EMG recordings.
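    The "peel-off" step can be sketched as spike-triggered template subtraction: given the common firing instants (estimated with FastICA in the paper, but supplied directly here), each channel's ECG waveform is estimated by averaging the segments around the instants and then subtracted. This is a simplified illustration, not the authors' implementation.

    ```python
    def peel_off_ecg(signal, firing_instants, half_width):
        # Build the per-channel ECG template by spike-triggered averaging.
        width = 2 * half_width + 1
        template = [0.0] * width
        for t in firing_instants:
            for k in range(width):
                template[k] += signal[t - half_width + k]
        template = [v / len(firing_instants) for v in template]
        # Subtract ("peel off") the template at every firing instant.
        cleaned = list(signal)
        for t in firing_instants:
            for k in range(width):
                cleaned[t - half_width + k] -= template[k]
        return cleaned

    # Demo: pure ECG spikes on an otherwise silent channel peel off to zero.
    ecg_shape = [0.2, 1.0, 0.2]
    sig = [0.0] * 50
    for t in (10, 25, 40):
        for k, v in enumerate(ecg_shape):
            sig[t - 1 + k] += v
    clean = peel_off_ecg(sig, [10, 25, 40], half_width=1)
    print(max(abs(x) for x in clean))  # ≈ 0
    ```

    With real data the averaging also suppresses the (zero-mean, uncorrelated) EMG contribution to the template, which is why the subtraction distorts the EMG so little.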

  13. A novel method to guide classification of para swimmers with limb deficiency.

    PubMed

    Hogarth, Luke; Payton, Carl; Van de Vliet, Peter; Connick, Mark; Burkett, Brendan

    2018-05-30

The International Paralympic Committee has directed the International Federations that govern Para sports to develop evidence-based classification systems. This study defined the impact of limb deficiency impairment on 100 m freestyle performance to guide an evidence-based classification system in Para swimming, which will be implemented following the 2020 Tokyo Paralympic Games. Impairment data and competitive race performances of 90 international swimmers with limb deficiency were collected. Ensemble partial least squares regression established the relationship between relative limb length measures and competitive 100 m freestyle performance. The model explained 80% of the variance in 100 m freestyle performance and found hand length and forearm length to be the most important predictors of performance. Based on the results of this model, Para swimmers were clustered into four-, five-, six- and seven-class structures using nonparametric kernel density estimations. The validity of these classification structures, and their effectiveness relative to the current classification system, were examined by establishing within-class variations in 100 m freestyle performance and differences between adjacent classes. The derived classification structures were found to be more effective than the current classification based on these criteria. This study provides a novel method that can be used to improve the objectivity and transparency of decision-making in Para sport classification. Expert consensus from experienced coaches, Para swimmers, classifiers, and sport science and medicine personnel will benefit the translation of these findings into a revised classification system that is accepted by the Para swimming community. This article is protected by copyright. All rights reserved.

  14. a Data Field Method for Urban Remotely Sensed Imagery Classification Considering Spatial Correlation

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Qin, K.; Zeng, C.; Zhang, E. B.; Yue, M. X.; Tong, X.

    2016-06-01

Spatial correlation between pixels is important information for remotely sensed imagery classification. The data field method and spatial autocorrelation statistics have been used to describe and model the spatial information of local pixels. The original data field method can represent the spatial interactions of neighbourhood pixels effectively. However, its focus on measuring the grey-level change between the central pixel and the neighbourhood pixels exaggerates the contribution of the central pixel to the whole local window. Geary's C has also been shown to characterise and quantify well the spatial correlation between each pixel and its neighbourhood pixels, but the extracted objects are badly delineated, with a distracting salt-and-pepper effect of isolated misclassified pixels. To correct this defect, we introduce the data field method for filtering and noise limitation. Moreover, the original data field method is enhanced by treating each pixel in the window as the central pixel when computing the statistical characteristics between it and its neighbourhood pixels. The last step employs a support vector machine (SVM) for the classification of multiple features (e.g. the spectral feature and the spatial correlation feature). To validate the effectiveness of the developed method, experiments are conducted on different remotely sensed images containing multiple complex object classes. The results show that the developed method outperforms the traditional method in terms of classification accuracy.

  15. An NCME Instructional Module on Data Mining Methods for Classification and Regression

    ERIC Educational Resources Information Center

    Sinharay, Sandip

    2016-01-01

    Data mining methods for classification and regression are becoming increasingly popular in various scientific fields. However, these methods have not been explored much in educational measurement. This module first provides a review, which should be accessible to a wide audience in education measurement, of some of these methods. The module then…

  16. Land Cover Analysis by Using Pixel-Based and Object-Based Image Classification Method in Bogor

    NASA Astrophysics Data System (ADS)

    Amalisana, Birohmatin; Rokhmatullah; Hernina, Revi

    2017-12-01

The advantage of image classification is that it provides information about the earth's surface, such as land cover and its time-series changes. Nowadays, pixel-based image classification is commonly performed with a variety of algorithms, such as minimum distance, parallelepiped, maximum likelihood, and Mahalanobis distance. On the other hand, land cover classification can also be obtained by object-based image classification, which uses image segmentation driven by parameters such as scale, form, colour, smoothness, and compactness. This research aims to compare the land cover classification results, and the changes they detect, between the parallelepiped pixel-based method and the object-based method. The study area is Bogor, observed over a 20-year period from 1996 to 2016. This region is a rapidly and continuously developing urban area, so its time-series land cover information is of particular interest.

  17. Interference Mitigation Effects on Synthetic Aperture Radar Coherent Data Products

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Musgrove, Cameron

    2014-05-01

For synthetic aperture radar image products, interference can degrade the quality of the images, while techniques to mitigate the interference also reduce image quality. Usually the radar system designer tries to balance the amount of mitigation against the amount of interference to optimize image quality. This may work well in many situations, but coherent data products derived from the image products are more sensitive than the human eye to distortions caused by interference and by its mitigation. This dissertation examines the effect that interference, and the mitigation of interference, has upon coherent data products. An improvement to the standard notch mitigation, called the equalization notch, is introduced. Other methods are suggested to mitigate interference while improving the quality of coherent data products over existing methods.

  18. Comparison of Single and Multi-Scale Method for Leaf and Wood Points Classification from Terrestrial Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Wei, Hongqiang; Zhou, Guiyun; Zhou, Junjie

    2018-04-01

The classification of leaf and wood points is an essential preprocessing step for extracting inventory measurements and canopy characterizations of trees from terrestrial laser scanning (TLS) data. The geometry-based approach is one of the most widely used classification methods. In geometry-based methods, it is common practice to extract salient features at one single scale before the features are used for classification, and it remains unclear how the scale or scales used affect classification accuracy and efficiency. To assess the scale effect on classification accuracy and efficiency, we extracted single-scale and multi-scale salient features from the point clouds of two oak trees of different sizes and classified the points as leaf or wood. Our experimental results show that the balanced accuracy of the multi-scale method is higher than the average balanced accuracy of the single-scale method by about 10% for both trees. The average speed-up ratio of the single-scale classifiers over the multi-scale classifier is higher than 30 for each tree.

  19. A Biochar Classification System and Associated Test Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Camps-Arbestain, Marta; Amonette, James E.; Singh, Balwant

    2015-02-18

In this chapter, a biochar classification system related to its use as a soil amendment is proposed. This document builds upon previous work and constrains its scope to materials with properties that satisfy the criteria for biochar as defined by either the International Biochar Initiative (IBI) Biochar Standards or the European Biochar Community (EBC) Standards, and it is intended to minimise the need for testing beyond that required by the above-mentioned standards. The classification system is envisioned to enable stakeholders and commercial entities to (i) identify the most suitable biochar to fulfil the requirements of a particular soil and/or land use, and (ii) distinguish the application of biochar for specific niches (e.g., soilless agriculture). It is based on the best current knowledge, and the intention is to periodically review and update the document as new data and knowledge become available in the scientific literature. The main thrust of this classification system is the direct or indirect beneficial effects that biochar provides when applied to soil. We have classified the potential beneficial effects of biochar application to soils into five categories with their corresponding classes, where applicable: (i) carbon (C) storage value, (ii) fertiliser value, (iii) liming value, (iv) particle size, and (v) use in soilless agriculture. A summary of recommended test methods is provided at the end of the chapter.

  20. Data preprocessing methods of FT-NIR spectral data for the classification cooking oil

    NASA Astrophysics Data System (ADS)

    Ruah, Mas Ezatul Nadia Mohd; Rasaruddin, Nor Fazila; Fong, Sim Siong; Jaafar, Mohd Zuli

    2014-12-01

This work describes data pre-processing methods for FT-NIR spectroscopy datasets of cooking oil and its quality parameters using chemometrics. Pre-processing of near-infrared (NIR) spectral data has become an integral part of chemometric modelling. Hence, this work investigates the utility and effectiveness of pre-processing algorithms, namely row scaling, column scaling, and single scaling with Standard Normal Variate (SNV). The combinations of these scaling methods affect exploratory analysis and classification via the Principal Component Analysis (PCA) plot. The samples were divided into palm oil and non-palm cooking oil. The classification model was built using FT-NIR cooking oil spectra in absorbance mode over the range 4000 cm-1 to 14000 cm-1. A Savitzky-Golay derivative was applied before developing the classification model. The data were then separated into a training set and a test set using the Duplex method, with the size of each class kept equal to 2/3 of the class with the minimum number of samples. The t-statistic was then employed as a variable selection method to determine which variables are significant for the classification models. The data pre-processing was evaluated using the modified silhouette width (mSW), PCA, and the percentage correctly classified (%CC). The results show that different pre-processing strategies lead to substantial differences in model performance; the effects of row scaling, column standardisation, and single scaling with SNV are indicated by mSW and %CC. For the two-PC model, all five classifiers gave high %CC except Quadratic Distance Analysis.
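    Standard Normal Variate (SNV), one of the scaling steps named above, is simple to state: each spectrum (row) is centred on its own mean and divided by its own standard deviation, which removes multiplicative scatter differences between samples. A minimal sketch:

    ```python
    import statistics

    def snv(spectrum):
        """Row-wise Standard Normal Variate scaling of one spectrum."""
        mean = statistics.fmean(spectrum)
        sd = statistics.stdev(spectrum)
        return [(x - mean) / sd for x in spectrum]

    spec = [1.0, 2.0, 3.0, 4.0, 5.0]   # toy absorbance values
    print(snv(spec))
    ```

    After SNV every spectrum has zero mean and unit standard deviation, so subsequent column scaling or PCA compares spectral shapes rather than overall intensity levels.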

  1. Classification methods to detect sleep apnea in adults based on respiratory and oximetry signals: a systematic review.

    PubMed

    Uddin, M B; Chow, C M; Su, S W

    2018-03-26

    Sleep apnea (SA), a common sleep disorder, can significantly decrease the quality of life, and is closely associated with major health risks such as cardiovascular disease, sudden death, depression, and hypertension. The normal diagnostic process of SA using polysomnography is costly and time consuming. In addition, the accuracy of different classification methods to detect SA varies with the use of different physiological signals. If an effective, reliable, and accurate classification method is developed, then the diagnosis of SA and its associated treatment will be time-efficient and economical. This study aims to systematically review the literature and present an overview of classification methods to detect SA using respiratory and oximetry signals and address the automated detection approach. Sixty-two included studies revealed the application of single and multiple signals (respiratory and oximetry) for the diagnosis of SA. Both airflow and oxygen saturation signals alone were effective in detecting SA in the case of binary decision-making, whereas multiple signals were good for multi-class detection. In addition, some machine learning methods were superior to the other classification methods for SA detection using respiratory and oximetry signals. To deal with the respiratory and oximetry signals, a good choice of classification method as well as the consideration of associated factors would result in high accuracy in the detection of SA. An accurate classification method should provide a high detection rate with an automated (independent of human action) analysis of respiratory and oximetry signals. Future high-quality automated studies using large samples of data from multiple patient groups or record batches are recommended.

  2. Novel high/low solubility classification methods for new molecular entities.

    PubMed

    Dave, Rutwij A; Morris, Marilyn E

    2016-09-10

This research describes a rapid solubility classification approach that could be used in the discovery and development of new molecular entities (NMEs). Compounds (N = 635) were divided into two groups based on information available in the literature: high solubility (BDDCS/BCS 1/3) and low solubility (BDDCS/BCS 2/4). We established decision rules for determining solubility classes using measured log solubility in molar units (MLogSM) or measured solubility (MSol) in mg/mL units. ROC curve analysis was applied to determine statistically significant threshold values of MSol and MLogSM. Results indicated that NMEs with MLogSM > -3.05 or MSol > 0.30 mg/mL will have ≥85% probability of being highly soluble, and NMEs with MLogSM ≤ -3.05 or MSol ≤ 0.30 mg/mL will have ≥85% probability of being poorly soluble. When comparing solubility classification using the threshold values of MLogSM or MSol with BDDCS, we were able to correctly classify 85% of compounds. We also evaluated the solubility classification of an independent set of 108 orally administered drugs using MSol (0.30 mg/mL); our method correctly classified 81% and 95% of compounds into the high and low solubility classes, respectively. The high/low solubility classification using MLogSM or MSol is novel and independent of the traditionally used dose number criteria. Copyright © 2016 Elsevier B.V. All rights reserved.
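    The reported decision rule reduces to a threshold comparison. The sketch below encodes the paper's thresholds (MLogSM -3.05, MSol 0.30 mg/mL); the function name and interface are illustrative, not from the paper.

    ```python
    MSOL_THRESHOLD = 0.30      # mg/mL
    MLOGSM_THRESHOLD = -3.05   # log molar solubility

    def solubility_class(msol_mg_ml=None, mlogsm=None):
        """Classify an NME as 'high' or 'low' solubility from either measure."""
        if msol_mg_ml is not None:
            return "high" if msol_mg_ml > MSOL_THRESHOLD else "low"
        if mlogsm is not None:
            return "high" if mlogsm > MLOGSM_THRESHOLD else "low"
        raise ValueError("need either MSol or MLogSM")

    print(solubility_class(msol_mg_ml=1.2))   # high
    print(solubility_class(mlogsm=-4.0))      # low
    ```

    Either measurement suffices on its own, which is the practical appeal of the rule: no dose information is needed, unlike the traditional dose number criteria.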

  3. Application of statistical classification methods for predicting the acceptability of well-water quality

    NASA Astrophysics Data System (ADS)

    Cameron, Enrico; Pilla, Giorgio; Stella, Fabio A.

    2018-06-01

    The application of statistical classification methods is investigated—in comparison also to spatial interpolation methods—for predicting the acceptability of well-water quality in a situation where an effective quantitative model of the hydrogeological system under consideration cannot be developed. In the example area in northern Italy, in particular, the aquifer is locally affected by saline water and the concentration of chloride is the main indicator of both saltwater occurrence and groundwater quality. The goal is to predict if the chloride concentration in a water well will exceed the allowable concentration so that the water is unfit for the intended use. A statistical classification algorithm achieved the best predictive performances and the results of the study show that statistical classification methods provide further tools for dealing with groundwater quality problems concerning hydrogeological systems that are too difficult to describe analytically or to simulate effectively.

  4. Recursive heuristic classification

    NASA Technical Reports Server (NTRS)

    Wilkins, David C.

    1994-01-01

The author will describe a new problem-solving approach called recursive heuristic classification, whereby a subproblem of heuristic classification is itself formulated and solved by heuristic classification. This allows the construction of more knowledge-intensive classification programs in a way that yields a clean organization. Further, standard knowledge acquisition and learning techniques for heuristic classification can be used to create, refine, and maintain the knowledge base associated with the recursively called classification expert system. The method of recursive heuristic classification was used in the Minerva blackboard shell for heuristic classification. Minerva recursively calls itself every problem-solving cycle to solve the important blackboard scheduler task, which involves assigning a desirability rating to alternative problem-solving actions. Knowing these ratings is critical to the use of an expert system as a component of a critiquing or apprenticeship tutoring system. One innovation of this research is a method called dynamic heuristic classification, which allows selection among dynamically generated classification categories instead of requiring them to be pre-enumerated.

  5. Method of Grassland Information Extraction Based on Multi-Level Segmentation and Cart Model

    NASA Astrophysics Data System (ADS)

    Qiao, Y.; Chen, T.; He, J.; Wen, Q.; Liu, F.; Wang, Z.

    2018-04-01

It is difficult to extract grassland accurately with traditional classification methods, such as supervised methods based on pixels or objects. This paper proposes a new method combining multi-level segmentation with a CART (classification and regression tree) model. The multi-level segmentation, which combines multi-resolution segmentation with spectral difference segmentation, avoids the over- and under-segmentation seen in single-mode segmentation. The CART model was established based on the spectral characteristics and texture features extracted from training sample data. Xilinhot City in the Inner Mongolia Autonomous Region was chosen as the typical study area, and the proposed method was verified using visual interpretation results as an approximate truth value, together with a comparison against the nearest neighbor supervised classification method. The experimental results showed that the total classification precision and Kappa coefficient of the proposed method were 95% and 0.9, respectively, versus 80% and 0.56 for the nearest neighbor supervised classification method. These results suggest that the accuracy of the proposed classification is higher than that of the nearest neighbor supervised classification method. The experiment demonstrated that the proposed method is an effective means of extracting grassland information: it enhances the boundaries of the grassland classification and avoids restrictions imposed by the scale of grassland distribution. The method is also applicable to the extraction of grassland information in other regions with complicated spatial features, where it can effectively avoid interference from woodland, arable land, and water bodies.
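    The core CART step, choosing the split that minimises weighted Gini impurity, can be sketched on a single feature as below. Real CART recurses on both children over all features (spectral and texture, in this paper); the toy values here are invented.

    ```python
    def gini(labels):
        """Gini impurity of a set of class labels."""
        n = len(labels)
        return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

    def best_split(values, labels):
        """Best (threshold, weighted impurity) split on one feature."""
        best = (None, float("inf"))
        for threshold in sorted(set(values))[:-1]:
            left = [l for v, l in zip(values, labels) if v <= threshold]
            right = [l for v, l in zip(values, labels) if v > threshold]
            score = (len(left) * gini(left)
                     + len(right) * gini(right)) / len(labels)
            if score < best[1]:
                best = (threshold, score)
        return best

    # NDVI-like toy feature: grassland tends to score higher than bare soil.
    values = [0.12, 0.15, 0.18, 0.55, 0.60, 0.65]
    labels = ["soil", "soil", "soil", "grass", "grass", "grass"]
    print(best_split(values, labels))  # → (0.18, 0.0)
    ```

    In the full algorithm this search runs over every candidate feature at every node, and the tree grows until a stopping rule (depth, purity, or minimum node size) is met.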

  6. Neural correlates of the number–size interference task in children

    PubMed Central

    Kaufmann, Liane; Koppelstaetter, Florian; Siedentopf, Christian; Haala, Ilka; Haberlandt, Edda; Zimmerhackl, Lothar-Bernd; Felber, Stefan; Ischebeck, Anja

    2010-01-01

In this functional magnetic resonance imaging study, 17 children were asked to make numerical and physical magnitude classifications while ignoring the other stimulus dimension (number-size interference task). Digit pairs were either incongruent (e.g., the numerically larger digit printed in the physically smaller font) or neutral. Generally, numerical magnitude interferes with font size (congruity effect). Moreover, numerically far digit pairs yield quicker responses than adjacent ones (distance effect). Behaviourally, robust distance and congruity effects were observed in both tasks. Imaging baseline contrasts revealed activations in frontal, parietal, occipital, and cerebellar areas bilaterally. In contrast to results usually reported for adults, smaller distances activated frontal, but not (intra-)parietal, areas in children. Congruity effects became significant only in physical comparisons. Thus, even with comparable behavioural performance, cerebral activation patterns may differ substantially between children and adults. PMID:16603917

  7. Fast-HPLC Fingerprinting to Discriminate Olive Oil from Other Edible Vegetable Oils by Multivariate Classification Methods.

    PubMed

    Jiménez-Carvelo, Ana M; González-Casado, Antonio; Pérez-Castaño, Estefanía; Cuadros-Rodríguez, Luis

    2017-03-01

    A new analytical method for the differentiation of olive oil from other vegetable oils using reversed-phase LC and applying chemometric techniques was developed. A 3 cm short column was used to obtain the chromatographic fingerprint of the methyl-transesterified fraction of each vegetable oil. The chromatographic analysis took only 4 min. The multivariate classification methods used were k-nearest neighbors, partial least-squares (PLS) discriminant analysis, one-class PLS, support vector machine classification, and soft independent modeling of class analogies. The discrimination of olive oil from other vegetable edible oils was evaluated by several classification quality metrics. Several strategies for the classification of the olive oil were used: one input-class, two input-class, and pseudo two input-class.

  8. A method for classification of multisource data using interval-valued probabilities and its application to HIRIS data

    NASA Technical Reports Server (NTRS)

    Kim, H.; Swain, P. H.

    1991-01-01

A method of classifying multisource data in remote sensing is presented. The proposed method considers each data source as an information source providing a body of evidence, represents statistical evidence by interval-valued probabilities, and uses Dempster's rule to integrate the information from the multiple data sources. The method is applied to the problem of ground-cover classification of multispectral data combined with digital terrain data such as elevation, slope, and aspect. It is then applied to simulated 201-band High Resolution Imaging Spectrometer (HIRIS) data by dividing the dimensionally huge data source into smaller, more manageable pieces based on global statistical correlation information. It produces higher classification accuracy than the maximum likelihood (ML) classification method when the Hughes phenomenon is apparent.
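    Dempster's rule of combination, the fusion step named above, can be sketched with point-valued (rather than interval-valued) mass functions. Mass functions are maps from sets of class labels to belief mass; the two hypothetical sources and their masses below are invented for illustration.

    ```python
    from itertools import product

    def dempster_combine(m1, m2):
        """Dempster's rule: intersect focal sets, renormalise away conflict."""
        combined, conflict = {}, 0.0
        for (a, ma), (b, mb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb    # mass assigned to the empty set
        norm = 1.0 - conflict          # renormalise by non-conflicting mass
        return {k: v / norm for k, v in combined.items()}

    # Hypothetical evidence from a spectral source and a terrain source.
    spectral = {frozenset({"forest"}): 0.6,
                frozenset({"forest", "crop"}): 0.4}
    terrain  = {frozenset({"forest"}): 0.5,
                frozenset({"crop"}): 0.3,
                frozenset({"forest", "crop"}): 0.2}
    fused = dempster_combine(spectral, terrain)
    print(fused[frozenset({"forest"})])  # ≈ 0.756
    ```

    Note how mass assigned to the non-singleton set {forest, crop} expresses the source's uncertainty; the interval-valued formulation in the paper generalises this further.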

  9. Comparative analysis of image classification methods for automatic diagnosis of ophthalmic images

    NASA Astrophysics Data System (ADS)

    Wang, Liming; Zhang, Kai; Liu, Xiyang; Long, Erping; Jiang, Jiewei; An, Yingying; Zhang, Jia; Liu, Zhenzhen; Lin, Zhuoling; Li, Xiaoyan; Chen, Jingjing; Cao, Qianzhong; Li, Jing; Wu, Xiaohang; Wang, Dongni; Li, Wangting; Lin, Haotian

    2017-01-01

There are many image classification methods, but it remains unclear which are most helpful for analyzing and intelligently identifying ophthalmic images. We select representative slit-lamp images, which illustrate the complexity of ocular images, as research material to compare image classification algorithms for diagnosing ophthalmic diseases. To facilitate this study, several feature extraction algorithms and classifiers are combined to automatically diagnose pediatric cataract on the same dataset, and their performance is then compared using multiple criteria. This comparative study reveals the general characteristics of the existing methods for automatic identification of ophthalmic images and provides new insights into their strengths and shortcomings. The best-performing methods (local binary pattern + SVM, wavelet transformation + SVM) achieve an average accuracy of 87% and can be adopted in specific situations to aid doctors in preliminary disease screening. Furthermore, methods requiring fewer computational resources and less time could be applied in remote places or on mobile devices to help individuals assess their own condition. In addition, this study may help accelerate the development of innovative approaches and the application of these methods to assist doctors in diagnosing ophthalmic disease.

  10. A chemometric method for correcting FTIR spectra of biomaterials for interference from water in KBr discs

    USDA-ARS?s Scientific Manuscript database

    FTIR analysis of solid biomaterials by the familiar KBr disc technique is very often frustrated by water interference in the important protein (amide I) and carbohydrate (hydroxyl) regions of their spectra. A method was therefore devised that overcomes the difficulty and measures FTIR spectra of so...

  11. A real-time method for autonomous passive acoustic detection-classification of humpback whales.

    PubMed

    Abbot, Ted A; Premus, Vincent E; Abbot, Philip A

    2010-05-01

    This paper describes a method for real-time, autonomous, joint detection-classification of humpback whale vocalizations. The approach adapts the spectrogram correlation method used by Mellinger and Clark [J. Acoust. Soc. Am. 107, 3518-3529 (2000)] for bowhead whale endnote detection to the humpback whale problem. The objective is the implementation of a system to determine the presence or absence of humpback whales with passive acoustic methods and to perform this classification with low false alarm rate in real time. Multiple correlation kernels are used due to the diversity of humpback song. The approach also takes advantage of the fact that humpbacks tend to vocalize repeatedly for extended periods of time, and identification is declared only when multiple song units are detected within a fixed time interval. Humpback whale vocalizations from Alaska, Hawaii, and Stellwagen Bank were used to train the algorithm. It was then tested on independent data obtained off Kaena Point, Hawaii in February and March of 2009. Results show that the algorithm successfully classified humpback whales autonomously in real time, with a measured probability of correct classification in excess of 74% and a measured probability of false alarm below 1%.
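The paper's decision logic, declaring identification only when multiple song units are detected within a fixed time interval, can be sketched as follows; the unit-count and window-length thresholds here are hypothetical placeholders, not the authors' tuned values:

```python
def classify_presence(detection_times, min_units=3, window=600.0):
    """Declare humpback presence only when at least `min_units` song-unit
    detections (times in seconds) fall within any `window`-second interval."""
    times = sorted(detection_times)
    for i in range(len(times)):
        j = i
        # count detections within `window` seconds of detection i
        while j < len(times) and times[j] - times[i] <= window:
            j += 1
        if j - i >= min_units:
            return True
    return False
```

Requiring several detections in a short interval is what suppresses isolated false alarms from single spurious kernel matches.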

  12. Analysis and application of classification methods of complex carbonate reservoirs

    NASA Astrophysics Data System (ADS)

    Li, Xiongyan; Qin, Ruibao; Ping, Haitao; Wei, Dan; Liu, Xiaomei

    2018-06-01

There are abundant carbonate reservoirs from the Cenozoic to the Mesozoic era in the Middle East. Owing to variations in sedimentary environment and diagenetic processes, several porosity types coexist in these carbonate reservoirs. As a result of the complex lithologies and pore types, as well as the impact of microfractures, the pore structure is very complicated, and it is therefore difficult to calculate reservoir parameters accurately. In order to evaluate carbonate reservoirs accurately, classification methods based on capillary pressure curves and on flow units are analyzed, building on pore structure evaluation. Although carbonate reservoirs can be classified using capillary pressure curves, the resulting relationship between porosity and permeability is not ideal. Classification based on flow units, by contrast, yields a high-precision functional relationship between porosity and permeability, so carbonate reservoirs can be quantitatively evaluated on the basis of flow-unit classification. In the dolomite reservoirs, the average absolute error of calculated permeability decreases from 15.13 to 7.44 mD. Similarly, the average absolute error of calculated permeability of limestone reservoirs is reduced from 20.33 to 7.37 mD. Only by accurately characterizing pore structures and classifying reservoir types can reservoir parameters be calculated accurately. Therefore, characterizing pore structures and classifying reservoir types are essential for the accurate evaluation of complex carbonate reservoirs in the Middle East.

  13. Misclassification Errors in Unsupervised Classification Methods. Comparison Based on the Simulation of Targeted Proteomics Data

    PubMed Central

    Andreev, Victor P; Gillespie, Brenda W; Helfand, Brian T; Merion, Robert M

    2016-01-01

Unsupervised classification methods are gaining acceptance in omics studies of complex common diseases, which are often vaguely defined and are likely collections of disease subtypes. Unsupervised classification based on the molecular signatures identified in omics studies has the potential to reflect molecular mechanisms of the subtypes of the disease and to lead to more targeted and successful interventions for the identified subtypes. Multiple classification algorithms exist, but none is ideal for all types of data. Importantly, there are no established methods to estimate sample size in unsupervised classification (unlike power analysis in hypothesis testing). Therefore, we developed a simulation approach allowing comparison of misclassification errors and estimation of the required sample size for a given effect size, number, and correlation matrix of the differentially abundant proteins in targeted proteomics studies. All the experiments were performed in silico. The simulated data imitated the data expected from the study of the plasma of patients with lower urinary tract dysfunction with the aptamer proteomics assay Somascan (SomaLogic Inc, Boulder, CO), which targeted 1129 proteins, including 330 involved in inflammation, 180 in stress response, 80 in aging, etc. Three popular clustering methods (hierarchical, k-means, and k-medoids) were compared. K-means clustering performed much better on the simulated data than the other two methods and enabled classification with a misclassification error below 5% in the simulated cohort of 100 patients, based on the molecular signatures of 40 differentially abundant proteins (effect size 1.5) from among the 1129-protein panel. PMID:27524871
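Lloyd's k-means algorithm, one of the three clustering methods compared, can be sketched in a few lines. The two well-separated synthetic clusters below are a stand-in for the simulated proteomic signatures, not the paper's actual simulation:

```python
import random

def kmeans(points, k, iters=100):
    """Plain Lloyd's algorithm on tuples; returns a cluster label per point."""
    # deterministic init: evenly spaced points taken from the data
    centers = [points[i * len(points) // k] for i in range(k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: nearest center by squared Euclidean distance
        for i, p in enumerate(points):
            labels[i] = min(range(k), key=lambda c: sum(
                (pi - ci) ** 2 for pi, ci in zip(p, centers[c])))
        # update step: recompute each center as the mean of its members
        new_centers = []
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                new_centers.append(tuple(sum(dim) / len(members)
                                         for dim in zip(*members)))
            else:
                new_centers.append(centers[c])
        if new_centers == centers:
            break
        centers = new_centers
    return labels

# two well-separated synthetic clusters standing in for protein signatures
rng = random.Random(1)
cluster_a = [(rng.gauss(0.0, 0.3), rng.gauss(0.0, 0.3)) for _ in range(50)]
cluster_b = [(rng.gauss(3.0, 0.3), rng.gauss(3.0, 0.3)) for _ in range(50)]
labels = kmeans(cluster_a + cluster_b, 2)
```

On such clean data any of the three methods recovers the partition; the paper's point is how their misclassification errors diverge as effect size shrinks and correlations grow.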

  14. Evaluation of gene expression classification studies: factors associated with classification performance.

    PubMed

    Novianti, Putri W; Roes, Kit C B; Eijkemans, Marinus J C

    2014-01-01

    Classification methods used in microarray studies for gene expression are diverse in the way they deal with the underlying complexity of the data, as well as in the technique used to build the classification model. The MAQC II study on cancer classification problems has found that performance was affected by factors such as the classification algorithm, cross validation method, number of genes, and gene selection method. In this paper, we study the hypothesis that the disease under study significantly determines which method is optimal, and that additionally sample size, class imbalance, type of medical question (diagnostic, prognostic or treatment response), and microarray platform are potentially influential. A systematic literature review was used to extract the information from 48 published articles on non-cancer microarray classification studies. The impact of the various factors on the reported classification accuracy was analyzed through random-intercept logistic regression. The type of medical question and method of cross validation dominated the explained variation in accuracy among studies, followed by disease category and microarray platform. In total, 42% of the between study variation was explained by all the study specific and problem specific factors that we studied together.

  15. New feature extraction method for classification of agricultural products from x-ray images

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.; Lee, Ha-Woon; Keagy, Pamela M.; Schatzki, Thomas F.

    1999-01-01

Classification of real-time x-ray images of randomly oriented touching pistachio nuts is discussed. The ultimate objective is the development of a system for automated non-invasive detection of defective product items on a conveyor belt. We discuss the extraction of new features that allow better discrimination between damaged and clean items. This feature extraction and classification stage is the new aspect of this paper; our new maximum representation and discriminating feature (MRDF) extraction method computes nonlinear features that are used as inputs to a new modified k nearest neighbor classifier. In this work the MRDF is applied to standard features. The MRDF is robust to various probability distributions of the input class and is shown to provide good classification and new ROC data.
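The classifier stage builds on k-nearest-neighbor voting. A minimal sketch of the standard (unmodified) k-NN rule on hypothetical two-feature items follows; the paper's modified classifier and MRDF features are not reproduced here:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Standard k-nearest-neighbor majority vote over (features, label) pairs."""
    nearest = sorted(train, key=lambda fl: math.dist(fl[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# hypothetical 2-D feature vectors for clean vs damaged product items
train = [((0.10, 0.20), 'clean'), ((0.20, 0.10), 'clean'), ((0.15, 0.25), 'clean'),
         ((0.90, 0.80), 'damaged'), ((0.80, 0.95), 'damaged'), ((0.85, 0.90), 'damaged')]
```

Because k-NN makes no distributional assumptions, it pairs naturally with nonlinear features such as the MRDF outputs described above.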

  16. Accessing High Spatial Resolution in Astronomy Using Interference Methods

    NASA Astrophysics Data System (ADS)

    Carbonel, Cyril; Grasset, Sébastien; Maysonnave, Jean

    2018-04-01

In astronomy, methods such as direct imaging or interferometry-based techniques (Michelson stellar interferometry, for example) are used for observations. A particular advantage of interferometry is that it permits greater spatial resolution than direct imaging with a single telescope, which is limited by diffraction owing to the aperture of the instrument, as shown by Rueckner et al. in a lecture demonstration. The focus of this paper, addressed to teachers and students in high schools and universities, is to present an application of interferometry in astronomy in an accessible way and to stress its benefit for resolution. To this end, very simple optical experiments are presented to explain all the concepts. We show how an interference pattern resulting from the combined signals of two telescopes allows us to measure the distance between two stars with a resolution beyond the diffraction limit. Finally, this work emphasizes the breathtaking resolution obtained in state-of-the-art instruments such as the VLTI (Very Large Telescope Interferometer).
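The resolution gain described above can be made concrete. A single telescope of aperture $D$ is diffraction-limited, while a two-telescope interferometer with baseline $B$ resolves angular structure on the scale of its fringe spacing:

```latex
\theta_{\text{single}} \simeq 1.22\,\frac{\lambda}{D},
\qquad
\theta_{\text{interf}} \simeq \frac{\lambda}{2B}.
```

For visible light ($\lambda \approx 550$ nm), an 8 m telescope gives $\theta_{\text{single}} \approx 8\times10^{-8}$ rad ($\approx 17$ mas), whereas a 130 m baseline, the order of the VLTI's longest baselines, gives $\theta_{\text{interf}} \approx 2\times10^{-9}$ rad ($\approx 0.4$ mas), roughly a forty-fold improvement without building a 130 m mirror.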

  17. Comparative study of SVM methods combined with voxel selection for object category classification on fMRI data.

    PubMed

    Song, Sutao; Zhan, Zhichao; Long, Zhiying; Zhang, Jiacai; Yao, Li

    2011-02-16

Support vector machine (SVM) has been widely used as an accurate and reliable method to decipher brain patterns from functional MRI (fMRI) data. Previous studies have not found a clear benefit for non-linear (polynomial kernel) SVM over linear SVM. Here, a more effective non-linear SVM using a radial basis function (RBF) kernel is compared with linear SVM. Unlike previous studies, which focused either on the evaluation of different types of SVM or on voxel selection methods alone, we investigated the overall performance of linear and RBF SVM for fMRI classification together with voxel selection schemes, in terms of classification accuracy and computation time. Six different voxel selection methods were employed to decide which voxels of fMRI data would be included in SVM classifiers with linear and RBF kernels in classifying 4-category objects. The overall performances of the voxel selection and classification methods were then compared. Results showed that: (1) voxel selection had an important impact on the classification accuracy of the classifiers: in a relatively low-dimensional feature space, RBF SVM outperformed linear SVM significantly, whereas in a relatively high-dimensional space, linear SVM performed better; (2) considering classification accuracy and computation time together, linear SVM with relatively more voxels as features and RBF SVM with a small set of voxels (after PCA) achieved better accuracy in less time. The present work provides the first empirical comparison of linear and RBF SVM in classification of fMRI data combined with voxel selection methods. Based on the findings, if only classification accuracy is of concern, RBF SVM with an appropriately small set of voxels and linear SVM with relatively more voxels are two suggested solutions; if computation time is also a concern, RBF SVM with a relatively small set of voxels, keeping part of the principal components as features, is the better choice.
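The trade-off reported above hinges on the kernel. Minimal definitions of the two kernels compared, on feature vectors represented as tuples, are:

```python
import math

def linear_kernel(x, y):
    """Inner product kernel used by a linear SVM."""
    return sum(xi * yi for xi, yi in zip(x, y))

def rbf_kernel(x, y, gamma=1.0):
    """Gaussian RBF kernel exp(-gamma * ||x - y||^2) used by a non-linear SVM."""
    sq_dist = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    return math.exp(-gamma * sq_dist)

# identical inputs give the RBF kernel's maximum value of 1.0
k_same = rbf_kernel((0.2, 0.4), (0.2, 0.4))
```

The linear kernel keeps the decision boundary a hyperplane in voxel space, while the RBF kernel implicitly maps into a much richer feature space, which is consistent with the study's finding that the RBF variant pays off mainly when the explicit voxel dimension has already been reduced.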

  18. Olfactory identification and Stroop interference converge in schizophrenia.

    PubMed Central

    Purdon, S E

    1998-01-01

OBJECTIVE: To test the discriminant validity of a model predicting a dissociation between measures of right and left frontal lobe function in people with schizophrenia. PARTICIPANTS: Twenty-one clinically stable outpatients with schizophrenia. INTERVENTIONS: Patients were administered the University of Pennsylvania Smell Identification Test (UPSIT), the Stroop Color-Word Test (Stroop), and the Positive and Negative Syndrome Scale (PANSS). OUTCOME MEASURES: Scores on these tests and relation among scores. RESULTS: There was a convergence of UPSIT and Stroop interference scores consistent with a common cerebral basis for limitations in olfactory identification and inhibition of distraction. There was also a divergence of UPSIT and Stroop reading scores suggesting that the olfactory identification limitation is distinct from a general limitation of attention or a dysfunction of the left dorsolateral prefrontal cortex. Most notable was the 81% classification convergence between the UPSIT and Stroop incongruous colour naming scores compared with the near-random 57% classification convergence of the UPSIT and Stroop reading scores. CONCLUSIONS: These data are consistent with a right orbitofrontal dysfunction in a subgroup of patients with schizophrenia, although the involvement of mesial temporal structures in both tasks must be ruled out with further study. A multifactorial model depicting contributions from diverse cerebral structures is required to describe the pathophysiology of schizophrenia. Valid behavioural methods for classifying suspected subgroups of patients with particular cerebral dysfunction would be of value in the construction of this model. PMID:9595890

  19. Application of different classification methods for litho-fluid facies prediction: a case study from the offshore Nile Delta

    NASA Astrophysics Data System (ADS)

    Aleardi, Mattia; Ciabarri, Fabio

    2017-10-01

    In this work we test four classification methods for litho-fluid facies identification in a clastic reservoir located in the offshore Nile Delta. The ultimate goal of this study is to find an optimal classification method for the area under examination. The geologic context of the investigated area allows us to consider three different facies in the classification: shales, brine sands and gas sands. The depth at which the reservoir zone is located (2300-2700 m) produces a significant overlap of the P- and S-wave impedances of brine sands and gas sands that makes discrimination between these two litho-fluid classes particularly problematic. The classification is performed on the feature space defined by the elastic properties that are derived from recorded reflection seismic data by means of amplitude versus angle Bayesian inversion. As classification methods we test both deterministic and probabilistic approaches: the quadratic discriminant analysis and the neural network methods belong to the first group, whereas the standard Bayesian approach and the Bayesian approach that includes a 1D Markov chain a priori model to constrain the vertical continuity of litho-fluid facies belong to the second group. The ability of each method to discriminate the different facies is evaluated both on synthetic seismic data (computed on the basis of available borehole information) and on field seismic data. The outcomes of each classification method are compared with the known facies profile derived from well log data and the goodness of the results is quantitatively evaluated using the so-called confusion matrix. The results show that all methods return vertical facies profiles in which the main reservoir zone is correctly identified. However, the consideration of as much prior information as possible in the classification process is the winning choice for deriving a reliable and physically plausible predicted facies profile.
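The confusion matrix used to score each classifier can be computed directly from paired true and predicted facies labels. A minimal sketch with made-up labels for the three facies follows:

```python
def confusion_matrix(true_labels, pred_labels, classes):
    """Rows index the true facies, columns the predicted facies."""
    idx = {c: i for i, c in enumerate(classes)}
    m = [[0] * len(classes) for _ in classes]
    for t, p in zip(true_labels, pred_labels):
        m[idx[t]][idx[p]] += 1
    return m

facies = ['shale', 'brine sand', 'gas sand']
# hypothetical well-log (true) vs predicted profiles, not the paper's data
true_profile = ['shale', 'gas sand', 'gas sand', 'brine sand']
pred_profile = ['shale', 'gas sand', 'brine sand', 'brine sand']
cm = confusion_matrix(true_profile, pred_profile, facies)
```

Off-diagonal mass in the brine-sand/gas-sand block is exactly the impedance-overlap problem the abstract describes.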

  20. Development of a classification method for a crack on a pavement surface images using machine learning

    NASA Astrophysics Data System (ADS)

    Hizukuri, Akiyoshi; Nagata, Takeshi

    2017-03-01

The purpose of this study is to develop a classification method for cracks in pavement surface images using machine learning, in order to reduce maintenance costs. Our database consists of 3500 pavement surface images, comprising 800 crack and 2700 normal pavement surface images. The pavement surface images are first decomposed into several sub-images using a discrete wavelet transform (DWT) decomposition. We then calculate the wavelet sub-band histogram from each sub-image at each level. A support vector machine (SVM) with the computed wavelet sub-band histograms is employed for distinguishing between crack and normal pavement surface images. The accuracies of the proposed classification method are 85.3% for crack and 84.4% for normal pavement images. The proposed classification method achieved high performance and would therefore be useful in maintenance inspection.
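One level of a 2-D Haar DWT and a sub-band histogram feature can be sketched as below. This is a generic illustration of the feature pipeline (averaging-form Haar filters, even image dimensions assumed), not the authors' exact multi-level configuration:

```python
def haar2d(img):
    """One level of the 2-D Haar DWT; img is a list of rows with even
    height and width. Returns the (LL, LH, HL, HH) sub-bands."""
    h, w = len(img), len(img[0])
    LL = [[0.0] * (w // 2) for _ in range(h // 2)]
    LH = [[0.0] * (w // 2) for _ in range(h // 2)]
    HL = [[0.0] * (w // 2) for _ in range(h // 2)]
    HH = [[0.0] * (w // 2) for _ in range(h // 2)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 4  # local average
            LH[i // 2][j // 2] = (a + b - c - d) / 4  # horizontal detail
            HL[i // 2][j // 2] = (a - b + c - d) / 4  # vertical detail
            HH[i // 2][j // 2] = (a - b - c + d) / 4  # diagonal detail
    return LL, LH, HL, HH

def subband_histogram(band, bins=8, lo=-1.0, hi=1.0):
    """Normalized histogram of sub-band coefficients, usable as an SVM feature."""
    counts = [0] * bins
    vals = [v for row in band for v in row]
    for v in vals:
        idx = min(bins - 1, max(0, int((v - lo) / (hi - lo) * bins)))
        counts[idx] += 1
    return [c / len(vals) for c in counts]

# a flat (crack-free) patch has zero energy in every detail sub-band
flat = [[0.5] * 4 for _ in range(4)]
LL, LH, HL, HH = haar2d(flat)
```

Cracks concentrate energy in the detail sub-bands, so their histograms shift away from the zero bin, which is the signal the SVM separates.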

  1. Elastic SCAD as a novel penalization method for SVM classification tasks in high-dimensional data.

    PubMed

    Becker, Natalia; Toedt, Grischa; Lichter, Peter; Benner, Axel

    2011-05-09

Classification and variable selection play an important role in knowledge discovery in high-dimensional data. Although Support Vector Machine (SVM) algorithms are among the most powerful classification and prediction methods, with a wide range of scientific applications, the SVM does not include automatic feature selection, and a number of feature selection procedures have therefore been developed. Regularisation approaches extend the SVM to a feature selection method in a flexible way using penalty functions like LASSO, SCAD and Elastic Net. We propose a novel penalty function for SVM classification tasks, Elastic SCAD, a combination of the SCAD and ridge penalties which overcomes the limitations of each penalty alone. Since SVM models are extremely sensitive to the choice of tuning parameters, we adopted an interval search algorithm, which finds a global optimal solution more rapidly and precisely than a fixed grid search. Feature selection methods with combined penalties (Elastic Net and Elastic SCAD SVMs) are more robust to a change of the model complexity than methods using single penalties. Our simulation study showed that the Elastic SCAD SVM outperformed the LASSO (L1) and SCAD SVMs. Moreover, the Elastic SCAD SVM provided sparser classifiers in terms of the median number of features selected than the Elastic Net SVM, and often predicted better than Elastic Net in terms of misclassification error. Finally, we applied the penalization methods described above to four publicly available breast cancer data sets. The Elastic SCAD SVM was the only method providing robust classifiers in both sparse and non-sparse situations. The proposed Elastic SCAD SVM algorithm provides the advantages of the SCAD penalty and at the same time avoids sparsity limitations for non-sparse data. We were the first to demonstrate that the integration of the interval search algorithm and penalized SVM classification techniques provides fast solutions for the optimization of tuning parameters. The penalized SVM

  2. Elastic SCAD as a novel penalization method for SVM classification tasks in high-dimensional data

    PubMed Central

    2011-01-01

Background Classification and variable selection play an important role in knowledge discovery in high-dimensional data. Although Support Vector Machine (SVM) algorithms are among the most powerful classification and prediction methods, with a wide range of scientific applications, the SVM does not include automatic feature selection, and a number of feature selection procedures have therefore been developed. Regularisation approaches extend the SVM to a feature selection method in a flexible way using penalty functions like LASSO, SCAD and Elastic Net. We propose a novel penalty function for SVM classification tasks, Elastic SCAD, a combination of the SCAD and ridge penalties which overcomes the limitations of each penalty alone. Since SVM models are extremely sensitive to the choice of tuning parameters, we adopted an interval search algorithm, which finds a global optimal solution more rapidly and precisely than a fixed grid search. Results Feature selection methods with combined penalties (Elastic Net and Elastic SCAD SVMs) are more robust to a change of the model complexity than methods using single penalties. Our simulation study showed that the Elastic SCAD SVM outperformed the LASSO (L1) and SCAD SVMs. Moreover, the Elastic SCAD SVM provided sparser classifiers in terms of the median number of features selected than the Elastic Net SVM, and often predicted better than Elastic Net in terms of misclassification error. Finally, we applied the penalization methods described above to four publicly available breast cancer data sets. The Elastic SCAD SVM was the only method providing robust classifiers in both sparse and non-sparse situations. Conclusions The proposed Elastic SCAD SVM algorithm provides the advantages of the SCAD penalty and at the same time avoids sparsity limitations for non-sparse data. We were the first to demonstrate that the integration of the interval search algorithm and penalized SVM classification techniques provides fast solutions on the optimization of tuning

  3. Correlates of birth asphyxia using two Apgar score classification methods.

    PubMed

    Olusanya, Bolajoko O; Solanke, Olumuyiwa A

    2010-01-01

Birth asphyxia is commonly indexed by low five-minute Apgar scores, especially in resource-constrained settings, but the impact of different classification thresholds on the associated risk factors has not been reported. The aim was to determine the potential impact of two classification methods of the five-minute Apgar score as a predictor of birth asphyxia. A cross-sectional study of preterm and term survivors in Lagos, Nigeria, was conducted in which antepartum and intrapartum factors associated with "very low" (0-3) or "intermediate" (4-6) five-minute Apgar scores were compared with correlates of low five-minute Apgar scores (0-6) based on multinomial and binary logistic regression analyses. Of the 4281 mother-infant pairs enrolled, 3377 (78.9%) were full-term and 904 (21.1%) preterm. Apgar scores were very low in 99 (2.3%) and intermediate in 1115 (26.0%). Antenatal care, premature rupture of membranes (PROM), hypertensive disorders and mode of delivery were associated with very low and intermediate Apgar scores in all infants. Additionally, parity, antepartum haemorrhage and prolonged/obstructed labour (PROL) were predictive in term infants, compared with maternal occupation and intrauterine growth restriction (IUGR) in preterm infants. Conversely, PROM in term infants and maternal occupation in preterm infants were not significantly associated with the composite low Apgar scores (0-6), while IUGR was associated in term infants. Predictors of birth asphyxia in preterm and term infants are likely to be affected by the Apgar score classification method adopted, and the clinical implications for optimal resuscitation practices merit attention in resource-constrained settings.

  4. Automated artery-venous classification of retinal blood vessels based on structural mapping method

    NASA Astrophysics Data System (ADS)

    Joshi, Vinayak S.; Garvin, Mona K.; Reinhardt, Joseph M.; Abramoff, Michael D.

    2012-03-01

Retinal blood vessels show morphologic modifications in response to various retinopathies. However, the specific responses exhibited by arteries and veins may provide more precise diagnostic information; e.g., diabetic retinopathy may be detected more accurately from venous dilatation than from average vessel dilatation. In order to analyze vessel-type-specific morphologic modifications, the classification of a vessel network into arteries and veins is required. We previously described a method for identification and separation of retinal vessel trees, i.e., structural mapping. We therefore propose an artery-venous classification based on structural mapping and on identification of color properties prominent to the vessel types. The mean and standard deviation of both green channel intensity and hue channel intensity are analyzed in a region of interest around each centerline pixel of a vessel. Using the vector of color properties extracted from it, each centerline pixel is classified into one of two clusters (artery and vein) obtained by fuzzy C-means clustering. According to the proportion of clustered centerline pixels in a particular vessel, and utilizing the artery-venous crossing property of retinal vessels, each vessel is assigned a label of artery or vein. The classification results are compared with the manually annotated ground truth (gold standard). We applied the proposed method to a dataset of 15 retinal color fundus images, resulting in an accuracy of 88.28% correctly classified vessel pixels. The automated classification results match the gold standard well, suggesting the method's potential for artery-venous classification and the respective morphology analysis.
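Fuzzy C-means, the clustering step named above, assigns each point a graded membership in every cluster rather than a hard label. A minimal sketch follows, with generic 2-D feature vectors standing in for the color-property vectors:

```python
import math
import random

def fuzzy_cmeans(points, c=2, m=2.0, iters=100, seed=0):
    """Minimal fuzzy C-means; returns memberships u[i][j] in [0, 1]
    (degree to which point i belongs to cluster j; each row sums to 1)."""
    rng = random.Random(seed)
    n, dim = len(points), len(points[0])
    u = []
    for _ in range(n):  # random initial memberships, rows normalized
        row = [rng.random() for _ in range(c)]
        s = sum(row)
        u.append([v / s for v in row])
    for _ in range(iters):
        # centers: membership-weighted means (weights u_ij ** m)
        centers = []
        for j in range(c):
            w = [u[i][j] ** m for i in range(n)]
            tw = sum(w)
            centers.append(tuple(
                sum(w[i] * points[i][k] for i in range(n)) / tw
                for k in range(dim)))
        # membership update from relative distances to the centers
        for i in range(n):
            d = [max(math.dist(points[i], centers[j]), 1e-12) for j in range(c)]
            for j in range(c):
                u[i][j] = 1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0))
                                    for k in range(c))
    return u

# two tight groups standing in for artery-like and vein-like color vectors
points = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
          (5.0, 5.0), (5.1, 5.0), (5.0, 5.1)]
u = fuzzy_cmeans(points)
```

The graded memberships are what make the per-vessel vote meaningful: a vessel's label comes from the proportion of its centerline pixels leaning toward each cluster.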

  5. An Evaluation of Deficits in Semantic Cuing, Proactive and Retroactive Interference as Early Features of Alzheimer’s disease

    PubMed Central

    Crocco, Elizabeth; Curiel, Rosie E.; Acevedo, Amarilis; Czaja, Sara J.; Loewenstein, David A.

    2015-01-01

    OBJECTIVE To determine the degree to which susceptibility to different types of semantic interference may reflect the earliest manifestations of early Alzheimer disease (AD) beyond the effects of global memory impairment. METHODS Normal elderly (NE) subjects (n= 47), subjects with amnestic mild cognitive impairment (aMCI: n=34) and 40 subjects with probable AD were evaluated using a unique cued recall paradigm that allowed for an evaluation of both proactive and retroactive interference effects while controlling for global memory impairment (LASSI-L procedure). RESULTS Controlling for overall memory impairment, aMCI subjects had much greater proactive and retroactive interference effects than NE subjects. LASSI-L indices of learning using cued recall evidenced high levels of sensitivity and specificity with an overall correct classification rate of 90%. These provided better discrimination than traditional neuropsychological measures of memory function. CONCLUSION The LASSI-L paradigm is unique and unlike other assessments of memory in that items presented for cued recall are explicitly presented, and semantic interference and cuing effects can be assessed while controlling for initial level of memory impairment. This represents a powerful procedure allowing the participant to serve as his or her own control. The high levels of discrimination between subjects with aMCI and normal cognition that exceeded traditional neuropsychological measures makes the LASSI-L worthy of further research in the detection of early AD. PMID:23768680

  6. Methods and potentials for using satellite image classification in school lessons

    NASA Astrophysics Data System (ADS)

    Voss, Kerstin; Goetzke, Roland; Hodam, Henryk

    2011-11-01

    The FIS project - FIS stands for Fernerkundung in Schulen (Remote Sensing in Schools) - aims at a better integration of the topic "satellite remote sensing" in school lessons. According to this, the overarching objective is to teach pupils basic knowledge and fields of application of remote sensing. Despite the growing significance of digital geomedia, the topic "remote sensing" is not broadly supported in schools. Often, the topic is reduced to a short reflection on satellite images and used only for additional illustration of issues relevant for the curriculum. Without addressing the issue of image data, this can hardly contribute to the improvement of the pupils' methodical competences. Because remote sensing covers more than simple, visual interpretation of satellite images, it is necessary to integrate remote sensing methods like preprocessing, classification and change detection. Dealing with these topics often fails because of confusing background information and the lack of easy-to-use software. Based on these insights, the FIS project created different simple analysis tools for remote sensing in school lessons, which enable teachers as well as pupils to be introduced to the topic in a structured way. This functionality as well as the fields of application of these analysis tools will be presented in detail with the help of three different classification tools for satellite image classification.

  7. Interference lithography for optical devices and coatings

    NASA Astrophysics Data System (ADS)

    Juhl, Abigail Therese

Interference lithography can create large-area, defect-free nanostructures with unique optical properties. In this thesis, interference lithography will be utilized to create photonic crystals for functional devices or coatings. For instance, typical lithographic processing techniques were used to create 1, 2 and 3 dimensional photonic crystals in SU8 photoresist. These structures were in-filled with birefringent liquid crystal to make active devices, and the orientation of the liquid crystal directors within the SU8 matrix was studied. Most of this thesis will be focused on utilizing polymerization induced phase separation as a single-step method for fabrication by interference lithography. For example, layered polymer/nanoparticle composites have been created through the one-step two-beam interference lithographic exposure of a dispersion of 25 and 50 nm silica particles within a photopolymerizable mixture at a wavelength of 532 nm. In the areas of constructive interference, the monomer begins to polymerize via a free-radical process and concurrently the nanoparticles move into the regions of destructive interference. The holographic exposure of the particles within the monomer resin offers a single-step method to anisotropically structure the nanoconstituents within a composite. A one-step holographic exposure was also used to fabricate self-healing coatings that use water from the environment to catalyze polymerization. Polymerization induced phase separation was used to sequester an isocyanate monomer within an acrylate matrix. Due to the periodic modulation of the index of refraction between the monomer and polymer, the coating can reflect a desired wavelength, allowing for tunable coloration. When the coating is scratched, polymerization of the liquid isocyanate is catalyzed by moisture in air; if the indices of the two polymers are matched, the coatings turn transparent after healing. Interference lithography offers a method of creating multifunctional self

  8. An improved arteriovenous classification method for the early diagnostics of various diseases in retinal image.

    PubMed

    Xu, Xiayu; Ding, Wenxiang; Abràmoff, Michael D; Cao, Ruofan

    2017-04-01

    Retinal artery and vein classification is an important task for the automated, computer-aided diagnosis of various eye diseases and systemic diseases. This paper presents an improved supervised artery and vein classification method for retinal images. Intra-image regularization and inter-subject normalization are applied to reduce the differences in feature space. Novel features, including first-order and second-order texture features, are utilized to capture the discriminating characteristics of arteries and veins. The proposed method was tested on the DRIVE dataset and achieved an overall accuracy of 0.923. This retinal artery and vein classification algorithm serves as a potentially important tool for the early diagnosis of various diseases, including diabetic retinopathy and cardiovascular diseases. Copyright © 2017 Elsevier B.V. All rights reserved.
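
    Second-order texture features of the kind mentioned above are commonly derived from a grey-level co-occurrence matrix (GLCM). A minimal sketch follows; the single-pixel offset, the toy patches, and the choice of the `contrast` statistic are generic illustrative assumptions, not the paper's exact feature set:

```python
from collections import Counter

def glcm(patch, dx=1, dy=0):
    """Normalized grey-level co-occurrence matrix for one pixel offset:
    the basis of second-order (Haralick-style) texture features."""
    h, w = len(patch), len(patch[0])
    pairs = Counter()
    for y in range(h - dy):
        for x in range(w - dx):
            pairs[(patch[y][x], patch[y + dy][x + dx])] += 1
    total = sum(pairs.values())
    return {levels: n / total for levels, n in pairs.items()}

def contrast(g):
    """GLCM contrast: near zero for smooth regions, large for rough texture."""
    return sum(p * (i - j) ** 2 for (i, j), p in g.items())

smooth_patch = [[2, 2, 2], [2, 2, 2], [2, 2, 2]]
rough_patch = [[0, 3, 0], [3, 0, 3], [0, 3, 0]]
assert contrast(glcm(smooth_patch)) == 0.0
assert contrast(glcm(rough_patch)) > contrast(glcm(smooth_patch))
```

    In a supervised pipeline such statistics, computed per vessel patch, would be concatenated with intensity features before classification.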

  9. The Acquisition and Transfer of Botanical Classification by Elementary Science Methods Students.

    ERIC Educational Resources Information Center

    Knapp, Clifford Edward

    Investigated were two questions related to the acquisition and transfer of botanical classification skill by elementary science methods students. Data were collected from a sample of 89 students enrolled in methods courses. Sixty-two students served as the experimental sample, and 27 served as the control for the transfer portion of the research.…

  10. Ensemble methods with simple features for document zone classification

    NASA Astrophysics Data System (ADS)

    Obafemi-Ajayi, Tayo; Agam, Gady; Xie, Bingqing

    2012-01-01

    Document layout analysis is of fundamental importance for document image understanding and information retrieval. It requires the identification of blocks extracted from a document image via features extraction and block classification. In this paper, we focus on the classification of the extracted blocks into five classes: text (machine printed), handwriting, graphics, images, and noise. We propose a new set of features for efficient classifications of these blocks. We present a comparative evaluation of three ensemble based classification algorithms (boosting, bagging, and combined model trees) in addition to other known learning algorithms. Experimental results are demonstrated for a set of 36503 zones extracted from 416 document images which were randomly selected from the tobacco legacy document collection. The results obtained verify the robustness and effectiveness of the proposed set of features in comparison to the commonly used Ocropus recognition features. When used in conjunction with the Ocropus feature set, we further improve the performance of the block classification system to obtain a classification accuracy of 99.21%.
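
    Bagging, one of the three ensemble schemes compared above, trains base learners on bootstrap resamples and combines them by majority vote. The decision-stump base learner and the toy one-dimensional "ink density" feature below are illustrative stand-ins, not the paper's feature set or classifiers:

```python
import random

def train_stump(data):
    """Fit a 1-D threshold classifier ('stump') on (feature, label) pairs
    by trying each observed value as a split and keeping the best."""
    best = None
    for thr in sorted({x for x, _ in data}):
        correct = sum(1 for x, y in data if (x > thr) == y)
        acc = max(correct, len(data) - correct) / len(data)
        flip = correct < len(data) - correct   # invert the rule if needed
        if best is None or acc > best[0]:
            best = (acc, thr, flip)
    _, thr, flip = best
    return lambda x: (x > thr) != flip

def bagging(data, n_models=15, seed=0):
    """Train stumps on bootstrap resamples; predict by majority vote."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        boot = [rng.choice(data) for _ in data]
        models.append(train_stump(boot))
    return lambda x: sum(m(x) for m in models) > n_models // 2

# Toy zone feature, e.g. ink density separating text (1) from noise (0)
data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
vote = bagging(data)
assert vote(0.95) == 1
assert vote(0.05) == 0
```

    Boosting differs only in that resampling (or reweighting) concentrates on previously misclassified blocks rather than being uniform.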

  11. A new feature extraction method for signal classification applied to cord dorsum potentials detection

    PubMed Central

    Vidaurre, D.; Rodríguez, E. E.; Bielza, C.; Larrañaga, P.; Rudomin, P.

    2012-01-01

    In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient for each main local maximum of the signal using its amplitude and distance to the most important maximum of the signal. These coefficients will be the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods. PMID:22929924
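
    The extraction procedure described above (convolution-based denoising, then one coefficient per local maximum weighted by its amplitude and its distance to the dominant maximum) can be sketched in a few lines. The window length, the 1/(1 + distance) weighting, and the fixed feature-vector length are illustrative assumptions, not the paper's exact formulation:

```python
def smooth(signal, w=3):
    """Denoise by convolution with a length-w moving-average kernel."""
    half = w // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def peak_coefficients(signal, n_coef=3):
    """One coefficient per local maximum: its amplitude scaled down by
    its distance to the globally largest maximum."""
    s = smooth(signal)
    peaks = [i for i in range(1, len(s) - 1) if s[i - 1] < s[i] >= s[i + 1]]
    if not peaks:
        return [0.0] * n_coef
    main = max(peaks, key=lambda i: s[i])
    coefs = sorted((s[i] / (1 + abs(i - main)) for i in peaks), reverse=True)
    coefs += [0.0] * n_coef          # pad to a fixed-length feature vector
    return coefs[:n_coef]

cdp = [0, 1, 4, 9, 4, 1, 0, 2, 3, 2, 0]   # one large and one small deflection
feats = peak_coefficients(cdp)
assert feats[0] > feats[1] > feats[2]     # main peak dominates the vector
```

    Vectors like `feats` would then be passed to the gradient boosting classification trees mentioned in the abstract.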

  12. A new feature extraction method for signal classification applied to cord dorsum potential detection.

    PubMed

    Vidaurre, D; Rodríguez, E E; Bielza, C; Larrañaga, P; Rudomin, P

    2012-10-01

    In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient for each main local maximum of the signal using its amplitude and distance to the most important maximum of the signal. These coefficients will be the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods.

  13. A Note on Comparing Examinee Classification Methods for Cognitive Diagnosis Models

    ERIC Educational Resources Information Center

    Huebner, Alan; Wang, Chun

    2011-01-01

    Cognitive diagnosis models have received much attention in the recent psychometric literature because of their potential to provide examinees with information regarding multiple fine-grained discretely defined skills, or attributes. This article discusses the issue of methods of examinee classification for cognitive diagnosis models, which are…

  14. Statistical Calibration and Validation of a Homogeneous Ventilated Wall-Interference Correction Method for the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.

    2005-01-01

    Wind tunnel experiments will continue to be a primary source of validation data for many types of mathematical and computational models in the aerospace industry. The increased emphasis on accuracy of data acquired from these facilities requires understanding of the uncertainty of not only the measurement data but also any correction applied to the data. One of the largest and most critical corrections made to these data is due to wall interference. In an effort to understand the accuracy and suitability of these corrections, a statistical validation process for wall interference correction methods has been developed. This process is based on the use of independent cases which, after correction, are expected to produce the same result. Comparison of these independent cases with respect to the uncertainty in the correction process establishes a domain of applicability based on the capability of the method to provide reasonable corrections with respect to customer accuracy requirements. The statistical validation method was applied to the version of the Transonic Wall Interference Correction System (TWICS) recently implemented in the National Transonic Facility at NASA Langley Research Center. The TWICS code generates corrections for solid and slotted wall interference in the model pitch plane based on boundary pressure measurements. Before validation could be performed on this method, it was necessary to calibrate the ventilated wall boundary condition parameters. Discrimination comparisons are used to determine the most representative of three linear boundary condition models which have historically been used to represent longitudinally slotted test section walls. Of the three linear boundary condition models implemented for ventilated walls, the general slotted wall model was the most representative of the data. The TWICS code using the calibrated general slotted wall model was found to be valid to within the process uncertainty for test section Mach numbers less

  15. Intentional forgetting reduces color-naming interference: evidence from item-method directed forgetting.

    PubMed

    Lee, Yuh-Shiow; Lee, Huang-Mou; Fawcett, Jonathan M

    2013-01-01

    In an item-method directed forgetting task, Chinese words were presented individually, each followed by an instruction to remember or forget. Colored probe items were presented following each memory instruction, requiring a speeded color-naming response. Half of the probe items were novel and unrelated to the preceding study item, whereas the remaining half were a repetition of the preceding study item. Repeated probe items were either identical to the preceding study item (E1, E2), a phonetic reproduction of the preceding study item (E3), or perceptually matched to the preceding study item (E4). Color-naming interference was calculated by subtracting color-naming reaction times made in response to a string of meaningless symbols from those of the novel and repeated conditions. Across all experiments, participants recalled more to-be-remembered (TBR) than to-be-forgotten (TBF) study words. More importantly, Experiments 1 and 2 found that color-naming interference was reduced for repeated TBF words relative to repeated TBR words. Experiments 3 and 4 further found that this effect occurred at the perceptual rather than semantic level. These findings suggest that participants may bias processing resources away from the perceptual representation of to-be-forgotten information.

  16. A Hybrid Classification System for Heart Disease Diagnosis Based on the RFRS Method.

    PubMed

    Liu, Xiao; Wang, Xiaoli; Su, Qiang; Zhang, Mo; Zhu, Yanhong; Wang, Qiugen; Wang, Qian

    2017-01-01

    Heart disease is one of the most common diseases in the world. The objective of this study is to aid the diagnosis of heart disease using a hybrid classification system based on the ReliefF and Rough Set (RFRS) method. The proposed system contains two subsystems: the RFRS feature selection system and a classification system with an ensemble classifier. The first system includes three stages: (i) data discretization, (ii) feature extraction using the ReliefF algorithm, and (iii) feature reduction using the heuristic Rough Set reduction algorithm that we developed. In the second system, an ensemble classifier is proposed based on the C4.5 classifier. The Statlog (Heart) dataset, obtained from the UCI database, was used for experiments. A maximum classification accuracy of 92.59% was achieved according to a jackknife cross-validation scheme. The results demonstrate that the performance of the proposed system is superior to the performances of previously reported classification techniques.
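
    The ReliefF stage of the RFRS pipeline can be illustrated with basic Relief (the k = 1 special case of ReliefF): each feature is rewarded when it differs across classes and penalized when it differs within a class. The toy data and unit scaling below are assumptions for illustration, not the Statlog (Heart) features:

```python
def relief(data, n_features):
    """Basic Relief feature weighting.
    data: list of (feature_vector, label) with features scaled to [0, 1]."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    w = [0.0] * n_features
    for x, y in data:
        hits = [(v, l) for v, l in data if l == y and v is not x]
        misses = [(v, l) for v, l in data if l != y]
        if not hits or not misses:
            continue
        near_hit = min(hits, key=lambda p: dist(x, p[0]))[0]
        near_miss = min(misses, key=lambda p: dist(x, p[0]))[0]
        for f in range(n_features):
            # reward separation from the nearest miss, penalize
            # disagreement with the nearest hit
            w[f] += abs(x[f] - near_miss[f]) - abs(x[f] - near_hit[f])
    return w

# Feature 0 separates the classes; feature 1 is pure noise.
data = [([0.0, 0.3], 0), ([0.1, 0.9], 0), ([0.9, 0.2], 1), ([1.0, 0.8], 1)]
w = relief(data, 2)
assert w[0] > w[1]   # the informative feature earns the larger weight
```

    In the full system, low-weight features would be dropped before the Rough Set reduction and the ensemble classifier.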

  17. Fault classification method for the driving safety of electrified vehicles

    NASA Astrophysics Data System (ADS)

    Wanner, Daniel; Drugge, Lars; Stensson Trigell, Annika

    2014-05-01

    A fault classification method is proposed which has been applied to an electric vehicle. Potential faults in the different subsystems that can affect the vehicle directional stability were collected in a failure mode and effect analysis. Similar driveline faults were grouped together if they resembled each other with respect to their influence on the vehicle dynamic behaviour. The faults were physically modelled in a simulation environment before they were induced in a detailed vehicle model under normal driving conditions. A special focus was placed on faults in the driveline of electric vehicles employing in-wheel motors of the permanent magnet type. Several failures caused by mechanical and other faults were analysed as well. The fault classification method consists of a controllability ranking developed according to the functional safety standard ISO 26262. The controllability of a fault was determined with three parameters covering the influence of the longitudinal, lateral and yaw motion of the vehicle. The simulation results were analysed and the faults were classified according to their controllability using the proposed method. It was shown that the controllability decreased specifically with increasing lateral acceleration and increasing speed. The results for the electric driveline faults show that this trend cannot be generalised for all the faults, as the controllability deteriorated for some faults during manoeuvres with low lateral acceleration and low speed. The proposed method is generic and can be applied to various other types of road vehicles and faults.
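
    A controllability ranking of this kind amounts to a mapping from simulated fault-response metrics to the ISO 26262 controllability classes C0-C3. Everything in the sketch below, including the two metrics and every threshold, is a hypothetical placeholder; the paper uses three motion parameters (longitudinal, lateral, and yaw) with its own calibrated limits:

```python
def controllability_class(lateral_dev_m, yaw_rate_err_deg_s):
    """Map a simulated fault response to an ISO 26262 controllability class.
    Both metrics and all thresholds are hypothetical placeholders."""
    if lateral_dev_m < 0.2 and yaw_rate_err_deg_s < 2:
        return "C0"   # controllable in general
    if lateral_dev_m < 0.5 and yaw_rate_err_deg_s < 5:
        return "C1"   # simply controllable
    if lateral_dev_m < 1.5 and yaw_rate_err_deg_s < 12:
        return "C2"   # normally controllable
    return "C3"       # difficult to control or uncontrollable

# A fault inducing a large drift at high lateral acceleration ranks worse:
assert controllability_class(0.1, 1.0) == "C0"
assert controllability_class(2.0, 15.0) == "C3"
```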

  18. Wall Interference in Two-Dimensional Wind Tunnels

    NASA Technical Reports Server (NTRS)

    Kemp, William B., Jr.

    1986-01-01

    Viscosity and tunnel-wall constraints introduced via boundary conditions. TWINTN4 computer program developed to implement method of posttest assessment of wall interference in two-dimensional wind tunnels. Offers two methods for combining sidewall boundary-layer effects with upper and lower wall interference. In sequential procedure, Sewall method used to define flow free of sidewall effects, then assessed for upper and lower wall effects. In unified procedure, wind-tunnel flow equations altered to incorporate effects from all four walls at once. Program written in FORTRAN IV for batch execution.

  19. Application of random forests methods to diabetic retinopathy classification analyses.

    PubMed

    Casanova, Ramon; Saldana, Santiago; Chew, Emily Y; Danis, Ronald P; Greven, Craig M; Ambrosius, Walter T

    2014-01-01

    Diabetic retinopathy (DR) is one of the leading causes of blindness in the United States and world-wide. DR is a silent disease that may go unnoticed until it is too late for effective treatment. Therefore, early detection could improve the chances of therapeutic interventions that would alleviate its effects. Graded fundus photography and systemic data from 3443 ACCORD-Eye Study participants were used to estimate Random Forest (RF) and logistic regression classifiers. We studied the impact of sample size on classifier performance and the possibility of using RF-generated class conditional probabilities as metrics describing DR risk. RF measures of variable importance are used to detect factors that affect classification performance. Both types of data were informative when discriminating participants with or without DR. RF-based models produced much higher classification accuracy than those based on logistic regression. Combining both types of data did not increase accuracy but did increase statistical discrimination of healthy participants who subsequently did or did not have DR events during four years of follow-up. RF variable importance criteria revealed that microaneurysm counts in both eyes seemed to play the most important role in discrimination among the graded fundus variables, while the number of medicines and diabetes duration were the most relevant among the systemic variables. We have introduced RF methods to DR classification analyses based on fundus photography data. In addition, we propose an approach to DR risk assessment based on metrics derived from graded fundus photography and systemic data. Our results suggest that RF methods could be a valuable tool to aid DR diagnosis and to evaluate its progression.

  20. Application of Random Forests Methods to Diabetic Retinopathy Classification Analyses

    PubMed Central

    Casanova, Ramon; Saldana, Santiago; Chew, Emily Y.; Danis, Ronald P.; Greven, Craig M.; Ambrosius, Walter T.

    2014-01-01

    Background Diabetic retinopathy (DR) is one of the leading causes of blindness in the United States and world-wide. DR is a silent disease that may go unnoticed until it is too late for effective treatment. Therefore, early detection could improve the chances of therapeutic interventions that would alleviate its effects. Methodology Graded fundus photography and systemic data from 3443 ACCORD-Eye Study participants were used to estimate Random Forest (RF) and logistic regression classifiers. We studied the impact of sample size on classifier performance and the possibility of using RF-generated class conditional probabilities as metrics describing DR risk. RF measures of variable importance are used to detect factors that affect classification performance. Principal Findings Both types of data were informative when discriminating participants with or without DR. RF-based models produced much higher classification accuracy than those based on logistic regression. Combining both types of data did not increase accuracy but did increase statistical discrimination of healthy participants who subsequently did or did not have DR events during four years of follow-up. RF variable importance criteria revealed that microaneurysm counts in both eyes seemed to play the most important role in discrimination among the graded fundus variables, while the number of medicines and diabetes duration were the most relevant among the systemic variables. Conclusions and Significance We have introduced RF methods to DR classification analyses based on fundus photography data. In addition, we propose an approach to DR risk assessment based on metrics derived from graded fundus photography and systemic data. Our results suggest that RF methods could be a valuable tool to aid DR diagnosis and to evaluate its progression. PMID:24940623

  1. Cognitive-Behavioral Classifications of Chronic Pain in Patients with Multiple Sclerosis

    ERIC Educational Resources Information Center

    Khan, Fary; Pallant, Julie F.; Amatya, Bhasker; Young, Kevin; Gibson, Steven

    2011-01-01

    The aim of this study was to replicate, in patients with multiple sclerosis (MS), the three-cluster cognitive-behavioral classification proposed by Turk and Rudy. Sixty-two patients attending a tertiary MS rehabilitation center completed the Pain Impact Rating questionnaire measuring activity interference, pain intensity, social support, and…

  2. Protein classification based on text document classification techniques.

    PubMed

    Cheng, Betty Yee Man; Carbonell, Jaime G; Klein-Seetharaman, Judith

    2005-03-01

    The need for accurate, automated protein classification methods continues to increase as advances in biotechnology uncover new proteins. G-protein coupled receptors (GPCRs) are a particularly difficult superfamily of proteins to classify due to extreme diversity among its members. Previous comparisons of BLAST, k-nearest neighbor (k-NN), hidden markov model (HMM) and support vector machine (SVM) using alignment-based features have suggested that classifiers at the complexity of SVM are needed to attain high accuracy. Here, analogous to document classification, we applied Decision Tree and Naive Bayes classifiers with chi-square feature selection on counts of n-grams (i.e. short peptide sequences of length n) to this classification task. Using the GPCR dataset and evaluation protocol from the previous study, the Naive Bayes classifier attained an accuracy of 93.0 and 92.4% in level I and level II subfamily classification respectively, while SVM has a reported accuracy of 88.4 and 86.3%. This is a 39.7 and 44.5% reduction in residual error for level I and level II subfamily classification, respectively. The Decision Tree, while inferior to SVM, outperforms HMM in both level I and level II subfamily classification. For those GPCR families whose profiles are stored in the Protein FAMilies database of alignments and HMMs (PFAM), our method performs comparably to a search against those profiles. Finally, our method can be generalized to other protein families by applying it to the superfamily of nuclear receptors with 94.5, 97.8 and 93.6% accuracy in family, level I and level II subfamily classification respectively. Copyright 2005 Wiley-Liss, Inc.
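
    The core of the approach, a multinomial Naive Bayes classifier over n-gram counts, can be sketched as follows. The chi-square feature selection step is omitted for brevity, and the toy "families" are invented sequences, not GPCR data:

```python
from collections import Counter
import math

def ngrams(seq, n=2):
    """Counts of length-n substrings (peptide n-grams) of a sequence."""
    return Counter(seq[i:i + n] for i in range(len(seq) - n + 1))

def train_nb(examples, n=2):
    """Multinomial Naive Bayes over n-gram counts, Laplace-smoothed."""
    counts, totals, priors = {}, {}, Counter()
    vocab = set()
    for seq, label in examples:
        priors[label] += 1
        c = counts.setdefault(label, Counter())
        c.update(ngrams(seq, n))
        vocab.update(c)
    for label in counts:
        totals[label] = sum(counts[label].values())
    V = len(vocab)

    def predict(seq):
        best, best_lp = None, None
        for label in counts:
            lp = math.log(priors[label] / sum(priors.values()))
            for g, k in ngrams(seq, n).items():
                p = (counts[label][g] + 1) / (totals[label] + V)
                lp += k * math.log(p)
            if best_lp is None or lp > best_lp:
                best, best_lp = label, lp
        return best

    return predict

# Toy "families" with distinctive dipeptides (illustrative sequences only)
train = [("ACACACAC", "A-rich"), ("ACACAA", "A-rich"),
         ("GWGWGWGW", "W-rich"), ("GWGWGG", "W-rich")]
predict = train_nb(train)
assert predict("ACACA") == "A-rich"
assert predict("GWGWG") == "W-rich"
```

    The same machinery generalizes to any superfamily: only the training pairs change, which is what allows the reported transfer to nuclear receptors.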

  3. Comparative Study of SVM Methods Combined with Voxel Selection for Object Category Classification on fMRI Data

    PubMed Central

    Song, Sutao; Zhan, Zhichao; Long, Zhiying; Zhang, Jiacai; Yao, Li

    2011-01-01

    Background Support vector machine (SVM) has been widely used as an accurate and reliable method to decipher brain patterns from functional MRI (fMRI) data. Previous studies have not found a clear benefit for non-linear (polynomial kernel) SVM versus linear SVM. Here, a more effective non-linear SVM using a radial basis function (RBF) kernel is compared with linear SVM. Different from traditional studies, which focused either merely on the evaluation of different types of SVM or on voxel selection methods, we aimed to investigate the overall performance of linear and RBF SVM for fMRI classification, together with voxel selection schemes, in terms of classification accuracy and computation time. Methodology/Principal Findings Six different voxel selection methods were employed to decide which voxels of fMRI data would be included in SVM classifiers with linear and RBF kernels in classifying 4-category objects. The overall performances of the voxel selection and classification methods were then compared. Results showed that: (1) voxel selection had an important impact on the classification accuracy of the classifiers: in a relatively low dimensional feature space, RBF SVM outperformed linear SVM significantly; in a relatively high dimensional space, linear SVM performed better than its counterpart; (2) considering classification accuracy and computation time holistically, linear SVM with relatively more voxels as features and RBF SVM with a small set of voxels (after PCA) achieved better accuracy in less time. Conclusions/Significance The present work provides the first empirical result of linear and RBF SVM in classification of fMRI data combined with voxel selection methods. Based on the findings, if only classification accuracy is of concern, RBF SVM with an appropriately small set of voxels and linear SVM with relatively more voxels are two suggested solutions; if users are more concerned about computational time, RBF SVM with a relatively small set of voxels when part of the principal
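
    The two kernels being compared reduce to simple formulas; a minimal sketch with toy voxel-pattern vectors (the values are invented for illustration):

```python
import math

def linear_kernel(x, z):
    """k(x, z) = <x, z>"""
    return sum(a * b for a, b in zip(x, z))

def rbf_kernel(x, z, gamma=1.0):
    """k(x, z) = exp(-gamma * ||x - z||^2)"""
    d2 = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * d2)

# The RBF kernel saturates toward 0 as voxel patterns diverge,
# while the linear kernel grows without bound.
x, z = [1.0, 0.0, 2.0], [1.0, 0.5, 1.5]
assert linear_kernel(x, x) == 5.0
assert rbf_kernel(x, x) == 1.0
assert 0.0 < rbf_kernel(x, z) < 1.0
```

    The dimensionality trade-off reported above is consistent with this shape: in high dimensions the RBF distances concentrate and the local kernel loses its advantage over the linear one.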

  4. An Object-Oriented Classification Method on High Resolution Satellite Data

    DTIC Science & Technology

    2004-11-01

    Proceedings of the 25th Asian Conference on Remote Sensing (ACRS 2004), held in Chiang Mai, Thailand, 22-26 November 2004; distribution unlimited. [Only fragments of the scanned report survive extraction, including a figure caption contrasting panchromatic (left) and multispectral (right) imagery.]

  5. Young's double-slit interference with two-color biphotons.

    PubMed

    Zhang, De-Jian; Wu, Shuang; Li, Hong-Guo; Wang, Hai-Bo; Xiong, Jun; Wang, Kaige

    2017-12-12

    In classical optics, Young's double-slit experiment with colored coherent light gives rise to individual interference fringes for each light frequency, referring to single-photon interference. However, two-photon double-slit interference has been widely studied only for wavelength-degenerate biphoton, known as subwavelength quantum lithography. In this work, we report double-slit interference experiments with two-color biphoton. Different from the degenerate case, the experimental results depend on the measurement methods. From a two-axis coincidence measurement pattern we can extract complete interference information about two colors. The conceptual model provides an intuitional picture of the in-phase and out-of-phase photon correlations and a complete quantum understanding about the which-path information of two colored photons.

  6. Comparison of different classification methods for analyzing electronic nose data to characterize sesame oils and blends.

    PubMed

    Shao, Xiaolong; Li, Hui; Wang, Nan; Zhang, Qiang

    2015-10-21

    An electronic nose (e-nose) was used to characterize sesame oils processed by three different methods (hot-pressed, cold-pressed, and refined), as well as blends of the sesame oils and soybean oil. Seven classification and prediction methods, namely PCA, LDA, PLS, KNN, SVM, LASSO and RF, were used to analyze the e-nose data. Classification accuracy and MAUC were employed to evaluate the performance of these methods. The results indicated that sesame oils processed with different methods produced different sensor responses, with cold-pressed sesame oil producing the strongest sensor signals, followed by hot-pressed sesame oil. Blends of pressed sesame oils with refined sesame oil were more difficult to distinguish than blends of pressed sesame oils and refined soybean oil. LDA, KNN, and SVM outperformed the other classification methods in distinguishing sesame oil blends. KNN, LASSO, PLS, SVM (with linear kernel), and RF models could adequately predict the adulteration level (% of added soybean oil) in the sesame oil blends. Among the prediction models, KNN with k = 1 and 2 yielded the best prediction results.
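
    KNN, the best-performing predictor here, is straightforward to sketch for e-nose response vectors. The two-sensor values and oil labels below are invented for illustration; real e-nose arrays have many more sensors:

```python
from collections import Counter
import math

def knn_predict(train, query, k=1):
    """k-nearest-neighbour vote over (sensor_vector, label) pairs,
    using Euclidean distance on the e-nose response vector."""
    by_dist = sorted(train, key=lambda p: math.dist(p[0], query))
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

# Toy 2-sensor responses (arbitrary units, illustrative only)
train = [((9.0, 1.0), "cold-pressed"), ((8.5, 1.2), "cold-pressed"),
         ((6.0, 2.0), "hot-pressed"), ((5.5, 2.2), "hot-pressed"),
         ((2.0, 3.0), "refined"), ((2.5, 2.8), "refined")]
assert knn_predict(train, (8.8, 1.1), k=1) == "cold-pressed"
assert knn_predict(train, (2.2, 2.9), k=2) == "refined"
```

    For adulteration-level prediction, the same neighbour search is used but the k neighbours' numeric levels are averaged instead of voted.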

  7. Wavelet threshold method of resolving noise interference in periodic short-impulse signals chaotic detection

    NASA Astrophysics Data System (ADS)

    Deng, Ke; Zhang, Lu; Luo, Mao-Kang

    2010-03-01

    The chaotic oscillator has been considered a powerful method of detecting weak signals, even weak signals accompanied by noise. However, many examples, analyses and simulations indicate that a chaotic oscillator detection system cannot guarantee immunity to noise (even white noise); in fact, the randomness of the noise has a serious or even destructive effect on the detection results in many cases. To solve this problem, we present a new detection method based on wavelet threshold processing that can detect a chaotic weak signal accompanied by noise. Theoretical analyses and simulation experiments both indicate that the new method significantly reduces the interference of noise with detection, thereby making the corresponding chaotic oscillator that detects weak signals in noise more stable and reliable.
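
    Wavelet threshold processing of the kind described can be illustrated with a one-level Haar transform and soft thresholding of the detail coefficients. The wavelet choice, the threshold value, and the toy signal are assumptions for illustration, not the paper's configuration:

```python
import math

def haar_forward(x):
    """One level of the Haar wavelet transform (even-length input):
    returns (approximation, detail) coefficients."""
    s = math.sqrt(2)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    s = math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / s, (a - d) / s]
    return out

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero; small (noise-like) details vanish."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def denoise(x, t=0.5):
    a, d = haar_forward(x)
    return haar_inverse(a, soft_threshold(d, t))

noisy = [1.0, 1.1, 0.9, 1.0, 5.0, 5.1, 1.0, 0.9]   # impulse riding on noise
clean = denoise(noisy)
assert max(clean[4:6]) > 4.0          # the impulse survives thresholding
assert abs(clean[0] - clean[1]) < 1e-9  # small noise details are removed
```

    After this pre-processing, the cleaned signal would be fed to the chaotic oscillator detector.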

  8. Beamforming design with proactive interference cancelation in MISO interference channels

    NASA Astrophysics Data System (ADS)

    Li, Yang; Tian, Yafei; Yang, Chenyang

    2015-12-01

    In this paper, we design coordinated beamforming at base stations (BSs) to facilitate interference cancelation at users in interference networks, where each BS is equipped with multiple antennas and each user has a single antenna. Assuming that each user can select the best decoding strategy to mitigate the interference, either canceling the interference after decoding it when it is strong or treating it as noise when it is weak, we optimize the beamforming vectors that maximize the sum rate of the networks under different interference scenarios and find beamforming solutions with closed-form expressions. The inherent design principles are then analyzed, and the performance gain over passive interference cancelation is demonstrated through simulations in heterogeneous cellular networks.

  9. Hierarchical classification method and its application in shape representation

    NASA Astrophysics Data System (ADS)

    Ireton, M. A.; Oakley, John P.; Xydeas, Costas S.

    1992-04-01

    In this paper we describe a technique for performing shape-based content retrieval of images from a large database. To enable user-generated queries about visual objects, we have developed a hierarchical classification technique. This technique enables similarity matching between objects, with the position in the hierarchy signifying the level of generality to be used in the query. The classification technique is unsupervised, robust, and general; it can be applied to any suitable parameter set. To establish the potential of this classifier for aiding visual querying, we have applied it to the classification of the 2-D outlines of leaves.

  10. Discussion and a new method of optical cryptosystem based on interference

    NASA Astrophysics Data System (ADS)

    Lu, Dajiang; He, Wenqi; Liao, Meihua; Peng, Xiang

    2017-02-01

    A discussion and an objective security analysis of the well-known optical image encryption based on interference are presented in this paper. A new method is also proposed to eliminate the security risk of the original cryptosystem. For practical application, we expand this new method into a hierarchical authentication scheme. In this authentication system, with a pre-generated and fixed random phase lock, different target images indicating different authentication levels are analytically encoded into corresponding phase-only masks (phase keys) and amplitude-only masks (amplitude keys). For the authentication process, a legal user can obtain a specified target image at the output plane if his/her phase key and amplitude key, which should be placed close against the fixed internal phase lock, are respectively illuminated by two coherent beams. By comparing the target image with all the standard certification images in the database, the system can verify the user's legality and even his/her identity level. Moreover, despite the internal phase lock of this system being fixed, the crosstalk between different pairs of keys held by different users is low. Theoretical analysis and numerical simulation are both provided to demonstrate the validity of this method.
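
    The analytic encoding common to interference-based cryptosystems rests on the identity e^{i(t+d)} + e^{i(t-d)} = 2 cos(d) e^{it}: any complex amplitude of modulus at most 2 can be written as the interference of two unit-amplitude, phase-only terms. A minimal sketch of that encoding step only (not the paper's security fix, the phase lock, or the amplitude-key extension):

```python
import cmath

def split_into_phase_masks(field):
    """Analytically encode each complex value c (|c| <= 2) as the
    interference of two unit-amplitude beams: c = e^{i*t1} + e^{i*t2}."""
    theta = [cmath.phase(c) for c in field]
    delta = [cmath.acos(abs(c) / 2).real for c in field]
    mask1 = [t + d for t, d in zip(theta, delta)]
    mask2 = [t - d for t, d in zip(theta, delta)]
    return mask1, mask2

def interfere(mask1, mask2):
    """Superpose the two phase-only masks to recover the field."""
    return [cmath.exp(1j * a) + cmath.exp(1j * b) for a, b in zip(mask1, mask2)]

# A tiny "image" of complex amplitudes with |c| <= 2
target = [1.0 + 0.5j, 0.3 - 0.2j, -1.2 + 0.1j]
m1, m2 = split_into_phase_masks(target)
recovered = interfere(m1, m2)
assert all(abs(r - t) < 1e-9 for r, t in zip(recovered, target))
```

    The security risk analyzed in the paper stems from this very explicitness: one mask plus the target constrains the other, which is what the proposed modification addresses.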

  11. Assessment and control of spacecraft electromagnetic interference

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Design criteria are presented to provide guidance in assessing electromagnetic interference from onboard sources and establishing requisite control in spacecraft design, development, and testing. A comprehensive state-of-the-art review is given which covers flight experience, sources and transmission of electromagnetic interference, susceptible equipment, design procedure, control techniques, and test methods.

  12. A new ICA-based fingerprint method for the automatic removal of physiological artifacts from EEG recordings.

    PubMed

    Tamburro, Gabriella; Fiedler, Patrique; Stone, David; Haueisen, Jens; Comani, Silvia

    2018-01-01

    EEG may be affected by artefacts hindering the analysis of brain signals. Data-driven methods like independent component analysis (ICA) are successful approaches to remove artefacts from the EEG. However, the ICA-based methods developed so far are often affected by limitations, such as: the need for visual inspection of the separated independent components (subjectivity problem) and, in some cases, for the independent and simultaneous recording of the inspected artefacts to identify the artefactual independent components; a potentially heavy manipulation of the EEG signals; the use of linear classification methods; the use of simulated artefacts to validate the methods; no testing in dry electrode or high-density EEG datasets; applications limited to specific conditions and electrode layouts. Our fingerprint method automatically identifies EEG ICs containing eyeblinks, eye movements, myogenic artefacts and cardiac interference by evaluating 14 temporal, spatial, spectral, and statistical features composing the IC fingerprint. Sixty-two real EEG datasets containing cued artefacts are recorded with wet and dry electrodes (128 wet and 97 dry channels). For each artefact, 10 nonlinear SVM classifiers are trained on fingerprints of expert-classified ICs. Training groups include randomly chosen wet and dry datasets decomposed in 80 ICs. The classifiers are tested on the IC-fingerprints of different datasets decomposed into 20, 50, or 80 ICs. The SVM performance is assessed in terms of accuracy, False Omission Rate (FOR), Hit Rate (HR), False Alarm Rate (FAR), and sensitivity ( p ). For each artefact, the quality of the artefact-free EEG reconstructed using the classification of the best SVM is assessed by visual inspection and SNR. The best SVM classifier for each artefact type achieved average accuracy of 1 (eyeblink), 0.98 (cardiac interference), and 0.97 (eye movement and myogenic artefact). Average classification sensitivity (p) was 1 (eyeblink), 0.997 (myogenic
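
    Statistical entries of an IC fingerprint can be illustrated with two of the usual moments; the paper's actual 14-feature vector also spans temporal, spatial, and spectral measures not shown here, and the toy signals are invented:

```python
import math

def fingerprint_features(signal):
    """Two statistical features typical of artefact fingerprints:
    skewness (eyeblink ICs are asymmetric) and excess kurtosis
    (spiky artefacts are heavy-tailed)."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((v - mean) ** 2 for v in signal) / n
    std = math.sqrt(var)
    skew = sum((v - mean) ** 3 for v in signal) / (n * std ** 3)
    kurt = sum((v - mean) ** 4 for v in signal) / (n * std ** 4) - 3.0
    return skew, kurt

flat = [0.0, 1.0, 0.0, -1.0] * 50     # oscillatory, light-tailed
blinky = [0.0] * 99 + [10.0]          # one large eyeblink-like spike
assert fingerprint_features(blinky)[1] > fingerprint_features(flat)[1]
```

    Vectors of such features, one per independent component, are what the nonlinear SVM classifiers are trained on.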

  13. Sentiment analysis of feature ranking methods for classification accuracy

    NASA Astrophysics Data System (ADS)

    Joseph, Shashank; Mugauri, Calvin; Sumathy, S.

    2017-11-01

    Text pre-processing and feature selection are important and critical steps in text mining. Pre-processing large volumes of data is a difficult task, as unstructured raw data must be converted into a structured format. Traditional methods of processing and weighting took much time and were less accurate. To overcome this challenge, feature ranking techniques have been devised. A feature set from text pre-processing is fed as input to feature selection, which helps improve text classification accuracy. Of the three feature selection categories available, the filter category is the focus here. Five feature ranking methods, namely document frequency, standard deviation, information gain, chi-square, and weighted log-likelihood ratio, are analyzed.
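    Two of the filter-category rankers listed, document frequency and chi-square, can be sketched directly. This is an illustrative toy with an invented two-class corpus, not the paper's implementation:

```python
from collections import Counter

# Toy corpus: (class label, set of terms in the document); labels are invented.
docs = [("spam", {"win", "money", "now"}),
        ("spam", {"win", "prize"}),
        ("ham",  {"meeting", "now"}),
        ("ham",  {"meeting", "notes"})]

def doc_frequency(docs):
    """Number of documents each term occurs in."""
    df = Counter()
    for _, terms in docs:
        df.update(terms)
    return df

def chi_square(term, docs, positive="spam"):
    """Chi-square score of a term for one class, from the 2x2 contingency table."""
    n = len(docs)
    a = sum(1 for c, t in docs if term in t and c == positive)      # present, positive
    b = sum(1 for c, t in docs if term in t and c != positive)      # present, negative
    c = sum(1 for cl, t in docs if term not in t and cl == positive)
    d = sum(1 for cl, t in docs if term not in t and cl != positive)
    denom = (a + c) * (b + d) * (a + b) * (c + d)
    return 0.0 if denom == 0 else n * (a * d - c * b) ** 2 / denom
```

Terms that occur in only one class ("win", "meeting") score highest; class-neutral terms ("now") score zero, which is exactly why chi-square is a useful filter ranker.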

  14. Effect of the atmosphere on the classification of LANDSAT data. [Identifying sugar canes in Brazil

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Morimoto, T.; Kumar, R.; Molion, L. C. B.

    1979-01-01

    The author has identified the following significant results. The LOWTRAN-3 computer program was used, in conjunction with Turner's model for the correction of satellite data for atmospheric interference, to calculate the atmospheric interference. Use of the program improved the contrast between different natural targets in the MSS LANDSAT data of Brasilia, Brazil. The classification accuracy of sugar cane was improved by about 9% in the multispectral data of Ribeirao Preto, Sao Paulo.

  15. Importance of resonance interference effects in multigroup self-shielding calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stachowski, R.E.; Protsik, R.

    1995-12-31

    The impact of the resonance interference method (RIF) on multigroup neutron cross sections is significant for major isotopes in the fuel, indicating the importance of resonance interference in the computation of gadolinia burnout and plutonium buildup. The self-shielding factor method with the RIF method effectively eliminates shortcomings in multigroup resonance calculations.

  16. Threshold selection for classification of MR brain images by clustering method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moldovanu, Simona; Obreja, Cristian

    Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from the grey levels belonging to the background. Threshold optimization is an effective tool to separate objects from the background and, further, in classification applications. This paper gives a detailed investigation on the selection of thresholds. Our method does not use the well-known methods for binarization. Instead, we perform a simple threshold optimization which, in turn, allows the best classification of the analyzed images into healthy and multiple sclerosis classes. The dissimilarity (the distance between classes) has been established using a clustering method based on dendrograms. We tested our method using two classes of images: 20 T2-weighted and 20 proton density (PD)-weighted scans from two healthy subjects and from two patients with multiple sclerosis. For each image and for each threshold, the number of white pixels (the area of white objects in the binary image) has been determined. These pixel counts represent the objects in the clustering operation. The following optimum threshold values are obtained: T = 80 for PD images and T = 30 for T2-weighted images. Each threshold clearly separates the clusters belonging to the studied groups: healthy subjects and patients with multiple sclerosis.

  17. Threshold selection for classification of MR brain images by clustering method

    NASA Astrophysics Data System (ADS)

    Moldovanu, Simona; Obreja, Cristian; Moraru, Luminita

    2015-12-01

    Given a grey-intensity image, our method detects the optimal threshold for a suitable binarization of MR brain images. In MR brain image processing, the grey levels of pixels belonging to the object are not substantially different from the grey levels belonging to the background. Threshold optimization is an effective tool to separate objects from the background and, further, in classification applications. This paper gives a detailed investigation on the selection of thresholds. Our method does not use the well-known methods for binarization. Instead, we perform a simple threshold optimization which, in turn, allows the best classification of the analyzed images into healthy and multiple sclerosis classes. The dissimilarity (the distance between classes) has been established using a clustering method based on dendrograms. We tested our method using two classes of images: 20 T2-weighted and 20 proton density (PD)-weighted scans from two healthy subjects and from two patients with multiple sclerosis. For each image and for each threshold, the number of white pixels (the area of white objects in the binary image) has been determined. These pixel counts represent the objects in the clustering operation. The following optimum threshold values are obtained: T = 80 for PD images and T = 30 for T2-weighted images. Each threshold clearly separates the clusters belonging to the studied groups: healthy subjects and patients with multiple sclerosis.
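    The core measurement, the white-pixel area of each image binarized at a candidate threshold T, is straightforward to sketch (toy grey-level values below, not the MR data):

```python
def binarize(image, threshold):
    """Binarize a grey-level image: pixels above the threshold become white (1)."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

def white_area(image, threshold):
    """Number of white pixels (area of white objects) after binarization at T."""
    return sum(px for row in binarize(image, threshold) for px in row)
```

In the paper these per-image white areas, computed over a range of thresholds, are the objects fed into the dendrogram-based clustering that separates the two groups.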

  18. Singularities of interference of three waves with different polarization states.

    PubMed

    Kurzynowski, Piotr; Woźniak, Władysław A; Zdunek, Marzena; Borwińska, Monika

    2012-11-19

    We present an interference setup which can produce interesting two-dimensional patterns in the polarization state of the resulting light wave emerging from the setup. The main element of our setup is the Wollaston prism, which gives two plane, linearly polarized waves (eigenwaves of both Wollaston wedges) with a linearly changing phase difference between them (along the x-axis). The third wave, coming from the second arm of the proposed polarization interferometer, is linearly or circularly polarized with a linearly changing phase difference along the y-axis. The interference of three plane waves with different polarization states (LLL, linear-linear-linear, or LLC, linear-linear-circular) and variable phase difference produces two-dimensional light polarization and phase distributions with characteristic points and lines which can be claimed to constitute singularities of different types. The aim of this article is to find all kinds of these phase and polarization singularities as well as to classify them. We postulated in our theoretical simulations, and verified in our experiments, different kinds of polarization singularities, depending on which polarization parameter was considered (the azimuth and ellipticity angles or the diagonal and phase angles). We also observed phase singularities as well as isolated zero-intensity points which resulted from the polarization singularities when a proper analyzer was used at the end of the setup. The classification of all these singularities as well as their relationships are analyzed and described.

  19. Visualizing the Solute Vaporization Interference in Flame Atomic Absorption Spectroscopy

    ERIC Educational Resources Information Center

    Dockery, Christopher R.; Blew, Michael J.; Goode, Scott R.

    2008-01-01

    Every day, tens of thousands of chemists use analytical atomic spectroscopy in their work, often without knowledge of possible interferences. We present a unique approach to study these interferences by using modern response surface methods to visualize an interference in which aluminum depresses the calcium atomic absorption signal. Calcium…

  20. The Pixon Method for Data Compression Image Classification, and Image Reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard; Yahil, Amos

    2002-01-01

    As initially proposed, this program had three goals: (1) continue to develop the highly successful Pixon method for image reconstruction and support other scientists in implementing this technique for their applications; (2) develop image compression techniques based on the Pixon method; and (3) develop artificial intelligence algorithms for image classification based on the Pixon approach for simplifying neural networks. Subsequent to proposal review, the scope of the program was greatly reduced and it was decided to investigate the ability of the Pixon method to provide superior restorations of images compressed with standard image compression schemes, specifically JPEG-compressed images.

  1. Surveillance system and method having an operating mode partitioned fault classification model

    NASA Technical Reports Server (NTRS)

    Bickford, Randall L. (Inventor)

    2005-01-01

    A system and method which partitions a parameter estimation model, a fault detection model, and a fault classification model for a process surveillance scheme into two or more coordinated submodels together providing improved diagnostic decision making for at least one determined operating mode of an asset.

  2. 'Quantum interference with slits' revisited

    NASA Astrophysics Data System (ADS)

    Rothman, Tony; Boughn, Stephen

    2011-01-01

    Marcella has presented a straightforward technique employing the Dirac formalism to calculate single- and double-slit interference patterns. He claims that no reference is made to classical optics or scattering theory and that his method therefore provides a purely quantum mechanical description of these experiments. He also presents his calculation as if no approximations are employed. We show that he implicitly makes the same approximations found in classical treatments of interference and that no new physics has been introduced. At the same time, some of the quantum mechanical arguments Marcella gives are, at best, misleading.

  3. Comparison of Different Classification Methods for Analyzing Electronic Nose Data to Characterize Sesame Oils and Blends

    PubMed Central

    Shao, Xiaolong; Li, Hui; Wang, Nan; Zhang, Qiang

    2015-01-01

    An electronic nose (e-nose) was used to characterize sesame oils processed by three different methods (hot-pressed, cold-pressed, and refined), as well as blends of the sesame oils and soybean oil. Seven classification and prediction methods, namely PCA, LDA, PLS, KNN, SVM, LASSO and RF, were used to analyze the e-nose data. Classification accuracy and MAUC were employed to evaluate the performance of these methods. The results indicated that sesame oils processed with different methods produced different sensor responses, with cold-pressed sesame oil producing the strongest sensor signals, followed by hot-pressed sesame oil. Blends of pressed sesame oils with refined sesame oil were more difficult to distinguish than blends of pressed sesame oils and refined soybean oil. LDA, KNN, and SVM outperformed the other classification methods in distinguishing sesame oil blends. KNN, LASSO, PLS, SVM (with linear kernel), and RF models could adequately predict the adulteration level (% of added soybean oil) in the sesame oil blends. Among the prediction models, KNN with k = 1 and 2 yielded the best prediction results. PMID:26506350
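    Among the prediction models, KNN with small k performed best; the classifier itself is short enough to sketch. The feature vectors below are invented stand-ins for e-nose sensor responses, not the study's data:

```python
import math
from collections import Counter

def knn_predict(train, query, k=1):
    """Classify `query` by majority vote among its k nearest training points.

    `train` is a list of (feature_vector, label) pairs; distance is Euclidean.
    """
    neighbours = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Invented two-feature "sensor responses" for two oil classes.
train = [((0.0, 0.0), "refined"), ((0.0, 1.0), "refined"),
         ((5.0, 5.0), "cold-pressed"), ((6.0, 5.0), "cold-pressed")]
```

With k = 1 the prediction is simply the label of the single closest training sample, which matches the configuration the study found most accurate.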

  4. Comparison of EEG-Features and Classification Methods for Motor Imagery in Patients with Disorders of Consciousness

    PubMed Central

    Höller, Yvonne; Bergmann, Jürgen; Thomschewski, Aljoscha; Kronbichler, Martin; Höller, Peter; Crone, Julia S.; Schmid, Elisabeth V.; Butz, Kevin; Nardone, Raffaele; Trinka, Eugen

    2013-01-01

    Current research aims at identifying voluntary brain activation in patients who are behaviorally diagnosed as being unconscious, but are able to perform commands by modulating their brain activity patterns. This involves machine learning techniques and feature extraction methods such as those applied in brain computer interfaces. In this study, we try to answer the question of whether features/classification methods which show advantages in healthy participants are also accurate when applied to data of patients with disorders of consciousness. A sample of healthy participants (N = 22), patients in a minimally conscious state (MCS; N = 5), and patients with unresponsive wakefulness syndrome (UWS; N = 9) was examined with a motor imagery task which involved imagery of moving both hands and an instruction to hold both hands firm. We extracted a set of 20 features from the electroencephalogram and used linear discriminant analysis, k-nearest neighbor classification, and support vector machines (SVM) as classification methods. In healthy participants, the best classification accuracies were seen with coherences (mean = .79; range = .53−.94) and power spectra (mean = .69; range = .40−.85). The coherence patterns in healthy participants did not match the expectation of a centrally modulated μ-rhythm. Instead, coherence involved mainly frontal regions. In healthy participants, the best classification tool was SVM. Five patients had at least one feature-classifier outcome with p < 0.05 (none of which were coherence or power spectra), though none remained significant after false-discovery-rate correction for multiple comparisons. The present work suggests the use of coherences in patients with disorders of consciousness because they show high reliability among healthy subjects and patient groups. However, feature extraction and classification is a challenging task in unresponsive patients because there is no ground truth to validate the results. PMID:24282545

  5. Decision tree methods: applications for classification and prediction.

    PubMed

    Song, Yan-Yan; Lu, Ying

    2015-04-25

    Decision tree methodology is a commonly used data mining method for establishing classification systems based on multiple covariates or for developing prediction algorithms for a target variable. This method classifies a population into branch-like segments that construct an inverted tree with a root node, internal nodes, and leaf nodes. The algorithm is non-parametric and can efficiently deal with large, complicated datasets without imposing a complicated parametric structure. When the sample size is large enough, study data can be divided into training and validation datasets: the training dataset is used to build a decision tree model, and the validation dataset is used to decide on the appropriate tree size needed to achieve the optimal final model. This paper introduces the algorithms frequently used to develop decision trees (including CART, C4.5, CHAID, and QUEST) and describes the SPSS and SAS programs that can be used to visualize tree structure.
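    CART, the first algorithm listed, grows the tree by greedily choosing, at each node, the split that minimizes the weighted Gini impurity of the resulting branches. A single-split sketch under that definition (toy data, not from the paper):

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a set of class labels: 1 - sum of squared class fractions."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(xs, ys):
    """CART-style split on one covariate: the threshold with least weighted impurity."""
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        weighted = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if weighted < best[1]:
            best = (t, weighted)
    return best
```

A full tree applies `best_split` recursively to each branch until the leaves are pure enough or a size limit (tuned on the validation set) is reached.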

  6. Fingerprint extraction from interference destruction terahertz spectrum.

    PubMed

    Xiong, Wei; Shen, Jingling

    2010-10-11

    In this paper, periodic peaks in a terahertz absorption spectrum are confirmed to be induced by interference effects. Theoretically, we explained the periodic peaks and calculated their locations. Accordingly, a technique was suggested with which the interference peaks in a terahertz spectrum can be eliminated and therefore a real terahertz absorption spectrum can be obtained. Experimentally, a sample, methamphetamine, was investigated and its terahertz fingerprint was successfully extracted from its interference-destruction spectrum. This technique is useful for obtaining samples' terahertz fingerprint spectra, and furthermore provides a fast nondestructive testing method using a large-size terahertz beam to identify materials.

  7. Neural systems and time course of proactive interference in working memory.

    PubMed

    Du, Yingchun; Zhang, John X; Xiao, Zhuangwei; Wu, Renhua

    2007-01-01

    The storage of information in working memory suffers as a function of proactive interference. Much work using neuroimaging techniques has been done to reveal the brain mechanisms of interference resolution. However, less is known about the time course of this process. The event-related potential (ERP) method and the standardized Low Resolution Brain Electromagnetic Tomography (sLORETA) method were used in this study to discover the time course of interference resolution in working memory. The anterior P2 was thought to reflect interference resolution; if so, this process occurs earlier in working memory than in long-term memory.

  8. Proposal of Classification Method of Time Series Data in International Emissions Trading Market Using Agent-based Simulation

    NASA Astrophysics Data System (ADS)

    Nakada, Tomohiro; Takadama, Keiki; Watanabe, Shigeyoshi

    This paper proposes a classification method using a Bayesian analytical method to classify time series data from an agent-based simulation of the international emissions trading market, and compares it with a Discrete Fourier transform analytical method. The purpose is to demonstrate analytical methods that map time series data such as market prices. These analytical methods revealed the following results: (1) the classification methods indicate the distance of the mapping from the time series data, which is easier to understand and draw inferences from than the raw time series; (2) these methods can analyze uncertain time series data produced by the agent-based simulation, including both stationary and non-stationary processes; and (3) the Bayesian analytical method can distinguish a 1% difference in the emission reduction targets of agents.
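    The Discrete Fourier transform method used for comparison maps a time series, such as a market price, to a magnitude spectrum. A textbook-definition sketch (illustrative only, not the paper's code):

```python
import cmath

def dft_magnitudes(series):
    """Normalised DFT magnitude spectrum of a real-valued time series."""
    n = len(series)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t, x in enumerate(series))) / n
            for k in range(n)]
```

Two series can then be compared by the distance between their magnitude spectra, which is the kind of mapping distance the paper uses to group market time series.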

  9. Object-based methods for individual tree identification and tree species classification from high-spatial resolution imagery

    NASA Astrophysics Data System (ADS)

    Wang, Le

    2003-10-01

    Modern forest management poses an increasing need for detailed knowledge of forest information at different spatial scales. At the forest level, information on tree species assemblage is desired, whereas at or below the stand level, individual-tree-related information is preferred. Remote sensing provides an effective tool to extract the above information at multiple spatial scales in the continuous time domain. To date, the increasing volume and ready availability of high-spatial-resolution data have led to a much wider application of remotely sensed products. Nevertheless, to make effective use of the improving spatial resolution, conventional pixel-based classification methods are far from satisfactory. Correspondingly, developing object-based methods becomes a central challenge for researchers in the field of remote sensing. This thesis focuses on the development of methods for accurate individual tree identification and tree species classification. We develop a method in which individual tree crown boundaries and treetop locations are derived under a unified framework. We apply a two-stage approach with edge detection followed by marker-controlled watershed segmentation. Treetops are modeled from radiometry and geometry aspects. Specifically, treetops are assumed to be represented by local radiation maxima and to be located near the center of the tree crown. As a result, a marker image is created from the derived treetops to guide a watershed segmentation to further differentiate overlapping trees and to produce a segmented image comprised of individual tree crowns. The image segmentation method developed achieves a promising result for a 256 x 256 CASI image. Further effort is then made to extend our methods to multiple scales constructed from a wavelet decomposition. Scale-consistency and geometric-consistency checks are designed to examine the gradients along the scale-space for the purpose of separating true crown boundary from unwanted
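    The treetop model, local radiation maxima near the crown centre, amounts to local-maximum detection over the image grid before the watershed stage. A minimal 8-neighbourhood sketch with invented brightness values, not the CASI data:

```python
def local_maxima(grid):
    """Return (row, col) of every pixel strictly brighter than its 8 neighbours."""
    rows, cols = len(grid), len(grid[0])
    peaks = []
    for r in range(rows):
        for c in range(cols):
            value = grid[r][c]
            neighbours = [grid[i][j]
                          for i in range(max(0, r - 1), min(rows, r + 2))
                          for j in range(max(0, c - 1), min(cols, c + 2))
                          if (i, j) != (r, c)]
            if all(value > n for n in neighbours):
                peaks.append((r, c))
    return peaks

# Toy brightness grid with two isolated bright "treetops".
grid = [[1, 1, 1, 1, 1],
        [1, 5, 1, 1, 1],
        [1, 1, 1, 6, 1],
        [1, 1, 1, 1, 1]]
```

Each detected peak would then seed one marker for the marker-controlled watershed, so overlapping crowns are separated rather than merged.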

  10. Modulation Classification of Satellite Communication Signals Using Cumulants and Neural Networks

    NASA Technical Reports Server (NTRS)

    Smith, Aaron; Evans, Michael; Downey, Joseph

    2017-01-01

    National Aeronautics and Space Administration (NASA)'s future communication architecture is evaluating cognitive technologies and increased system intelligence. These technologies are expected to reduce the operational complexity of the network, increase science data return, and reduce interference to self and others. In order to increase situational awareness, signal classification algorithms could be applied to identify users and distinguish sources of interference. A significant amount of previous work has been done in the area of automatic signal classification for military and commercial applications. As a preliminary step, we seek to develop a system with the ability to discern signals typically encountered in satellite communication. Proposed is an automatic modulation classifier which utilizes higher order statistics (cumulants) and an estimate of the signal-to-noise ratio. These features are extracted from baseband symbols and then processed by a neural network for classification. The modulation types considered are phase-shift keying (PSK), amplitude and phase-shift keying (APSK),and quadrature amplitude modulation (QAM). Physical layer properties specific to the Digital Video Broadcasting - Satellite- Second Generation (DVB-S2) standard, such as pilots and variable ring ratios, are also considered. This paper will provide simulation results of a candidate modulation classifier, and performance will be evaluated over a range of signal-to-noise ratios, frequency offsets, and nonlinear amplifier distortions.
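    The higher-order statistics used as features are fourth-order cumulants built from mixed moments M_pq = E[x^(p-q) conj(x)^q]; for unit-power constellations, C40 alone separates, for example, BPSK (C40 = -2) from QPSK (C40 = -1). A sketch using the textbook definitions, not NASA's implementation:

```python
def moment(symbols, p, q):
    """Mixed moment M_pq = E[x^(p-q) * conj(x)^q] over complex baseband symbols."""
    return sum((s ** (p - q)) * (s.conjugate() ** q) for s in symbols) / len(symbols)

def cumulants(symbols):
    """Fourth-order cumulants C40 and C42 for zero-mean symbols."""
    m20 = moment(symbols, 2, 0)
    m21 = moment(symbols, 2, 1)
    m40 = moment(symbols, 4, 0)
    m42 = moment(symbols, 4, 2)
    return {"C40": m40 - 3 * m20 ** 2,
            "C42": m42 - abs(m20) ** 2 - 2 * m21 ** 2}
```

In a classifier like the one described, these cumulant values (together with an SNR estimate) form the feature vector handed to the neural network.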

  11. Classification method, spectral diversity, band combination and accuracy assessment evaluation for urban feature detection

    NASA Astrophysics Data System (ADS)

    Erener, A.

    2013-04-01

    Automatic extraction of urban features from high resolution satellite images is one of the main applications in remote sensing. It is useful for wide-scale applications, namely: urban planning, urban mapping, disaster management, GIS (geographic information systems) updating, and military target detection. One common approach to detecting urban features from high resolution images is to use automatic classification methods. This paper has four main objectives with respect to detecting buildings. The first objective is to compare the performance of the most notable supervised classification algorithms, including the maximum likelihood classifier (MLC) and the support vector machine (SVM). In this experiment the primary consideration is the impact of kernel configuration on the performance of the SVM. The second objective of the study is to explore the suitability of integrating additional bands, namely the first principal component (1st PC) and the intensity image, with the original data for multi-classification approaches. The performance evaluation of classification results is done using two different accuracy assessment methods: pixel-based and object-based approaches, which reflects the third aim of the study. The objective here is to demonstrate the differences in the evaluation of accuracies of classification methods. For consistency, the same set of ground truth data, produced by labeling the building boundaries in the GIS environment, is used for accuracy assessment. Lastly, the fourth aim is to experimentally evaluate variation in the accuracy of classifiers for six different real situations in order to identify the impact of spatial and spectral diversity on results. The method is applied to Quickbird images for various urban complexity levels, extending from simple to complex urban patterns. The simple surface type includes a regular urban area with low density and systematic buildings with brick rooftops. The complex surface type involves almost all

  12. Photometric brown-dwarf classification. I. A method to identify and accurately classify large samples of brown dwarfs without spectroscopy

    NASA Astrophysics Data System (ADS)

    Skrzypek, N.; Warren, S. J.; Faherty, J. K.; Mortlock, D. J.; Burgasser, A. J.; Hewett, P. C.

    2015-02-01

    Aims: We present a method, named photo-type, to identify and accurately classify L and T dwarfs onto the standard spectral classification system using photometry alone. This enables the creation of large and deep homogeneous samples of these objects efficiently, without the need for spectroscopy. Methods: We created a catalogue of point sources with photometry in 8 bands, ranging from 0.75 to 4.6 μm, selected from an area of 3344 deg2, by combining SDSS, UKIDSS LAS, and WISE data. Sources with 13.0 0.8, were then classified by comparison against template colours of quasars, stars, and brown dwarfs. The L and T templates, spectral types L0 to T8, were created by identifying previously known sources with spectroscopic classifications, and fitting polynomial relations between colour and spectral type. Results: Of the 192 known L and T dwarfs with reliable photometry in the surveyed area and magnitude range, 189 are recovered by our selection and classification method. We have quantified the accuracy of the classification method both externally, with spectroscopy, and internally, by creating synthetic catalogues and accounting for the uncertainties. We find that, brighter than J = 17.5, photo-type classifications are accurate to one spectral sub-type, and are therefore competitive with spectroscopic classifications. The resultant catalogue of 1157 L and T dwarfs will be presented in a companion paper.

  13. Interference thinking in constructing students’ knowledge to solve mathematical problems

    NASA Astrophysics Data System (ADS)

    Jayanti, W. E.; Usodo, B.; Subanti, S.

    2018-04-01

    This research aims to describe interference thinking in constructing students' knowledge to solve mathematical problems. Interference thinking in problem solving occurs when students hold two concepts that interfere with each other. The construction of problem solving can be traced using Piaget's assimilation and accommodation framework, helping to reveal the students' thinking structures in solving the problems. The method of this research was qualitative, with a case study strategy. The data in this research involved problem-solving results and transcripts of interviews about students' errors in solving the problem. The results focus only on the student who experienced proactive interference, where the student, in solving a problem, used old information that interfered with the ability to recall new information. Interference thinking in constructing knowledge occurs when the student's thinking structures in the assimilation and accommodation process are incomplete. However, after the student was given reflection, the student's thinking process reached an equilibrium condition even though the result obtained remained wrong.

  14. Factors That Affect Large Subunit Ribosomal DNA Amplicon Sequencing Studies of Fungal Communities: Classification Method, Primer Choice, and Error

    PubMed Central

    Porter, Teresita M.; Golding, G. Brian

    2012-01-01

    Nuclear large subunit ribosomal DNA is widely used in fungal phylogenetics and, to an increasing extent, also in amplicon-based environmental sequencing. The relatively short reads produced by next-generation sequencing, however, make primer choice and sequence error important variables for obtaining accurate taxonomic classifications. In this simulation study we tested the performance of three classification methods: 1) a similarity-based method (BLAST + Metagenomic Analyzer, MEGAN); 2) a composition-based method (Ribosomal Database Project naïve Bayesian classifier, NBC); and 3) a phylogeny-based method (Statistical Assignment Package, SAP). We also tested the effects of sequence length, primer choice, and sequence error on classification accuracy and perceived community composition. Using a leave-one-out cross-validation approach, results for classifications to the genus rank were as follows: BLAST + MEGAN had the lowest error rate and was particularly robust to sequence error; SAP accuracy was highest when long LSU query sequences were classified; and NBC runs significantly faster than the other tested methods. All methods performed poorly with the shortest 50–100 bp sequences. Increasing simulated sequence error reduced classification accuracy. Community shifts were detected due to sequence error and primer selection even though there was no change in the underlying community composition. Short-read datasets from individual primers, as well as pooled datasets, appear to only approximate the true community composition. We hope this work informs investigators of some of the factors that affect the quality and interpretation of their environmental gene surveys. PMID:22558215

  15. Interactions between pre-processing and classification methods for event-related-potential classification: best-practice guidelines for brain-computer interfacing.

    PubMed

    Farquhar, J; Hill, N J

    2013-04-01

    Detecting event-related potentials (ERPs) from single trials is critical to the operation of many stimulus-driven brain computer interface (BCI) systems. The low strength of the ERP signal compared to the noise (due to artifacts and BCI-irrelevant brain processes) makes this a challenging signal detection problem. Previous work has tended to focus on how best to detect a single ERP type (such as the visual oddball response). However, the underlying ERP detection problem is essentially the same regardless of stimulus modality (e.g., visual or tactile), ERP component (e.g., P300 oddball response, or the error potential), measurement system or electrode layout. To investigate whether a single ERP detection method might work for a wider range of ERP BCIs we compare detection performance over a large corpus of more than 50 ERP BCI datasets whilst systematically varying the electrode montage, spectral filter, spatial filter and classifier training methods. We identify an interesting interaction between spatial whitening and regularised classification which made detection performance independent of the choice of spectral filter low-pass frequency. Our results show that a pipeline consisting of spectral filtering, spatial whitening, and regularised classification gives near-maximal performance in all cases. Importantly, this pipeline is simple to implement and completely automatic, with no expert feature selection or parameter tuning required. Thus, we recommend this combination as a "best-practice" method for ERP detection problems.
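    The spatial-whitening step of the recommended pipeline transforms the channels so that their sample covariance becomes the identity. A two-channel Cholesky-whitening sketch (illustrative only, not the authors' corpus code):

```python
import math

def covariance(ch1, ch2):
    """Zero-mean the two channels, then return their 2x2 sample covariance (a, b, c)."""
    n = len(ch1)
    m1, m2 = sum(ch1) / n, sum(ch2) / n
    a = sum((x - m1) ** 2 for x in ch1) / n
    c = sum((y - m2) ** 2 for y in ch2) / n
    b = sum((x - m1) * (y - m2) for x, y in zip(ch1, ch2)) / n
    return a, b, c

def whiten(ch1, ch2):
    """Cholesky whitening: with C = L L^T, applying W = L^-1 yields identity covariance."""
    n = len(ch1)
    m1, m2 = sum(ch1) / n, sum(ch2) / n
    x = [v - m1 for v in ch1]
    y = [v - m2 for v in ch2]
    a, b, c = covariance(ch1, ch2)
    l11 = math.sqrt(a)
    l21 = b / l11
    l22 = math.sqrt(c - l21 ** 2)
    w1 = [xi / l11 for xi in x]
    w2 = [(yi - l21 * xi / l11) / l22 for xi, yi in zip(x, y)]
    return w1, w2
```

After whitening, channel scale and inter-channel correlation no longer vary across datasets, which is what lets a single regularised classifier work across montages.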

  16. Description and evaluation of an interference assessment for a slotted-wall wind tunnel

    NASA Technical Reports Server (NTRS)

    Kemp, William B., Jr.

    1991-01-01

    A wind-tunnel interference assessment method applicable to test sections with discrete finite-length wall slots is described. The method is based on high order panel method technology and uses mixed boundary conditions to satisfy both the tunnel geometry and wall pressure distributions measured in the slotted-wall region. Both the test model and its sting support system are represented by distributed singularities. The method yields interference corrections to the model test data as well as surveys through the interference field at arbitrary locations. These results include the equivalent of tunnel Mach calibration, longitudinal pressure gradient, tunnel flow angularity, wall interference, and an inviscid form of sting interference. Alternative results which omit the direct contribution of the sting are also produced. The method was applied to the National Transonic Facility at NASA Langley Research Center for both tunnel calibration tests and tests of two models of subsonic transport configurations.

  17. Near-Field Noise Source Localization in the Presence of Interference

    NASA Astrophysics Data System (ADS)

    Liang, Guolong; Han, Bo

    In order to suppress the influence of interference sources on noise source localization in the near field, near-field broadband source localization in the presence of interference is studied. An oblique projection is constructed from the array measurements and the steering manifold of the interference sources, and is used to filter the interference signals out. The 2D-MUSIC algorithm is applied to the data at each frequency, and the results across frequencies are then averaged to localize the broadband noise sources. Simulations show that this method suppresses the interference sources effectively and is capable of locating a source that lies in the same direction as an interference source.
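    The interference-rejection step can be sketched with the standard oblique projector (a generic formulation, not necessarily the authors' exact construction): E preserves the steering matrix A of the sources of interest while nulling the interference steering matrix B.

```python
import numpy as np

def oblique_projection(A, B):
    """Oblique projector E with E @ A = A and E @ B = 0.

    A: steering matrix of the (near-field) sources to keep,
    B: steering matrix of the interference sources to reject.
    Assumes the projected A retains full column rank.
    """
    # Orthogonal projector onto the complement of range(B).
    Pb_perp = np.eye(B.shape[0]) - B @ np.linalg.pinv(B)
    # E = A (A^H Pb_perp A)^{-1} A^H Pb_perp, via the pseudoinverse.
    return A @ np.linalg.pinv(Pb_perp @ A) @ Pb_perp
```

    Applying E to the array snapshots removes the interference component while leaving the signal steering directions untouched.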

  18. An integrated method for cancer classification and rule extraction from microarray data

    PubMed Central

    Huang, Liang-Tsung

    2009-01-01

    Different microarray techniques have recently been used successfully to investigate useful information for cancer diagnosis at the gene expression level, owing to their ability to measure thousands of gene expression levels in a massively parallel way. One important issue is to improve the classification performance of microarray data. Ideally, however, influential genes and even interpretable rules should be explored at the same time to offer biological insight. Introducing the concepts of system design from software engineering, this paper presents an integrated and effective method (named X-AI) for accurate cancer classification and the acquisition of knowledge from DNA microarray data. The method includes a feature selector that systematically extracts the relatively important genes, so as to reduce the dimension while retaining as much of the class-discriminatory information as possible. Next, diagonal quadratic discriminant analysis (DQDA) is used to classify tumors, and generalized rule induction (GRI) is integrated to establish association rules that give an understanding of the relationships between cancer classes and related genes. Two non-redundant datasets of acute leukemia were used to validate the proposed X-AI, showing significantly high accuracy for discriminating different classes. The abilities of X-AI to extract relevant genes and to develop interpretable rules are also presented. Further, a web server has been established for cancer classification and it is freely available at . PMID:19272192
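    Of the components named above, DQDA is the most self-contained: each class is modeled by a per-gene mean and variance (diagonal covariance), and a sample is assigned to the class minimizing the resulting discriminant. A minimal sketch (our own, with a small variance floor added for numerical stability; not the X-AI implementation):

```python
import numpy as np

def dqda_fit(X, y):
    """Fit diagonal QDA: per-class means, per-feature variances, priors."""
    params = {}
    for k in np.unique(y):
        Xk = X[y == k]
        params[k] = (Xk.mean(axis=0), Xk.var(axis=0) + 1e-8, len(Xk) / len(X))
    return params

def dqda_predict(params, X):
    """Assign each row of X to the class with the smallest discriminant."""
    labels = list(params)
    scores = []
    for k in labels:
        mu, var, prior = params[k]
        # Quadratic term + log-determinant of the diagonal covariance - prior term.
        d = ((X - mu) ** 2 / var + np.log(var)).sum(axis=1) - 2 * np.log(prior)
        scores.append(d)
    return np.array(labels)[np.argmin(np.array(scores), axis=0)]
```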

  19. Exploration of computational methods for classification of movement intention during human voluntary movement from single trial EEG.

    PubMed

    Bai, Ou; Lin, Peter; Vorbach, Sherry; Li, Jiang; Furlani, Steve; Hallett, Mark

    2007-12-01

    To explore effective combinations of computational methods for the prediction of movement intention preceding the production of self-paced right and left hand movements from single trial scalp electroencephalogram (EEG). Twelve naïve subjects performed self-paced movements consisting of three key strokes with either hand. EEG was recorded from 128 channels. The exploration was performed offline on single trial EEG data. We proposed that a successful computational procedure for classification would consist of spatial filtering, temporal filtering, feature selection, and pattern classification. A systematic investigation was performed with combinations of spatial filtering using principal component analysis (PCA), independent component analysis (ICA), common spatial patterns analysis (CSP), and surface Laplacian derivation (SLD); temporal filtering using power spectral density estimation (PSD) and discrete wavelet transform (DWT); pattern classification using linear Mahalanobis distance classifier (LMD), quadratic Mahalanobis distance classifier (QMD), Bayesian classifier (BSC), multi-layer perceptron neural network (MLP), probabilistic neural network (PNN), and support vector machine (SVM). A robust multivariate feature selection strategy using a genetic algorithm was employed. The combinations of spatial filtering using ICA and SLD, temporal filtering using PSD and DWT, and classification methods using LMD, QMD, BSC and SVM provided higher performance than those of other combinations. Utilizing one of the better combinations of ICA, PSD and SVM, the discrimination accuracy was as high as 75%. Further feature analysis showed that beta band EEG activity of the channels over right sensorimotor cortex was most appropriate for discrimination of right and left hand movement intention. Effective combinations of computational methods provide possible classification of human movement intention from single trial EEG. Such a method could be the basis for a potential brain
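    Among the spatial filters compared, CSP has a particularly compact closed form: whiten the composite covariance, then eigendecompose the whitened class-1 covariance and keep filters from both ends of the eigenvalue spectrum. The sketch below assumes the two class-covariance matrices are already estimated; it is an illustration of the technique, not the study's implementation.

```python
import numpy as np

def csp(C1, C2, n_pairs=1):
    """Common spatial patterns from two class-covariance matrices.

    Returns spatial filters (as rows): the first minimises class-1
    variance relative to class 2, the last maximises it.
    """
    # Whiten the composite covariance C1 + C2.
    d, U = np.linalg.eigh(C1 + C2)
    P = np.diag(d ** -0.5) @ U.T
    # Eigendecompose the whitened class-1 covariance (ascending eigenvalues).
    mu, V = np.linalg.eigh(P @ C1 @ P.T)
    W = V.T @ P
    # Take n_pairs filters from each end of the spectrum.
    idx = list(range(n_pairs)) + list(range(len(mu) - n_pairs, len(mu)))
    return W[idx]
```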

  20. A new ICA-based fingerprint method for the automatic removal of physiological artifacts from EEG recordings

    PubMed Central

    Tamburro, Gabriella; Fiedler, Patrique; Stone, David; Haueisen, Jens

    2018-01-01

    Background: EEG may be affected by artefacts hindering the analysis of brain signals. Data-driven methods like independent component analysis (ICA) are successful approaches to remove artefacts from the EEG. However, the ICA-based methods developed so far are often affected by limitations, such as: the need for visual inspection of the separated independent components (subjectivity problem) and, in some cases, for the independent and simultaneous recording of the inspected artefacts to identify the artefactual independent components; a potentially heavy manipulation of the EEG signals; the use of linear classification methods; the use of simulated artefacts to validate the methods; no testing in dry electrode or high-density EEG datasets; applications limited to specific conditions and electrode layouts. Methods: Our fingerprint method automatically identifies EEG ICs containing eyeblinks, eye movements, myogenic artefacts and cardiac interference by evaluating 14 temporal, spatial, spectral, and statistical features composing the IC fingerprint. Sixty-two real EEG datasets containing cued artefacts are recorded with wet and dry electrodes (128 wet and 97 dry channels). For each artefact, 10 nonlinear SVM classifiers are trained on fingerprints of expert-classified ICs. Training groups include randomly chosen wet and dry datasets decomposed into 80 ICs. The classifiers are tested on the IC fingerprints of different datasets decomposed into 20, 50, or 80 ICs. The SVM performance is assessed in terms of accuracy, False Omission Rate (FOR), Hit Rate (HR), False Alarm Rate (FAR), and sensitivity (p). For each artefact, the quality of the artefact-free EEG reconstructed using the classification of the best SVM is assessed by visual inspection and SNR. Results: The best SVM classifier for each artefact type achieved average accuracy of 1 (eyeblink), 0.98 (cardiac interference), and 0.97 (eye movement and myogenic artefact). Average classification sensitivity (p) was 1

  1. Methods of automated absence seizure detection, interference by stimulation, and possibilities for prediction in genetic absence models.

    PubMed

    van Luijtelaar, Gilles; Lüttjohann, Annika; Makarov, Vladimir V; Maksimenko, Vladimir A; Koronovskii, Alexei A; Hramov, Alexander E

    2016-02-15

    Genetic rat models of childhood absence epilepsy have become instrumental in developing theories on the origin of absence epilepsy and in evaluating new and experimental treatments, as well as in developing new methods for automatic seizure detection, prediction, and/or interference with seizures. Various methods for automated off-line and on-line analyses of ECoG in rodent models are reviewed, as well as data on how to interfere with spike-wave discharges by different types of invasive and non-invasive electrical, magnetic, and optical brain stimulation. A new method for seizure prediction is also proposed. Many selective and specific methods for off-line and on-line spike-wave discharge detection seem excellent, with possibilities to overcome the issue of individual differences. Moreover, electrical deep brain stimulation is rather effective in interrupting ongoing spike-wave discharges at low stimulation intensity. A network-based method is proposed for absence seizure prediction with high sensitivity but low selectivity. Solutions that prevent false alarms, integrated in a closed-loop brain stimulation system, open the way for experimental seizure control. The presence of preictal precursor activity detected with state-of-the-art time-frequency and network analyses shows that spike-wave discharges are not caused by sudden and abrupt transitions but are preceded by detectable dynamic events. Changes in their time-space-frequency characteristics might yield new options for seizure prediction and seizure control. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Comparison of Classification Methods for Detecting Emotion from Mandarin Speech

    NASA Astrophysics Data System (ADS)

    Pao, Tsang-Long; Chen, Yu-Te; Yeh, Jun-Heng

    It is said that technology comes out of humanity. What is humanity? The very definition of humanity is emotion. Emotion is the basis for all human expression and the underlying theme behind everything that is done, said, thought, or imagined. If computers can perceive and respond to human emotion, human-computer interaction will become more natural. Several classifiers have been adopted for automatically assigning an emotion category, such as anger, happiness, or sadness, to a speech utterance. These classifiers were designed independently and tested on various emotional speech corpora, making it difficult to compare and evaluate their performance. In this paper, we first compare several popular classification methods and evaluate their performance by applying them to a Mandarin speech corpus consisting of five basic emotions: anger, happiness, boredom, sadness, and neutral. The extracted feature streams contain MFCC, LPCC, and LPC. The experimental results show that the proposed WD-MKNN classifier achieves an accuracy of 81.4% for the 5-class emotion recognition and outperforms other classification techniques, including KNN, MKNN, DW-KNN, LDA, QDA, GMM, HMM, SVM, and BPNN. Then, to verify the advantage of the proposed method, we compare these classifiers by applying them to another Mandarin expressive speech corpus consisting of two emotions. The experimental results again show that the proposed WD-MKNN outperforms the others.
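    Several of the compared classifiers belong to the distance-weighted nearest-neighbor family. For orientation, a generic distance-weighted kNN vote (not the paper's exact WD-MKNN, whose weighting scheme is specific to that work) looks like:

```python
import numpy as np

def weighted_knn(train_X, train_y, x, k=5):
    """Generic distance-weighted kNN: each of the k nearest training
    samples votes for its label with weight 1/distance."""
    d = np.linalg.norm(train_X - x, axis=1)
    nn = np.argsort(d)[:k]
    votes = {}
    for i in nn:
        votes[train_y[i]] = votes.get(train_y[i], 0.0) + 1.0 / (d[i] + 1e-9)
    return max(votes, key=votes.get)
```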

  3. EEG Classification with a Sequential Decision-Making Method in Motor Imagery BCI.

    PubMed

    Liu, Rong; Wang, Yongxuan; Newman, Geoffrey I; Thakor, Nitish V; Ying, Sarah

    2017-12-01

    Developing subject-specific classifiers to recognize mental states quickly and reliably is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this paper, a sequential decision-making strategy is explored in conjunction with an optimal wavelet analysis for EEG classification. Subject-specific wavelet parameters based on a grid-search method were first developed to determine the evidence accumulation curve for the sequential classifier. We then proposed a new method to set the two constrained thresholds in the sequential probability ratio test (SPRT) based on the cumulative curve and a desired expected stopping time. As a result, it balances the decision time of each class; we term it balanced-threshold SPRT (BTSPRT). The properties of the method were illustrated on 14 subjects' recordings from offline and online tests. Results showed an average maximum accuracy for the proposed method of 83.4% and an average decision time of 2.77 s, compared with 79.2% accuracy and a decision time of 3.01 s for the sequential Bayesian (SB) method. The BTSPRT method not only improves classification accuracy and decision speed compared with other non-sequential or SB methods, but also provides an explicit relationship between stopping time, thresholds, and error, which is important for balancing the speed-accuracy tradeoff. These results suggest that BTSPRT would be useful in explicitly adjusting the tradeoff between rapid decision-making and error-free device control.
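    The core SPRT loop is short enough to sketch. Note the hedge: in BTSPRT the two thresholds are derived from the evidence-accumulation curve and a desired expected stopping time; here they are plain constants for illustration.

```python
def sprt_classify(llr_stream, upper, lower):
    """Sequential probability ratio test: accumulate per-sample
    log-likelihood ratios until a threshold is crossed.

    Returns (decision, n_samples): +1 or -1 on a crossing,
    0 if the stream ends undecided.
    """
    s, n = 0.0, 0
    for llr in llr_stream:
        n += 1
        s += llr
        if s >= upper:
            return (+1, n)
        if s <= lower:
            return (-1, n)
    return (0, n)
```

    Widening the thresholds trades longer decision times for lower error rates, which is exactly the speed-accuracy tradeoff the abstract describes.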

  4. Multipath interference test method using synthesized chirped signal from directly modulated DFB-LD with digital-signal-processing technique.

    PubMed

    Aida, Kazuo; Sugie, Toshihiko

    2011-12-12

    We propose a method of testing transmission fiber lines and distributed amplifiers. Multipath interference (MPI) is detected as a beat spectrum between a multipath signal and a direct signal, using a synthesized chirped test signal with lightwave frequencies f1 and f2 periodically emitted from a distributed feedback laser diode (DFB-LD). The chirped test pulse is generated by directly modulating the DFB-LD with a drive signal calculated using a digital signal processing (DSP) technique. A receiver consisting of a photodiode and an electrical spectrum analyzer (ESA) detects a baseband power spectrum peak appearing at the test signal's frequency deviation (f1 - f2), as a beat spectrum of self-heterodyne detection. The multipath interference is derived from the spectral peak power. This method improves the minimum detectable MPI to as low as -78 dB. We discuss the detailed design and performance of the proposed test method, including the DSP algorithm for calculating the DFB-LD drive signal that synthesizes the chirped test signal, and experiments on single-mode fibers with discrete reflections. © 2011 Optical Society of America

  5. A wall interference assessment/correction system

    NASA Technical Reports Server (NTRS)

    Lo, Ching F.; Ulbrich, N.; Sickles, W. L.; Qian, Cathy X.

    1992-01-01

    A Wall Signature method, the Hackett method, has been selected to be adapted for the 12-ft Wind Tunnel wall interference assessment/correction (WIAC) system in the present phase. This method uses limited measurements of the static pressure at the wall, in conjunction with the solid wall boundary condition, to determine the strength and distribution of singularities representing the test article. The singularities are used in turn for estimating wall interferences at the model location. The Wall Signature method will be formulated for application to the unique geometry of the 12-ft Tunnel. The development and implementation of a working prototype will be completed, delivered and documented with a software manual. The WIAC code will be validated by conducting numerically simulated experiments rather than actual wind tunnel experiments. The simulations will be used to generate both free-air and confined wind-tunnel flow fields for each of the test articles over a range of test configurations. Specifically, the pressure signature at the test section wall will be computed for the tunnel case to provide the simulated 'measured' data. These data will serve as the input for the WIAC method-Wall Signature method. The performance of the WIAC method then may be evaluated by comparing the corrected parameters with those for the free-air simulation. Each set of wind tunnel/test article numerical simulations provides data to validate the WIAC method. A numerical wind tunnel test simulation is initiated to validate the WIAC methods developed in the project. In the present reported period, the blockage correction has been developed and implemented for a rectangular tunnel as well as the 12-ft Pressure Tunnel. An improved wall interference assessment and correction method for three-dimensional wind tunnel testing is presented in the appendix.

  6. Talker Localization Based on Interference between Transmitted and Reflected Audible Sound

    NASA Astrophysics Data System (ADS)

    Nakayama, Masato; Nakasako, Noboru; Shinohara, Toshihiro; Uebo, Tetsuji

    In many engineering fields, the distance to a target is very important. General distance-measurement methods use the time delay between transmitted and reflected waves, but short distances are difficult to estimate this way. On the other hand, methods using phase interference to measure short distances are well known in the field of microwave radar. We have therefore proposed a distance estimation method based on interference between transmitted and reflected audible sound, which can measure the distance between microphone and target with one microphone and one loudspeaker. In this paper, we propose a talker localization method based on distance estimation using phase interference. We extend the distance estimation method using phase interference to two microphones (a microphone array) in order to estimate the talker position. The proposed method can estimate the talker position by measuring the distance and direction between the target and the microphone array. In addition, the talker's speech is regarded as noise in the proposed method. Therefore, we also propose combining the proposed method with the CSP (cross-power spectrum phase analysis) method, which is one of the DOA (direction of arrival) estimation methods. We evaluated the performance of talker localization in real environments. The experimental results show the effectiveness of the proposed method.
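    The CSP step admits a compact sketch. This is the generic cross-power spectrum phase (GCC-PHAT) form of CSP between two microphone signals; the zero-padding and peak search are our own choices, not the authors' code.

```python
import numpy as np

def csp_delay(x1, x2, fs):
    """Estimate the relative delay (seconds) between two microphone
    signals with the CSP method: inverse FFT of the phase (magnitude-
    normalized) cross-spectrum, then a peak search over lags."""
    n = len(x1) + len(x2)
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    cross = X1 * np.conj(X2)
    # Keep only the phase of the cross-spectrum (PHAT weighting).
    cc = np.fft.irfft(cross / (np.abs(cross) + 1e-12), n)
    half = n // 2
    cc = np.concatenate((cc[-half:], cc[:half + 1]))  # center zero lag
    return (np.argmax(cc) - half) / fs
```

    With two microphones, the estimated delay maps directly to a direction of arrival given the array spacing and the speed of sound.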

  7. IETS and quantum interference: Propensity rules in the presence of an interference feature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lykkebo, Jacob; Solomon, Gemma C., E-mail: gsolomon@nano.ku.dk; Gagliardi, Alessio

    2014-09-28

    Destructive quantum interference in single molecule electronics is an intriguing phenomenon; however, distinguishing quantum interference effects from generically low transmission is not trivial. In this paper, we discuss how quantum interference effects in the transmission lead to either low current or a particular line shape in current-voltage curves, depending on the position of the interference feature. Second, we consider how inelastic electron tunneling spectroscopy can be used to probe the presence of an interference feature by identifying vibrational modes that are selectively suppressed when quantum interference effects dominate. That is, we expand the understanding of propensity rules in inelastic electron tunneling spectroscopy to molecules with destructive quantum interference.

  8. Enhanced conditioned eyeblink response acquisition and proactive interference in anxiety vulnerable individuals

    PubMed Central

    Holloway, Jacqueline L.; Trivedi, Payal; Myers, Catherine E.; Servatius, Richard J.

    2012-01-01

    In classical conditioning, proactive interference may arise from experience with the conditioned stimulus (CS), the unconditional stimulus (US), or both, prior to their paired presentations. Interest in the application of proactive interference has extended to clinical populations as either a risk factor for disorders or as a secondary sign. Although the current literature is dense with comparisons of stimulus pre-exposure effects in animals, such comparisons are lacking in human subjects. As such, interpretation of proactive interference over studies as well as its generalization and utility in clinical research is limited. The present study was designed to assess eyeblink response acquisition after equal numbers of CS, US, and explicitly unpaired CS and US pre-exposures, as well as to evaluate how anxiety vulnerability might modulate proactive interference. In the current study, anxiety vulnerability was assessed using the State/Trait Anxiety Inventories as well as the adult and retrospective measures of behavioral inhibition (AMBI and RMBI, respectively). Participants were exposed to 1 of 4 possible pre-exposure contingencies: 30 CS, 30 US, 30 CS, and 30 US explicitly unpaired pre-exposures, or Context pre-exposure, immediately prior to standard delay training. Robust proactive interference was evident in all pre-exposure groups relative to Context pre-exposure, independent of anxiety classification, with CR acquisition attenuated at similar rates. In addition, trait anxious individuals were found to have enhanced overall acquisition as well as greater proactive interference relative to non-vulnerable individuals. The findings suggest that anxiety vulnerable individuals learn implicit associations faster, an effect which persists after the introduction of new stimulus contingencies. This effect is not due to enhanced sensitivity to the US. Such differences would have implications for the development of anxiety psychopathology within a learning framework. PMID

  10. Consensus Classification Using Non-Optimized Classifiers.

    PubMed

    Brownfield, Brett; Lemos, Tony; Kalivas, John H

    2018-04-03

    Classifying samples into categories is a common problem in analytical chemistry and other fields. Classification is usually based on only one method, but numerous classifiers are available, some complex, such as neural networks, and others simple, such as k-nearest neighbors. Regardless, most classification schemes require optimization of one or more tuning parameters for best classification accuracy, sensitivity, and specificity. A process not requiring exact selection of tuning parameter values would be useful. To improve classification, several ensemble approaches have been used in past work to combine classification results from multiple optimized single classifiers. The collection of classifications for a particular sample is then combined by a fusion process such as majority vote to form the final classification. Presented in this Article is a method to classify a sample by combining multiple classification methods without specifically classifying the sample by each method, that is, the classification methods are not optimized. The approach is demonstrated on three analytical data sets. The first is a beer authentication set with samples measured on five instruments, allowing fusion of multiple instruments in three ways. The second data set is composed of textile samples from three classes based on Raman spectra. This data set is used to demonstrate the ability to classify simultaneously with different data preprocessing strategies, thereby reducing the need to determine the ideal preprocessing method, a common prerequisite for accurate classification. The third data set contains three wine cultivars for three classes measured on 13 unique chemical and physical variables. In all cases, fusion of non-optimized classifiers improves classification. Also presented are atypical uses of Procrustes analysis and extended inverted signal correction (EISC) for distinguishing sample similarities to respective classes.
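    The majority-vote fusion step mentioned above is itself a one-liner; the sketch below shows the plain label-level vote used by conventional ensembles (the Article's contribution is fusing without optimizing the individual classifiers, which this sketch does not reproduce).

```python
from collections import Counter

def fuse_majority(classifications):
    """Combine class labels from several classifiers by majority vote.

    Ties are broken in favour of the label that appears first in the
    input (Counter preserves insertion order for equal counts).
    """
    return Counter(classifications).most_common(1)[0][0]
```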

  11. TPH detection in groundwater: Identification and elimination of positive interferences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zemo, D.A.; Synowiec, K.A.

    1996-01-01

    Groundwater assessment programs frequently require total petroleum hydrocarbon (TPH) analyses (Methods 8015M and 418.1). TPH analyses are often unreliable indicators of water quality because these methods are not constituent-specific and are vulnerable to significant sources of positive interferences. These positive interferences include: (a) non-dissolved petroleum constituents; (b) soluble, non-petroleum hydrocarbons (e.g., biodegradation products); and (c) turbidity, commonly introduced into water samples during sample collection. In this paper, we show that the portion of a TPH concentration not directly the result of water-soluble petroleum constituents can be attributed solely to these positive interferences. To demonstrate the impact of these interferences, we conducted a field experiment at a site affected by degraded crude oil. Although TPH was consistently detected in groundwater samples, BTEX was not detected. PNAs were not detected, except for very low concentrations of fluorene (<5 µg/L). Filtering and silica gel cleanup steps were added to sampling and analyses to remove particulates and biogenic by-products. Results showed that filtering lowered the Method 8015M concentrations and reduced the Method 418.1 concentrations to non-detectable. Silica gel cleanup reduced the Method 8015M concentrations to non-detectable. We conclude from this study that the TPH results from groundwater samples are artifacts of positive interferences caused by both particulates and biogenic materials and do not represent dissolved-phase petroleum constituents.

  13. Method of making an improved superconducting quantum interference device

    DOEpatents

    Wu, Cheng-Teh; Falco, Charles M.; Kampwirth, Robert T.

    1977-01-01

    An improved superconducting quantum interference device is made by sputtering a thin film of an alloy of three parts niobium to one part tin in a pattern comprising a closed loop with a narrow region, depositing a thin film of a radiation shield such as copper over the niobium-tin, scribing a narrow line in the copper over the narrow region, exposing the structure at the scribed line to radiation and removing the deposited copper.

  14. A Robust Real Time Direction-of-Arrival Estimation Method for Sequential Movement Events of Vehicles.

    PubMed

    Liu, Huawei; Li, Baoqing; Yuan, Xiaobing; Zhou, Qianwei; Huang, Jingchang

    2018-03-27

    Parameter estimation for sequential movement events of vehicles faces the challenges of noise interference and the demands of portable implementation. In this paper, we propose a robust direction-of-arrival (DOA) estimation method for sequential movement events of vehicles based on a small Micro-Electro-Mechanical System (MEMS) microphone array. Inspired by the incoherent signal-subspace method (ISM), the proposed method employs multiple sub-bands, selected from the wideband signals for their high magnitude-squared coherence, to track moving vehicles in the presence of wind noise. Field test results demonstrate that the proposed method performs better at estimating the DOA of a moving vehicle, even under severe wind interference, than the narrowband multiple signal classification (MUSIC) method, the sub-band DOA estimation method, and the classical two-sided correlation transformation (TCT) method.
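    For reference, the narrowband MUSIC baseline against which the method is compared can be sketched for a uniform linear array (the half-wavelength spacing and the search grid here are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def music_spectrum(R, n_src, d_over_lambda=0.5,
                   grid=np.linspace(-90, 90, 181)):
    """Narrowband MUSIC pseudo-spectrum for an M-element uniform
    linear array. R: (M, M) snapshot covariance; n_src: source count."""
    M = R.shape[0]
    w, V = np.linalg.eigh(R)
    En = V[:, : M - n_src]  # noise subspace (smallest eigenvalues)
    k = np.arange(M)
    P = []
    for theta in grid:
        a = np.exp(-2j * np.pi * d_over_lambda * k * np.sin(np.radians(theta)))
        # Pseudo-spectrum peaks where a(theta) is orthogonal to the noise subspace.
        P.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return grid, np.array(P)
```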

  15. Application of classification methods for mapping Mercury's surface composition: analysis on Rudaki's Area

    NASA Astrophysics Data System (ADS)

    Zambon, F.; De Sanctis, M. C.; Capaccioni, F.; Filacchione, G.; Carli, C.; Ammanito, E.; Friggeri, A.

    2011-10-01

    During the first two MESSENGER flybys (14th January 2008 and 6th October 2008) the Mercury Dual Imaging System (MDIS) extended the coverage of Mercury's surface obtained by Mariner 10, and we now have images of about 90% of the surface [1]. MDIS is equipped with a Narrow Angle Camera (NAC) and a Wide Angle Camera (WAC). The NAC uses an off-axis reflective design with a 1.5° field of view (FOV) centered at 747 nm. The WAC has a refractive design with a 10.5° FOV and 12-position filters that cover the 395-1040 nm spectral range [2]. The color images can be used to infer information on the surface composition, and classification methods are an interesting technique for multispectral image analysis which can be applied to the study of planetary surfaces. Classification methods are based on clustering algorithms and can be divided into two categories: unsupervised and supervised. Unsupervised classifiers do not require analyst feedback; the algorithm automatically organizes pixel values into classes. In the supervised method, instead, the analyst must choose "training areas" that define the pixel values of a given class [3]. Here we describe the classification into different compositional units of the region near the Rudaki Crater on Mercury.
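    A minimal unsupervised classifier of the kind described is k-means clustering of multispectral pixel vectors. The sketch below is generic (the farthest-point initialization is our own choice for determinism), not the analysis pipeline used in the study.

```python
import numpy as np

def kmeans_classify(pixels, k, iters=50):
    """Unsupervised classification: k-means on (n_pixels, n_bands) data.

    Returns integer class labels and the class centroids."""
    # Deterministic farthest-point initialization.
    centroids = [pixels[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(pixels - c, axis=1) for c in centroids], axis=0)
        centroids.append(pixels[d.argmax()])
    centroids = np.array(centroids, dtype=float)
    for _ in range(iters):
        # Assign each pixel to its nearest centroid, then update centroids.
        d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = pixels[labels == j].mean(axis=0)
    return labels, centroids
```

    A supervised variant would instead fix the class centroids (or statistics) from analyst-chosen training areas and only perform the assignment step.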

  16. Optimized Extraction Method To Remove Humic Acid Interferences from Soil Samples Prior to Microbial Proteome Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qian, Chen; Hettich, Robert L.

    The microbial composition of soil environments, and the activities of those microbes, play a critical role in organic matter transformation and nutrient cycling, most specifically with respect to impact on plant growth but also more broadly with respect to global carbon and nitrogen cycling. Liquid chromatography coupled to high-performance mass spectrometry provides a powerful approach to characterize soil microbiomes; however, the limited microbial biomass and the presence of abundant interferences in soil samples present major challenges to soil proteome extraction and subsequent MS measurement. To address some of the major issues, we have designed and optimized an experimental method that enhances microbial proteome extraction while minimizing co-extraction of soil-borne humic substances. Among the range of interferences, humic substances are often the worst in terms of adversely impacting proteome extraction and mass spectrometry measurement. Our approach employs in-situ detergent-based microbial lysis / TCA precipitation coupled with an additional acidification precipitation step at the peptide level, which efficiently removes humic acids. By combining filtration and pH adjustment of the final peptide solution, the remaining humic acids can be differentially precipitated and removed with a membrane filter, leaving much cleaner proteolytic peptide samples for MS measurement. As a result, this modified method is a reliable and straightforward protein extraction method that efficiently removes soil-borne humic substances without inducing proteome sample loss or reducing or biasing protein identification in mass spectrometry.

  17. Optimized Extraction Method To Remove Humic Acid Interferences from Soil Samples Prior to Microbial Proteome Measurements

    DOE PAGES

    Qian, Chen; Hettich, Robert L.

    2017-05-24

    The microbial composition of soil environments, and the activities of those microbes, play a critical role in organic matter transformation and nutrient cycling, most specifically with respect to impact on plant growth but also more broadly with respect to global carbon and nitrogen cycling. Liquid chromatography coupled to high-performance mass spectrometry provides a powerful approach to characterize soil microbiomes; however, the limited microbial biomass and the presence of abundant interferences in soil samples present major challenges to soil proteome extraction and subsequent MS measurement. To address some of the major issues, we have designed and optimized an experimental method that enhances microbial proteome extraction while minimizing co-extraction of soil-borne humic substances. Among the range of interferences, humic substances are often the worst in terms of adversely impacting proteome extraction and mass spectrometry measurement. Our approach employs in-situ detergent-based microbial lysis / TCA precipitation coupled with an additional acidification precipitation step at the peptide level, which efficiently removes humic acids. By combining filtration and pH adjustment of the final peptide solution, the remaining humic acids can be differentially precipitated and removed with a membrane filter, leaving much cleaner proteolytic peptide samples for MS measurement. As a result, this modified method is a reliable and straightforward protein extraction method that efficiently removes soil-borne humic substances without inducing proteome sample loss or reducing or biasing protein identification in mass spectrometry.

  18. Mechanics of the tapered interference fit in dental implants.

    PubMed

    Bozkaya, Dinçer; Müftü, Sinan

    2003-11-01

    In evaluating the long-term success of a dental implant, the reliability and stability of the implant-abutment interface play a major role. Tapered interference fits provide a reliable connection method between the abutment and the implant. In this work, the mechanics of tapered interference fits were analyzed using a closed-form formula and the finite element (FE) method. An analytical solution, originally used to predict the contact pressure in a straight interference fit, was modified to predict the contact pressure at the tapered implant-abutment interface. Elastic-plastic FE analysis was used to simulate the implant and abutment material behavior. The validity and applicability of the analytical solution were investigated by comparison with the FE model over a range of problem parameters. It was shown that the analytical solution can be used to determine the pull-out force and loosening torque with 5-10% error. A detailed analysis of the stress distribution due to the tapered interference fit in a commercially available abutment-implant system was carried out. This analysis shows that plastic deformation in the implant limits the increase in pull-out force that would otherwise be predicted for higher interference values.
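    The closed-form starting point, the Lamé thick-walled-cylinder press-fit solution, is easy to evaluate numerically. The sketch below uses that classical formula plus a small-angle friction model for the pull-out force; the material and geometry values are illustrative assumptions, not the paper's implant parameters:

```python
import math

def contact_pressure(E, delta, b, c):
    """Contact pressure (Pa) for a solid shaft pressed into a hub of the
    same material (classical Lame thick-walled-cylinder solution).
    delta: radial interference (m); b: interface radius; c: hub outer radius."""
    return E * delta * (c**2 - b**2) / (2 * b * c**2)

def pull_out_force(p, b, L, mu, taper_deg):
    """Axial force to separate a shallow tapered joint: friction minus the
    axial component of the contact pressure (small-angle approximation)."""
    theta = math.radians(taper_deg)
    area = 2 * math.pi * b * L                 # nominal contact area
    return p * area * (mu * math.cos(theta) - math.sin(theta))

# Illustrative titanium-like values (assumed, not from the paper)
E, delta = 110e9, 2e-6          # Young's modulus (Pa), radial interference (m)
b, c, L = 2e-3, 4e-3, 4e-3      # interface radius, hub radius, contact length (m)
mu, taper = 0.3, 1.5            # friction coefficient, taper half-angle (deg)
p = contact_pressure(E, delta, b, c)
F = pull_out_force(p, b, L, mu, taper)
print(f"contact pressure {p/1e6:.1f} MPa, pull-out force {F:.1f} N")
```

    The paper's point is that an elastic formula like this over-predicts the pull-out force at high interference, because plastic deformation in the implant caps the achievable contact pressure.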

  19. Project implementation : classification of organic soils and classification of marls - training of INDOT personnel.

    DOT National Transportation Integrated Search

    2012-09-01

    This is an implementation project for the research completed as part of the following projects: SPR3005 "Classification of Organic Soils" and SPR3227 "Classification of Marl Soils". The methods developed for the classification of both soi...

  20. Genetic Bee Colony (GBC) algorithm: A new gene selection method for microarray cancer classification.

    PubMed

    Alshamlan, Hala M; Badr, Ghada H; Alohali, Yousef A

    2015-06-01

    Naturally inspired evolutionary algorithms have proven effective for solving feature selection and classification problems. Artificial Bee Colony (ABC) is a relatively new swarm intelligence method. In this paper, we propose a new hybrid gene selection method, the Genetic Bee Colony (GBC) algorithm. The proposed algorithm combines the use of a Genetic Algorithm (GA) with the Artificial Bee Colony (ABC) algorithm, with the goal of integrating the advantages of both. The proposed algorithm is applied to microarray gene expression profiles in order to select the most predictive and informative genes for cancer classification. To test the accuracy of the proposed algorithm, extensive experiments were conducted. Three binary microarray datasets are used: colon, leukemia, and lung. In addition, three multi-class microarray datasets are used: SRBCT, lymphoma, and leukemia. Results of the GBC algorithm are compared with our recently proposed technique, mRMR combined with the Artificial Bee Colony algorithm (mRMR-ABC). We also compared the combination of mRMR with GA (mRMR-GA) and with Particle Swarm Optimization (mRMR-PSO). In addition, we compared the GBC algorithm with other related algorithms recently published in the literature, using all benchmark datasets. The GBC algorithm shows superior performance, achieving the highest classification accuracy along with the lowest average number of selected genes. This demonstrates that the GBC algorithm is a promising approach for solving the gene selection problem in both binary and multi-class cancer classification. Copyright © 2015 Elsevier Ltd. All rights reserved.
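    The genetic half of such a hybrid can be sketched as a minimal GA wrapped around a wrapper-style fitness function. The toy dataset, the nearest-centroid fitness, and all hyperparameters below are assumptions for illustration; the actual GBC algorithm also incorporates the ABC search, which is omitted here:

```python
import numpy as np

# Toy "microarray": 60 samples, 40 genes, two of which carry the class signal
rng = np.random.default_rng(0)
n_samples, n_genes, informative = 60, 40, {3, 17}
y = rng.integers(0, 2, n_samples)
X = rng.normal(0, 1, (n_samples, n_genes))
for g in informative:
    X[:, g] += 3.0 * y

def fitness(mask):
    """Nearest-centroid training accuracy on the selected genes,
    penalized by the number of genes kept."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask.astype(bool)]
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = (np.linalg.norm(Xs - c1, axis=1)
            < np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return (pred == y).mean() - 0.01 * mask.sum()

# Evolve binary gene masks: selection, one-point crossover, bit-flip mutation
pop = (rng.random((30, n_genes)) < 0.2).astype(int)
for _ in range(40):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(0, 10, 2)]
        cut = rng.integers(1, n_genes)
        child = np.concatenate([a[:cut], b[cut:]])
        child ^= (rng.random(n_genes) < 0.02).astype(int)
        children.append(child)
    pop = np.array(children)
best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected genes:", np.flatnonzero(best))
```

    On this toy problem the search reliably converges to a small mask that includes the informative genes, which is the behavior the paper reports for GBC on real microarray data.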

  1. Comparison of two Classification methods (MLC and SVM) to extract land use and land cover in Johor Malaysia

    NASA Astrophysics Data System (ADS)

    Rokni Deilmai, B.; Ahmad, B. Bin; Zabihi, H.

    2014-06-01

    Mapping is essential for the analysis of land use and land cover, which influence many environmental processes and properties. When creating land cover maps, it is important to minimize error, since errors propagate into later analyses based on those maps. The reliability of land cover maps derived from remotely sensed data depends on an accurate classification. In this study, we analyzed multispectral data using two different classifiers: the Maximum Likelihood Classifier (MLC) and the Support Vector Machine (SVM). To this end, Landsat Thematic Mapper data and identical field-based training sample datasets in Johor, Malaysia were used for each classification method, yielding five land cover classes: forest, oil palm, urban area, water, and rubber. Classification results indicate that SVM was more accurate than MLC. With a demonstrated capability to produce reliable land cover results, SVM methods should be especially useful for land cover classification.
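    The MLC baseline is a per-class Gaussian model evaluated by log-likelihood. A minimal sketch on synthetic three-band "pixels" (the class spectra and all numbers are invented, not the study's Landsat data):

```python
import numpy as np

def train_mlc(X, y):
    """Fit per-class Gaussian parameters for a maximum likelihood classifier."""
    stats = {}
    for c in np.unique(y):
        Xc = X[y == c]
        stats[c] = (Xc.mean(0), np.cov(Xc.T))
    return stats

def predict_mlc(X, stats):
    """Assign each pixel to the class with the highest Gaussian log-likelihood."""
    scores = []
    for c, (mu, cov) in stats.items():
        inv, (_, logdet) = np.linalg.inv(cov), np.linalg.slogdet(cov)
        d = X - mu
        scores.append(-0.5 * (logdet + np.einsum('ij,jk,ik->i', d, inv, d)))
    classes = list(stats)
    return np.array(classes)[np.argmax(scores, axis=0)]

# Two well-separated synthetic spectral classes (values assumed)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0.10, 0.05, 0.03], 0.01, (200, 3)),
               rng.normal([0.05, 0.30, 0.20], 0.02, (200, 3))])
y = np.repeat([0, 1], 200)
stats = train_mlc(X, y)
acc = (predict_mlc(X, stats) == y).mean()
print("training accuracy:", acc)
```

    An SVM instead fits a maximum-margin boundary without assuming Gaussian class distributions, which is one common explanation for its edge on real, non-Gaussian land cover spectra.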

  2. [Correlation coefficient-based principle and method for the classification of jump degree in hydrological time series].

    PubMed

    Wu, Zi Yi; Xie, Ping; Sang, Yan Fang; Gu, Hai Ting

    2018-04-01

    The phenomenon of jump is one of the important external forms of hydrological variability under environmental change, representing the adaptation of hydrological nonlinear systems to the influence of external disturbances. At present, related studies mainly focus on methods for identifying the jump positions and jump times in hydrological time series. In contrast, few studies have focused on the quantitative description and classification of jump degree in hydrological time series, which makes it difficult to understand environmental changes and evaluate their potential impacts. Here, we propose a theoretically reliable and easy-to-apply method for the classification of jump degree in hydrological time series, using the correlation coefficient as a basic index. Statistical tests verified the accuracy, reasonability, and applicability of this method. The relationship between the correlation coefficient and the jump degree of a series was derived mathematically. Several thresholds of the correlation coefficient under different statistical significance levels were then chosen, based on which the jump degree can be classified into five levels: no, weak, moderate, strong, and very strong. Finally, our method was applied to five different observed hydrological time series with diverse geographic and hydrological conditions in China. The resulting classifications of jump degree accorded closely with the physical hydrological mechanisms of those series, indicating the practicability of our method.
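    The core idea, grading a jump by the correlation between the series and a fitted two-level step, can be sketched as follows. The thresholds here are illustrative placeholders, not the paper's significance-derived values:

```python
import numpy as np

def jump_degree(series, jump_pos, thresholds=(0.3, 0.5, 0.7, 0.9)):
    """Grade a step change by the correlation between the series and a
    two-level step signal switching at jump_pos. The thresholds are
    illustrative, not the paper's significance-derived values."""
    step = np.where(np.arange(len(series)) < jump_pos,
                    series[:jump_pos].mean(), series[jump_pos:].mean())
    r = abs(np.corrcoef(series, step)[0, 1])
    labels = ["no", "weak", "moderate", "strong", "very strong"]
    return labels[np.searchsorted(thresholds, r)], r

# Synthetic series with a pronounced upward jump mid-series
rng = np.random.default_rng(0)
x = rng.normal(10, 1, 100)
x[50:] += 6.0
label, r = jump_degree(x, 50)
print(label, round(r, 2))
```

    The larger the jump relative to the background variability, the closer the series tracks the step signal and the higher the correlation, which is what makes the coefficient usable as a graded index.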

  3. Qualitative properties of roasting defect beans and development of its classification methods by hyperspectral imaging technology.

    PubMed

    Cho, Jeong-Seok; Bae, Hyung-Jin; Cho, Byoung-Kwan; Moon, Kwang-Deog

    2017-04-01

    Qualitative properties of roasting-defect coffee beans and methods for their classification were studied using hyperspectral imaging (HSI). The roasting-defect beans were divided into 5 groups: medium roasting (Cont), under-developed (RD-1), over-roasting (RD-2), interior under-developed (RD-3), and interior scorching (RD-4). The following qualitative properties were assayed: browning index (BI), moisture content (MC), and chlorogenic acid (CA), trigonelline (TG), and caffeine (CF) content. The HSI spectra (1000-1700 nm) were also analysed to develop classification methods for roasting-defect beans. RD-2 showed the highest BI and the lowest MC, CA, and TG content. The accuracy of the partial least-squares discriminant classification model was 86.2%. The most powerful wavelength for classifying the defective beans was approximately 1420 nm (related to the OH bond). The HSI reflectance values at 1420 nm showed a tendency similar to MC, enabling the use of this technology to classify roasting-defect beans. Copyright © 2016. Published by Elsevier Ltd.

  4. Visible Light Image-Based Method for Sugar Content Classification of Citrus

    PubMed Central

    Wang, Xuefeng; Wu, Chunyan; Hirafuji, Masayuki

    2016-01-01

    Visible light imaging of citrus fruit from Mie Prefecture, Japan was performed to determine whether an algorithm could be developed to predict sugar content. This nondestructive classification showed that accurate segmentation of different images can be achieved by a correlation analysis based on a threshold value of the coefficient of determination. There is a clear correlation between the sugar content of citrus fruit and certain parameters of the color images. The selected image parameters were combined by an addition algorithm, and the sugar content of citrus fruit can then be predicted by the dummy variable method. The results showed that small but orange citrus fruits often have a high sugar content. The study shows that it is possible to predict the sugar content of citrus fruit and to classify fruit by sugar content using light in the visible spectrum, without the need for an additional light source. PMID:26811935

  5. Adipose Tissue Quantification by Imaging Methods: A Proposed Classification

    PubMed Central

    Shen, Wei; Wang, ZiMian; Punyanita, Mark; Lei, Jianbo; Sinav, Ahmet; Kral, John G.; Imielinska, Celina; Ross, Robert; Heymsfield, Steven B.

    2007-01-01

    Recent advances in imaging techniques and in understanding of differences in the molecular biology of adipose tissue have rendered classical anatomy obsolete, requiring a new classification of the topography of adipose tissue. Adipose tissue is one of the largest body compartments, yet a classification that defines specific adipose tissue depots based on their anatomic location and related functions is lacking. The absence of an accepted taxonomy poses problems for investigators studying adipose tissue topography and its functional correlates. The aim of this review was to critically examine the literature on imaging of whole-body and regional adipose tissue and to create the first systematic classification of adipose tissue topography. Adipose tissue terminology was examined in over 100 original publications. Our analysis revealed inconsistencies in the use of specific definitions, especially for the compartment termed “visceral” adipose tissue. This analysis leads us to propose an updated classification of total body and regional adipose tissue, providing a well-defined basis for correlating imaging studies of specific adipose tissue depots with molecular processes. PMID:12529479

  6. Evaluation of image deblurring methods via a classification metric

    NASA Astrophysics Data System (ADS)

    Perrone, Daniele; Humphreys, David; Lamb, Robert A.; Favaro, Paolo

    2012-09-01

    The performance of single image deblurring algorithms is typically evaluated via a certain discrepancy measure between the reconstructed image and the ideal sharp image. The choice of metric, however, has been a source of debate and has also led to alternative metrics based on human visual perception. While fixed metrics may fail to capture some small but visible artifacts, perception-based metrics may favor reconstructions with artifacts that are visually pleasant. To overcome these limitations, we propose to assess the quality of reconstructed images via a task-driven metric. In this paper we consider object classification as the task and therefore use the rate of classification as the metric to measure deblurring performance. In our evaluation we use data with different types of blur in two cases: Optical Character Recognition (OCR), where the goal is to recognise characters in a black and white image, and object classification with no restrictions on pose, illumination and orientation. Finally, we show how off-the-shelf classification algorithms benefit from working with deblurred images.
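    The task-driven idea can be illustrated with a toy nearest-template classifier whose accuracy serves as the quality metric: a reconstruction is judged better insofar as the classifier performs better on it. The templates, blur kernel, and noise level below are all invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
templates = rng.random((5, 16, 16))       # 5 reference "object classes"

def blur(img, k=3):
    """Simple k x k box blur via a uniform moving average (wraps at edges)."""
    out = img.copy()
    for ax in (0, 1):
        out = np.stack([np.roll(out, s, axis=ax)
                        for s in range(-(k // 2), k // 2 + 1)]).mean(0)
    return out

def classify(imgs):
    """Nearest-template classifier: its accuracy is the task-driven metric."""
    d = ((imgs[:, None] - templates[None]) ** 2).sum(axis=(2, 3))
    return d.argmin(axis=1)

labels = np.repeat(np.arange(5), 20)
noisy = templates[labels] + 0.05 * rng.standard_normal((100, 16, 16))
blurred = np.array([blur(im) for im in noisy])
acc_sharp = (classify(noisy) == labels).mean()
acc_blur = (classify(blurred) == labels).mean()
print("accuracy sharp vs blurred:", acc_sharp, acc_blur)
```

    Scoring a deblurring algorithm then amounts to running this classifier on its reconstructions: the closer the accuracy returns to the sharp-image baseline, the better the deblurring, regardless of how the result looks to a human observer.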

  7. EASI - EQUILIBRIUM AIR SHOCK INTERFERENCE

    NASA Technical Reports Server (NTRS)

    Glass, C. E.

    1994-01-01

    New research on hypersonic vehicles, such as the National Aero-Space Plane (NASP), has raised concerns about the effects of shock-wave interference on various structural components of the craft. State-of-the-art aerothermal analysis software is inadequate to predict local flow and heat flux in areas of extremely high heat transfer, such as the surface impingement of an Edney-type supersonic jet. EASI revives and updates older computational methods for calculating inviscid flow field and maximum heating from shock wave interference. The program expands these methods to solve problems involving the six shock-wave interference patterns on a two-dimensional cylindrical leading edge with an equilibrium chemically reacting gas mixture (representing, for example, the scramjet cowl of the NASP). The inclusion of gas chemistry allows for a more accurate prediction of the maximum pressure and heating loads by accounting for the effects of high temperature on the air mixture. Caloric imperfections and species dissociation of high-temperature air cause shock-wave angles, flow deflection angles, and thermodynamic properties to differ from those calculated by a calorically perfect gas model. EASI contains pressure- and temperature-dependent thermodynamic and transport properties to determine heating rates, and uses either a calorically perfect air model or an 11-species, 7-reaction reacting air model at equilibrium with temperatures up to 15,000 K for the inviscid flowfield calculations. EASI solves the flow field and the associated maximum surface pressure and heat flux for the six common types of shock wave interference. Depending on the type of interference, the program solves for shock-wave/boundary-layer interaction, expansion-fan/boundary-layer interaction, attaching shear layer or supersonic jet impingement. Heat flux predictions require a knowledge (from experimental data or relevant calculations) of a pertinent length scale of the interaction. Output files contain flow

  8. Chemometrics-assisted spectrophotometric green method for correcting interferences in biowaiver studies: Application to assay and dissolution profiling study of donepezil hydrochloride tablets

    NASA Astrophysics Data System (ADS)

    Korany, Mohamed A.; Mahgoub, Hoda; Haggag, Rim S.; Ragab, Marwa A. A.; Elmallah, Osama A.

    2018-06-01

    A green, simple, and cost-effective chemometric UV-Vis spectrophotometric method has been developed and validated for correcting interferences that arise during biowaiver studies. Chemometric manipulation was used to enhance the direct-absorbance results obtained from the very low concentrations (with a high incidence of background noise interference) at the earlier time points of the dissolution profile, using first and second derivative (D1 & D2) methods and their corresponding Fourier-function-convoluted methods (D1/FF & D2/FF). The method was applied to a biowaiver study of Donepezil Hydrochloride (DH) as a representative model by comparing two different dosage forms containing 5 mg DH per tablet, as an application of the developed chemometric method for correcting interferences as well as for assay and dissolution testing of the tablet dosage form. The results showed that the first derivative technique can be used for enhancement of the data in the low concentration range of DH (1-8 μg mL-1) in the three different pH dissolution media, which were used to estimate the low drug concentrations dissolved at the early time points of the biowaiver study. Furthermore, the results showed similarity in phosphate buffer of pH 6.8 and dissimilarity in the other two pH media. The method was validated according to ICH guidelines and the USP monograph for both the assay (HCl of pH 1.2) and the dissolution study in three pH media (HCl of pH 1.2, acetate buffer of pH 4.5, and phosphate buffer of pH 6.8). Finally, the greenness of the method was assessed using two different techniques: the National Environmental Method Index label and the Eco Scale. Both techniques confirmed the greenness of the proposed method.
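    The key property exploited by derivative spectrophotometry, that differentiation removes an additive baseline, is easy to demonstrate numerically. The Gaussian absorption band and offset below are synthetic, not DH spectra:

```python
import numpy as np

wl = np.linspace(220, 320, 201)                   # wavelength grid, nm
band = np.exp(-0.5 * ((wl - 270) / 8.0) ** 2)     # synthetic analyte band

def first_derivative(spectrum, wl):
    """D1 spectrum: numerical derivative w.r.t. wavelength. Any constant
    baseline offset differentiates to exactly zero."""
    return np.gradient(spectrum, wl)

# Weak signal riding on a large constant background offset
low_conc = 0.05 * band + 0.10
d1 = first_derivative(low_conc, wl)
# The D1 peak-to-trough amplitude depends only on the analyte band,
# not on the 0.10 background
print("D1 amplitude:", round(d1.max() - d1.min(), 4))
```

    The Fourier-function convolution used in the paper plays a complementary role: derivatives amplify high-frequency noise, and the convolution smooths it back out before quantitation.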

  9. Chemometrics-assisted spectrophotometric green method for correcting interferences in biowaiver studies: Application to assay and dissolution profiling study of donepezil hydrochloride tablets.

    PubMed

    Korany, Mohamed A; Mahgoub, Hoda; Haggag, Rim S; Ragab, Marwa A A; Elmallah, Osama A

    2018-06-15

    A green, simple, and cost-effective chemometric UV-Vis spectrophotometric method has been developed and validated for correcting interferences that arise during biowaiver studies. Chemometric manipulation was used to enhance the direct-absorbance results obtained from the very low concentrations (with a high incidence of background noise interference) at the earlier time points of the dissolution profile, using first and second derivative (D1 & D2) methods and their corresponding Fourier-function-convoluted methods (D1/FF & D2/FF). The method was applied to a biowaiver study of Donepezil Hydrochloride (DH) as a representative model by comparing two different dosage forms containing 5 mg DH per tablet, as an application of the developed chemometric method for correcting interferences as well as for assay and dissolution testing of the tablet dosage form. The results showed that the first derivative technique can be used for enhancement of the data in the low concentration range of DH (1-8 μg mL-1) in the three different pH dissolution media, which were used to estimate the low drug concentrations dissolved at the early time points of the biowaiver study. Furthermore, the results showed similarity in phosphate buffer of pH 6.8 and dissimilarity in the other two pH media. The method was validated according to ICH guidelines and the USP monograph for both the assay (HCl of pH 1.2) and the dissolution study in three pH media (HCl of pH 1.2, acetate buffer of pH 4.5, and phosphate buffer of pH 6.8). Finally, the greenness of the method was assessed using two different techniques: the National Environmental Method Index label and the Eco Scale. Both techniques confirmed the greenness of the proposed method. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. Adaptive limited feedback for interference alignment in MIMO interference channels.

    PubMed

    Zhang, Yang; Zhao, Chenglin; Meng, Juan; Li, Shibao; Li, Li

    2016-01-01

    It is very important that a radar sensor network have autonomous capabilities such as self-management. Quite often, MIMO interference channels are applied to radar sensor networks, and for self-management, interference management in MIMO interference channels is critical. Interference alignment (IA) has the potential to dramatically improve system throughput by effectively mitigating interference in multi-user networks at high signal-to-noise ratio (SNR). However, the implementation of IA predominantly relies on perfect and global channel state information (CSI) at all transceivers: a large amount of CSI has to be fed back to all transmitters, resulting in a proliferation of feedback bits. Thus, IA with limited feedback has been introduced to reduce the total feedback overhead. In this paper, exploiting the advantage of heterogeneous path loss, we first investigate the throughput of IA with limited feedback in interference channels when each user transmits multiple streams simultaneously, and derive the upper bound on the sum rate in terms of the transmit power and feedback bits. Moreover, we propose a dynamic feedback scheme via bit allocation to reduce the throughput loss due to limited feedback. Simulation results demonstrate that the dynamic feedback scheme achieves better performance in terms of sum rate.

  11. A Quantile Mapping Bias Correction Method Based on Hydroclimatic Classification of the Guiana Shield

    PubMed Central

    Ringard, Justine; Seyler, Frederique; Linguet, Laurent

    2017-01-01

    Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data, but such statistical adjustment cannot correct SPP pixels for which there is no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two-step approach that relies not on the daily gauge data of the pixel to be corrected but on the daily gauge data from surrounding pixels, which requires a spatial analysis. The first step defines hydroclimatic areas using a spatial classification that groups precipitation data with the same temporal distributions. The second step uses the Quantile Mapping bias correction method to correct the daily SPP data within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative RMSE and relative bias statistical errors. The results show that varying the scale of analysis reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics in each hydroclimatic area, and the resulting bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale. PMID:28621723
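    The second step, Quantile Mapping, replaces each satellite value with the gauge value at the same empirical quantile. A minimal sketch on synthetic rainfall (the gamma distribution and the multiplicative-plus-offset bias model are assumptions for illustration):

```python
import numpy as np

def quantile_map(spp, gauge_ref, spp_ref):
    """Map each satellite value through the empirical CDFs: find its quantile
    in the satellite reference sample, then return the gauge value at that
    same quantile (applied per hydroclimatic area in the full method)."""
    q = np.searchsorted(np.sort(spp_ref), spp) / len(spp_ref)
    return np.quantile(gauge_ref, np.clip(q, 0, 1))

rng = np.random.default_rng(0)
gauge = rng.gamma(2.0, 5.0, 5000)     # "true" daily rainfall at nearby gauges
spp = 0.5 * gauge + 2.0               # biased satellite estimate of the same days
corrected = quantile_map(spp, gauge, spp)
print("bias before:", round(abs(spp.mean() - gauge.mean()), 2),
      "after:", round(abs(corrected.mean() - gauge.mean()), 2))
```

    The paper's contribution is in where the reference samples come from: gauges in the same hydroclimatic area as the pixel, rather than a gauge inside the pixel itself, which is what lets ungauged pixels be corrected.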

  12. A Quantile Mapping Bias Correction Method Based on Hydroclimatic Classification of the Guiana Shield.

    PubMed

    Ringard, Justine; Seyler, Frederique; Linguet, Laurent

    2017-06-16

    Satellite precipitation products (SPPs) provide alternative precipitation data for regions with sparse rain gauge measurements. However, SPPs are subject to different types of error that need correction. Most SPP bias correction methods use the statistical properties of the rain gauge data to adjust the corresponding SPP data, but such statistical adjustment cannot correct SPP pixels for which there is no rain gauge data. The solution proposed in this article is to correct the daily SPP data for the Guiana Shield using a novel two-step approach that relies not on the daily gauge data of the pixel to be corrected but on the daily gauge data from surrounding pixels, which requires a spatial analysis. The first step defines hydroclimatic areas using a spatial classification that groups precipitation data with the same temporal distributions. The second step uses the Quantile Mapping bias correction method to correct the daily SPP data within each hydroclimatic area. We validate the results by comparing the corrected SPP data and daily rain gauge measurements using relative RMSE and relative bias statistical errors. The results show that varying the scale of analysis reduces rBIAS and rRMSE significantly. The spatial classification avoids mixing rainfall data with different temporal characteristics in each hydroclimatic area, and the resulting bias correction parameters are more realistic and appropriate. This study demonstrates that hydroclimatic classification is relevant for implementing bias correction methods at the local scale.

  13. Using classification and NDVI differencing methods for monitoring sparse vegetation coverage: a case study of saltcedar in Nevada, USA.

    USDA-ARS?s Scientific Manuscript database

    A change detection experiment for an invasive species, saltcedar, near Lovelock, Nevada, was conducted with multi-date Compact Airborne Spectrographic Imager (CASI) hyperspectral datasets. Classification and NDVI differencing change detection methods were tested. In the classification strategy, a p...

  14. Towards a formal genealogical classification of the Lezgian languages (North Caucasus): testing various phylogenetic methods on lexical data.

    PubMed

    Kassian, Alexei

    2015-01-01

    A lexicostatistical classification is proposed for 20 languages and dialects of the Lezgian group of the North Caucasian family, based on meticulously compiled 110-item wordlists, published as part of the Global Lexicostatistical Database project. The lexical data have been subsequently analyzed with the aid of the principal phylogenetic methods, both distance-based and character-based: Starling neighbor joining (StarlingNJ), Neighbor joining (NJ), Unweighted pair group method with arithmetic mean (UPGMA), Bayesian Markov chain Monte Carlo (MCMC), and Unweighted maximum parsimony (UMP). Cognation indexes within the input matrix were marked by two different algorithms: the traditional etymological approach and phonetic similarity, i.e., the automatic method of consonant classes (Levenshtein distances). For several reasons (above all, the high lexicographic quality of the wordlists and a consensus about the Lezgian phylogeny among Caucasologists), the Lezgian database is a perfect testing ground for the appraisal of phylogenetic methods. For the etymology-based input matrix, all the phylogenetic methods, with the possible exception of UMP, have yielded trees that are sufficiently compatible with each other to generate a consensus phylogenetic tree of the Lezgian lects. The obtained consensus tree agrees with the traditional expert classification as well as some of the previously proposed formal classifications of this linguistic group. Contrary to theoretical expectations, the UMP method has suggested the least plausible tree of all. In the case of the phonetic similarity-based input matrix, the distance-based methods (StarlingNJ, NJ, UPGMA) have produced trees that are rather close to the consensus etymology-based tree and the traditional expert classification, whereas the character-based methods (Bayesian MCMC, UMP) have yielded less likely topologies.
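    The automatic consonant-class scoring behind the phonetic-similarity matrix can be sketched as a Levenshtein distance over consonant-class "skeletons". The class table below is a crude stand-in for the actual Dolgopolsky-style classes, and the word pair is hypothetical:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[-1] + 1,            # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# Toy consonant-class table (a crude stand-in for Dolgopolsky-style classes)
CLASSES = {c: "P" for c in "pbf"} | {c: "T" for c in "td"} | \
          {c: "K" for c in "kgx"} | {c: "S" for c in "szc"} | \
          {c: "R" for c in "rl"} | {c: "N" for c in "mn"}

def skeleton(word):
    """Reduce a word to its consonant-class sequence, dropping vowels."""
    return "".join(CLASSES[c] for c in word if c in CLASSES)

# Hypothetical dialect forms: class skeletons coincide, so the pair would
# be scored as cognate despite the differing initial consonants
w1, w2 = "pilla", "billa"
print(skeleton(w1), skeleton(w2), levenshtein(skeleton(w1), skeleton(w2)))
```

    Distances of this kind, aggregated over the 110-item wordlists, yield the phonetic-similarity input matrix that the distance-based tree methods then cluster.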

  15. Towards a Formal Genealogical Classification of the Lezgian Languages (North Caucasus): Testing Various Phylogenetic Methods on Lexical Data

    PubMed Central

    Kassian, Alexei

    2015-01-01

    A lexicostatistical classification is proposed for 20 languages and dialects of the Lezgian group of the North Caucasian family, based on meticulously compiled 110-item wordlists, published as part of the Global Lexicostatistical Database project. The lexical data have been subsequently analyzed with the aid of the principal phylogenetic methods, both distance-based and character-based: Starling neighbor joining (StarlingNJ), Neighbor joining (NJ), Unweighted pair group method with arithmetic mean (UPGMA), Bayesian Markov chain Monte Carlo (MCMC), and Unweighted maximum parsimony (UMP). Cognation indexes within the input matrix were marked by two different algorithms: the traditional etymological approach and phonetic similarity, i.e., the automatic method of consonant classes (Levenshtein distances). For several reasons (above all, the high lexicographic quality of the wordlists and a consensus about the Lezgian phylogeny among Caucasologists), the Lezgian database is a perfect testing ground for the appraisal of phylogenetic methods. For the etymology-based input matrix, all the phylogenetic methods, with the possible exception of UMP, have yielded trees that are sufficiently compatible with each other to generate a consensus phylogenetic tree of the Lezgian lects. The obtained consensus tree agrees with the traditional expert classification as well as some of the previously proposed formal classifications of this linguistic group. Contrary to theoretical expectations, the UMP method has suggested the least plausible tree of all. In the case of the phonetic similarity-based input matrix, the distance-based methods (StarlingNJ, NJ, UPGMA) have produced trees that are rather close to the consensus etymology-based tree and the traditional expert classification, whereas the character-based methods (Bayesian MCMC, UMP) have yielded less likely topologies. PMID:25719456

  16. Neural mechanisms of interference control in working memory: effects of interference expectancy and fluid intelligence.

    PubMed

    Burgess, Gregory C; Braver, Todd S

    2010-09-20

    A critical aspect of executive control is the ability to limit the adverse effects of interference. Previous studies have shown activation of left ventrolateral prefrontal cortex after the onset of interference, suggesting that interference may be resolved in a reactive manner. However, we suggest that interference control may also operate in a proactive manner to prevent effects of interference. The current study investigated the temporal dynamics of interference control by varying two factors - interference expectancy and fluid intelligence (gF) - that could influence whether interference control operates proactively versus reactively. A modified version of the recent negatives task was utilized. Interference expectancy was manipulated across task blocks by changing the proportion of recent negative (interference) trials versus recent positive (facilitation) trials. Furthermore, we explored whether gF affected the tendency to utilize specific interference control mechanisms. When interference expectancy was low, activity in lateral prefrontal cortex replicated prior results showing a reactive control pattern (i.e., interference-sensitivity during probe period). In contrast, when interference expectancy was high, bilateral prefrontal cortex activation was more indicative of proactive control mechanisms (interference-related effects prior to the probe period). Additional results suggested that the proactive control pattern was more evident in high gF individuals, whereas the reactive control pattern was more evident in low gF individuals. The results suggest the presence of two neural mechanisms of interference control, with the differential expression of these mechanisms modulated by both experimental (e.g., expectancy effects) and individual difference (e.g., gF) factors.

  17. Automated Method of Frequency Determination in Software Metric Data Through the Use of the Multiple Signal Classification (MUSIC) Algorithm

    DTIC Science & Technology

    1998-06-26

    METHOD OF FREQUENCY DETERMINATION IN SOFTWARE METRIC DATA THROUGH THE USE OF THE MULTIPLE SIGNAL CLASSIFICATION (MUSIC) ALGORITHM ... STATEMENT OF... graph showing the estimated power spectral density (PSD) generated by the multiple signal classification (MUSIC) algorithm from the data set used... implemented in this module; however, it is preferred to use the Multiple Signal Classification (MUSIC) algorithm. The MUSIC algorithm is
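
    A minimal NumPy sketch of the MUSIC estimator named in this record, assuming a single real sinusoid; the test signal, window length, and model order are illustrative choices, not values from the report.

```python
import numpy as np

# MUSIC sketch: estimate the frequency of a single real sinusoid.
# A real sinusoid occupies a rank-2 signal subspace (+f and -f components).
n = np.arange(400)
x = np.cos(2 * np.pi * 0.1 * n)                # true frequency: 0.1 cycles/sample

m = 20                                          # covariance window length
X = np.array([x[i:i + m] for i in range(len(x) - m)])
R = X.T @ X / len(X)                            # sample covariance matrix

w, V = np.linalg.eigh(R)                        # eigenvalues in ascending order
En = V[:, :m - 2]                               # noise subspace (drop top 2)

k = np.arange(m)
freqs = np.linspace(0.01, 0.49, 481)            # 0.001-step grid over (0, 0.5)
P = [1.0 / np.linalg.norm(En.conj().T @ np.exp(2j * np.pi * f * k)) ** 2
     for f in freqs]                            # MUSIC pseudospectrum
f_hat = freqs[int(np.argmax(P))]
print(round(f_hat, 3))                          # peak lands near 0.1
```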

  18. Automated Decision Tree Classification of Corneal Shape

    PubMed Central

    Twa, Michael D.; Parthasarathy, Srinivasan; Roberts, Cynthia; Mahmoud, Ashraf M.; Raasch, Thomas W.; Bullimore, Mark A.

    2011-01-01

    Purpose The volume and complexity of data produced during videokeratography examinations present a challenge of interpretation. As a consequence, results are often analyzed qualitatively by subjective pattern recognition or reduced to comparisons of summary indices. We describe the application of decision tree induction, an automated machine learning classification method, to discriminate between normal and keratoconic corneal shapes in an objective and quantitative way. We then compared this method with other known classification methods. Methods The corneal surface was modeled with a seventh-order Zernike polynomial for 132 normal eyes of 92 subjects and 112 eyes of 71 subjects diagnosed with keratoconus. A decision tree classifier was induced using the C4.5 algorithm, and its classification performance was compared with the modified Rabinowitz–McDonnell index, Schwiegerling’s Z3 index (Z3), Keratoconus Prediction Index (KPI), KISA%, and Cone Location and Magnitude Index using recommended classification thresholds for each method. We also evaluated the area under the receiver operator characteristic (ROC) curve for each classification method. Results Our decision tree classifier performed equal to or better than the other classifiers tested: accuracy was 92% and the area under the ROC curve was 0.97. Our decision tree classifier reduced the information needed to distinguish between normal and keratoconus eyes using four of 36 Zernike polynomial coefficients. The four surface features selected as classification attributes by the decision tree method were inferior elevation, greater sagittal depth, oblique toricity, and trefoil. Conclusions Automated decision tree classification of corneal shape through Zernike polynomials is an accurate quantitative method of classification that is interpretable and can be generated from any instrument platform capable of raw elevation data output. This method of pattern classification is extendable to other classification
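
    The core of C4.5-style decision-tree induction is choosing the split that maximizes an information-based criterion. The sketch below computes information gain for candidate thresholds on a single feature; the values stand in for Zernike coefficients and are purely illustrative.

```python
import math

# Sketch of the split criterion behind decision-tree induction (as in C4.5):
# pick the threshold on a feature that maximizes information gain.

def entropy(labels):
    total = len(labels)
    counts = {c: labels.count(c) for c in set(labels)}
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def best_split(values, labels):
    """Try midpoints between sorted feature values; return (threshold, gain)."""
    pairs = sorted(zip(values, labels))
    base = entropy(labels)
    best = (None, 0.0)
    for i in range(1, len(pairs)):
        t = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [l for v, l in pairs if v <= t]
        right = [l for v, l in pairs if v > t]
        gain = base - (len(left) / len(pairs)) * entropy(left) \
                    - (len(right) / len(pairs)) * entropy(right)
        if gain > best[1]:
            best = (t, gain)
    return best

# Toy "inferior elevation" values: low for normal eyes, high for keratoconus.
values = [0.1, 0.2, 0.3, 0.9, 1.0, 1.1]
labels = ["normal"] * 3 + ["keratoconus"] * 3
t, gain = best_split(values, labels)
print(round(t, 3), gain)  # perfect split: threshold 0.6, gain 1.0
```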

  19. A new classification method for MALDI imaging mass spectrometry data acquired on formalin-fixed paraffin-embedded tissue samples.

    PubMed

    Boskamp, Tobias; Lachmund, Delf; Oetjen, Janina; Cordero Hernandez, Yovany; Trede, Dennis; Maass, Peter; Casadonte, Rita; Kriegsmann, Jörg; Warth, Arne; Dienemann, Hendrik; Weichert, Wilko; Kriegsmann, Mark

    2017-07-01

    Matrix-assisted laser desorption/ionization imaging mass spectrometry (MALDI IMS) shows a high potential for applications in histopathological diagnosis, and in particular for supporting tumor typing and subtyping. The development of such applications requires the extraction of spectral fingerprints that are relevant for the given tissue and the identification of biomarkers associated with these spectral patterns. We propose a novel data analysis method based on the extraction of characteristic spectral patterns (CSPs) that allow automated generation of classification models for spectral data. Formalin-fixed paraffin-embedded (FFPE) tissue samples from N=445 patients assembled on 12 tissue microarrays were analyzed. The method was applied to discriminate primary lung and pancreatic cancer, as well as adenocarcinoma and squamous cell carcinoma of the lung. Classification accuracies of 100% and 82.8%, respectively, could be achieved at the core level, assessed by cross-validation. The method outperformed the more conventional classification method based on the extraction of individual m/z values in the first application, while achieving a comparable accuracy in the second. LC-MS/MS peptide identification demonstrated that the spectral features present in selected CSPs correspond to peptides relevant for the respective classification. This article is part of a Special Issue entitled: MALDI Imaging, edited by Dr. Corinna Henkel and Prof. Peter Hoffmann.

  20. Interference-free ultrasound imaging during HIFU therapy, using software tools

    NASA Technical Reports Server (NTRS)

    Vaezy, Shahram (Inventor); Held, Robert (Inventor); Sikdar, Siddhartha (Inventor); Managuli, Ravi (Inventor); Zderic, Vesna (Inventor)

    2010-01-01

    Disclosed herein is a method for obtaining a composite interference-free ultrasound image when non-imaging ultrasound waves would otherwise interfere with ultrasound imaging. A conventional ultrasound imaging system is used to collect frames of ultrasound image data in the presence of non-imaging ultrasound waves, such as high-intensity focused ultrasound (HIFU). The frames are directed to a processor that analyzes the frames to identify portions of the frame that are interference-free. Interference-free portions of a plurality of different ultrasound image frames are combined to generate a single composite interference-free ultrasound image that is displayed to a user. In this approach, a frequency of the non-imaging ultrasound waves is offset relative to a frequency of the ultrasound imaging waves, such that the interference introduced by the non-imaging ultrasound waves appears in a different portion of the frames.
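
    The compositing step described above can be sketched as follows, assuming each frame comes with a mask marking its interference-free portion; the synthetic frames below are stand-ins, not actual ultrasound data.

```python
import numpy as np

# Sketch of the compositing idea in this record: because the HIFU frequency is
# offset from the imaging frequency, interference lands in a different portion
# of each frame, so the clean portions of several frames can be stitched into
# one interference-free image.

rng = np.random.default_rng(2)
truth = rng.random((4, 6))                     # the underlying scene

frames, masks = [], []
for band in range(2):                          # interference moves per frame
    frame = truth.copy()
    mask = np.ones_like(truth, dtype=bool)     # True = interference-free pixel
    rows = slice(band * 2, band * 2 + 2)
    frame[rows] = 99.0                         # corrupted stripe
    mask[rows] = False
    frames.append(frame)
    masks.append(mask)

composite = np.full_like(truth, np.nan)
for frame, mask in zip(frames, masks):
    sel = mask & np.isnan(composite)           # fill only still-empty pixels
    composite[sel] = frame[sel]

print(bool(np.allclose(composite, truth)))     # stitched image matches scene
```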

  1. Laser Raman detection for oral cancer based on an adaptive Gaussian process classification method with posterior probabilities

    NASA Astrophysics Data System (ADS)

    Du, Zhanwei; Yang, Yongjian; Bai, Yuan; Wang, Lijun; Su, Le; Chen, Yong; Li, Xianchang; Zhou, Xiaodong; Jia, Jun; Shen, Aiguo; Hu, Jiming

    2013-03-01

    The existing methods for early and differential diagnosis of oral cancer are limited due to the unapparent early symptoms and the imperfect imaging examination methods. In this paper, classification models of oral adenocarcinoma, carcinoma tissues and a control group with just four features are established by utilizing the hybrid Gaussian process (HGP) classification algorithm, with the introduction of mechanisms for noise reduction and posterior probability. In the experimental results, HGP shows much better performance. During the experimental process, oral tissues were divided into three groups, adenocarcinoma (n = 87), carcinoma (n = 100) and the control group (n = 134), and the spectral data for these groups were collected. The prospective application of the proposed HGP classification method improved the diagnostic sensitivity to 56.35% and the specificity to about 70.00%, and resulted in a Matthews correlation coefficient (MCC) of 0.36. It is shown that the utilization of HGP in laser Raman spectroscopy (LRS) detection analysis for the diagnosis of oral cancer gives accurate results. The prospect of application is also satisfactory.
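
    The figures of merit reported in this record (sensitivity, specificity, and the Matthews correlation coefficient) are all derived from a binary confusion matrix. A minimal sketch with toy counts, not the study's actual results:

```python
import math

# How sensitivity, specificity, and MCC follow from a confusion matrix.
# The counts below are toy values, not the study's actual results.

def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient: +1 perfect, 0 chance, -1 inverse."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

tp, tn, fp, fn = 45, 70, 30, 35
print(round(sensitivity(tp, fn), 3),
      round(specificity(tn, fp), 3),
      round(mcc(tp, tn, fp, fn), 3))
```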

  2. Ensemble Sparse Classification of Alzheimer’s Disease

    PubMed Central

    Liu, Manhua; Zhang, Daoqiang; Shen, Dinggang

    2012-01-01

    The high-dimensional pattern classification methods, e.g., support vector machines (SVM), have been widely investigated for analysis of structural and functional brain images (such as magnetic resonance imaging (MRI)) to assist the diagnosis of Alzheimer’s disease (AD) including its prodromal stage, i.e., mild cognitive impairment (MCI). Most existing classification methods extract features from neuroimaging data and then construct a single classifier to perform classification. However, due to noise and small sample size of neuroimaging data, it is challenging to train only a global classifier that can be robust enough to achieve good classification performance. In this paper, instead of building a single global classifier, we propose a local patch-based subspace ensemble method which builds multiple individual classifiers based on different subsets of local patches and then combines them for more accurate and robust classification. Specifically, to capture the local spatial consistency, each brain image is partitioned into a number of local patches and a subset of patches is randomly selected from the patch pool to build a weak classifier. Here, the sparse representation-based classification (SRC) method, which has shown effective for classification of image data (e.g., face), is used to construct each weak classifier. Then, multiple weak classifiers are combined to make the final decision. We evaluate our method on 652 subjects (including 198 AD patients, 225 MCI and 229 normal controls) from Alzheimer’s Disease Neuroimaging Initiative (ADNI) database using MR images. The experimental results show that our method achieves an accuracy of 90.8% and an area under the ROC curve (AUC) of 94.86% for AD classification and an accuracy of 87.85% and an AUC of 92.90% for MCI classification, respectively, demonstrating a very promising performance of our method compared with the state-of-the-art methods for AD/MCI classification using MR images. PMID:22270352
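
    The ensemble scheme described above can be sketched independently of the SRC base classifier. Below, a plain 1-nearest-neighbor rule stands in for SRC, and random feature subsets stand in for image patches; all data are synthetic.

```python
import numpy as np

# Sketch of the patch-subspace ensemble idea: build many weak classifiers on
# random feature subsets ("patches") and combine them by majority vote.

rng = np.random.default_rng(0)

def one_nn(train_X, train_y, x):
    return int(train_y[np.argmin(np.linalg.norm(train_X - x, axis=1))])

def ensemble_predict(train_X, train_y, x, n_members=15, subset_size=4):
    votes = []
    for _ in range(n_members):
        feats = rng.choice(train_X.shape[1], size=subset_size, replace=False)
        votes.append(one_nn(train_X[:, feats], train_y, x[feats]))
    return int(np.bincount(votes).argmax())    # majority vote

# Toy data: class 0 near the origin, class 1 shifted, 10 features per sample.
train_X = np.vstack([rng.normal(0, 0.3, (20, 10)),
                     rng.normal(2, 0.3, (20, 10))])
train_y = np.array([0] * 20 + [1] * 20)
test_point = rng.normal(2, 0.3, 10)
print(ensemble_predict(train_X, train_y, test_point))  # expected class: 1
```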

  3. Comparison of Unsupervised Vegetation Classification Methods from Vhr Images after Shadows Removal by Innovative Algorithms

    NASA Astrophysics Data System (ADS)

    Movia, A.; Beinat, A.; Crosilla, F.

    2015-04-01

    The recognition of vegetation by the analysis of very high resolution (VHR) aerial images provides meaningful information about environmental features; nevertheless, VHR images frequently contain shadows that generate significant problems for the classification of the image components and for the extraction of the needed information. The aim of this research is to classify, from VHR aerial images, vegetation involved in the balance process of the environmental biochemical cycle, and to discriminate it with respect to urban and agricultural features. Three classification algorithms have been tested in order to better recognize vegetation, and compared to the NDVI index; unfortunately, all these methods are affected by the presence of shadows on the images. The literature presents several algorithms to detect and remove shadows in the scene: most of them are based on RGB to HSI transformations. In this work some of them have been implemented and compared with one based on RGB bands. Subsequently, in order to remove shadows and restore brightness on the images, some innovative algorithms, based on Procrustes theory, have been implemented and applied. Among these, we evaluate the capability of the so-called "not-centered oblique Procrustes" and "anisotropic Procrustes" methods to efficiently restore brightness with respect to a linear correlation correction based on the Cholesky decomposition. Some experimental results obtained by different classification methods after shadow removal carried out with the innovative algorithms are presented and discussed.
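
    The NDVI baseline mentioned above is a simple band ratio; a minimal sketch with illustrative reflectance values and an illustrative vegetation threshold:

```python
import numpy as np

# NDVI, the index the classification methods are compared against:
# (NIR - Red) / (NIR + Red). Reflectance values below are illustrative.
red = np.array([0.10, 0.30, 0.08])   # red band reflectance
nir = np.array([0.60, 0.32, 0.50])   # near-infrared band reflectance

ndvi = (nir - red) / (nir + red)
vegetation = ndvi > 0.4              # a common (illustrative) threshold
print(np.round(ndvi, 3), vegetation)
```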

  4. [A research on real-time ventricular QRS classification methods for single-chip-microcomputers].

    PubMed

    Peng, L; Yang, Z; Li, L; Chen, H; Chen, E; Lin, J

    1997-05-01

    Ventricular QRS classification is a key technique for ventricular arrhythmia detection in single-chip-microcomputer based real-time dynamic electrocardiogram analysers. This paper adopts a morphological feature vector, including QRS amplitude and interval information, to describe QRS morphology. After studying the distribution of the QRS morphology feature vectors in the MIT/BIH DB ventricular arrhythmia files, we use morphological feature vector clustering to classify multi-morphology QRS complexes. Based on this method, a scheme for changing the morphological feature parameters, suitable for catching occasional ventricular arrhythmias, is presented. Clinical experiments verify that the rate of missed ventricular arrhythmias with this method is less than 1%.

  5. A Model-Free Machine Learning Method for Risk Classification and Survival Probability Prediction.

    PubMed

    Geng, Yuan; Lu, Wenbin; Zhang, Hao Helen

    2014-01-01

    Risk classification and survival probability prediction are two major goals in survival data analysis since they play an important role in patients' risk stratification, long-term diagnosis, and treatment selection. In this article, we propose a new model-free machine learning framework for risk classification and survival probability prediction based on weighted support vector machines. The new procedure does not require any specific parametric or semiparametric model assumption on data, and is therefore capable of capturing nonlinear covariate effects. We use numerous simulation examples to demonstrate finite sample performance of the proposed method under various settings. Applications to a glioma tumor data and a breast cancer gene expression survival data are shown to illustrate the new methodology in real data analysis.

  6. An Adaptive S-Method to Analyze Micro-Doppler Signals for Human Activity Classification

    PubMed Central

    Yang, Chao; Xia, Yuqing; Ma, Xiaolin; Zhang, Tao; Zhou, Zhou

    2017-01-01

    In this paper, we propose the multiwindow Adaptive S-method (AS-method) distribution approach used in the time-frequency analysis for radar signals. Based on the results of orthogonal Hermite functions that have good time-frequency resolution, we vary the length of window to suppress the oscillating component caused by cross-terms. This method can bring a better compromise in the auto-terms concentration and cross-terms suppressing, which contributes to the multi-component signal separation. Finally, the effective micro signal is extracted by threshold segmentation and envelope extraction. To verify the proposed method, six states of motion are separated by a classifier of a support vector machine (SVM) trained to the extracted features. The trained SVM can detect a human subject with an accuracy of 95.4% for two cases without interference. PMID:29186075

  7. An Adaptive S-Method to Analyze Micro-Doppler Signals for Human Activity Classification.

    PubMed

    Li, Fangmin; Yang, Chao; Xia, Yuqing; Ma, Xiaolin; Zhang, Tao; Zhou, Zhou

    2017-11-29

    In this paper, we propose the multiwindow Adaptive S-method (AS-method) distribution approach used in the time-frequency analysis for radar signals. Based on the results of orthogonal Hermite functions that have good time-frequency resolution, we vary the length of window to suppress the oscillating component caused by cross-terms. This method can bring a better compromise in the auto-terms concentration and cross-terms suppressing, which contributes to the multi-component signal separation. Finally, the effective micro signal is extracted by threshold segmentation and envelope extraction. To verify the proposed method, six states of motion are separated by a classifier of a support vector machine (SVM) trained to the extracted features. The trained SVM can detect a human subject with an accuracy of 95.4% for two cases without interference.

  8. Interference Fit Life Factors for Roller Bearings

    NASA Technical Reports Server (NTRS)

    Oswald, Fred B.; Zaretsky, Erwin V.; Poplawski, Joseph V.

    2008-01-01

    The effect of hoop stresses in reducing cylindrical roller bearing fatigue life was determined for various classes of inner ring interference fit. Calculations were performed for up to seven interference fit classes for each of ten bearing sizes. Each fit was taken at tightest, average and loosest values within the fit class for RBEC-5 tolerance, thus requiring 486 separate analyses. The hoop stresses were superimposed on the Hertzian principal stresses created by the applied radial load to calculate roller bearing fatigue life. The method was developed through a series of equations to calculate the life reduction for cylindrical roller bearings based on interference fit. All calculated lives are for zero initial bearing internal clearance. Any reduction in bearing clearance due to interference fit was compensated by increasing the initial (unmounted) clearance. Results are presented as tables and charts of life factors for bearings with light, moderate and heavy loads and interference fits ranging from extremely light to extremely heavy and for bearing accuracy class RBEC 5 (ISO class 5). Interference fits on the inner bearing ring of a cylindrical roller bearing can significantly reduce bearing fatigue life. In general, life factors are smaller (lower life) for bearings running under light load where the unfactored life is highest. The various bearing series within a particular bore size had almost identical interference fit life factors for a particular fit. The tightest fit at the high end of the RBEC-5 tolerance band defined in ANSI/ABMA shaft fit tables produces a life factor of approximately 0.40 for an inner-race maximum Hertz stress of 1200 MPa (175 ksi) and a life factor of 0.60 for an inner-race maximum Hertz stress of 2200 MPa (320 ksi). Interference fits also impact the maximum Hertz stress-life relation.
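
    The superimposed hoop stress in such an analysis follows from thick-walled-cylinder (Lamé) theory. A sketch for a solid shaft pressed into an inner ring of the same material, with illustrative dimensions rather than values from the report:

```python
# Lamé thick-wall sketch: contact pressure from a diametral interference fit
# (solid steel shaft in a steel inner ring) and the resulting hoop stress at
# the ring bore. Dimensions and fit are illustrative, not from the report.

E = 200e9          # Young's modulus of steel, Pa
d = 0.050          # ring bore / shaft diameter, m
D = 0.065          # inner-ring outer diameter, m
delta = 20e-6      # diametral interference, m

# Contact pressure for a same-material solid shaft and ring:
p = E * delta * (D**2 - d**2) / (2 * d * D**2)

# Tangential (hoop) stress at the ring bore:
sigma_hoop = p * (D**2 + d**2) / (D**2 - d**2)

print(round(p / 1e6, 1), round(sigma_hoop / 1e6, 1))  # both in MPa
```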

  9. 7 CFR 28.35 - Method of classification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE COMMODITY STANDARDS AND STANDARD CONTAINER... official cotton standards of the United States in effect at the time of classification. ...

  10. [Classification of Children with Attention-Deficit/Hyperactivity Disorder and Typically Developing Children Based on Electroencephalogram Principal Component Analysis and k-Nearest Neighbor].

    PubMed

    Yang, Jiaojiao; Guo, Qian; Li, Wenjie; Wang, Suhong; Zou, Ling

    2016-04-01

    This paper aims to assist the individual clinical diagnosis of children with attention-deficit/hyperactivity disorder using an electroencephalogram signal detection method. Firstly, in our experiments, we obtained and studied the electroencephalogram signals from fourteen attention-deficit/hyperactivity disorder children and sixteen typically developing children during the classic interference control task of Simon-spatial Stroop, and we completed electroencephalogram data preprocessing including filtering, segmentation, removal of artifacts and so on. Secondly, we selected the subset of electroencephalogram electrodes using the principal component analysis (PCA) method, and we collected the common channels of the optimal electrodes whose occurrence rates were more than 90% in each kind of stimulation. We then extracted the latency (200~450 ms) mean amplitude features of the common electrodes. Finally, we used the k-nearest neighbor (KNN) classifier based on Euclidean distance and the support vector machine (SVM) classifier based on a radial basis kernel function to classify. From the experiment, in the same kind of interference control task, the attention-deficit/hyperactivity disorder children showed lower correct response rates and longer reaction times. The N2 emerged in the prefrontal cortex while the P2 presented in the inferior parietal area when all kinds of stimuli were presented. Meanwhile, the children with attention-deficit/hyperactivity disorder exhibited markedly reduced N2 and P2 amplitudes compared to typically developing children. KNN resulted in better classification accuracy than the SVM classifier, and the best classification rate was 89.29% in the StI task. The results showed that the electroencephalogram signals were different in the brain regions of the prefrontal cortex and inferior parietal cortex between attention-deficit/hyperactivity disorder and typically developing children during the interference control task, which provided a scientific basis for the clinical diagnosis of attention-deficit/hyperactivity disorder.
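
    The pipeline in this record (PCA for feature reduction followed by a Euclidean-distance KNN classifier) can be sketched as follows; the data shapes and values are synthetic stand-ins, not EEG recordings.

```python
import numpy as np

# Sketch: PCA to reduce per-electrode features, then a Euclidean-distance KNN
# classifier. Two toy groups differ in mean amplitude across 16 "electrodes".

rng = np.random.default_rng(1)

def pca(X, k):
    """Project rows of X onto the top-k principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def knn_predict(train_Z, train_y, z, k=3):
    idx = np.argsort(np.linalg.norm(train_Z - z, axis=1))[:k]
    return int(np.bincount(train_y[idx]).argmax())

X = np.vstack([rng.normal(0, 1, (15, 16)),      # group 0
               rng.normal(3, 1, (15, 16))])     # group 1, shifted amplitudes
y = np.array([0] * 15 + [1] * 15)

Z = pca(X, 3)
pred = knn_predict(Z[:-1], y[:-1], Z[-1])        # classify the held-out row
print(pred)                                      # expected class: 1
```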

  11. Application of the correlation constrained multivariate curve resolution alternating least-squares method for analyte quantitation in the presence of unexpected interferences using first-order instrumental data.

    PubMed

    Goicoechea, Héctor C; Olivieri, Alejandro C; Tauler, Romà

    2010-03-01

    Correlation constrained multivariate curve resolution-alternating least-squares is shown to be a feasible method for processing first-order instrumental data and achieve analyte quantitation in the presence of unexpected interferences. Both for simulated and experimental data sets, the proposed method could correctly retrieve the analyte and interference spectral profiles and perform accurate estimations of analyte concentrations in test samples. Since no information concerning the interferences was present in calibration samples, the proposed multivariate calibration approach including the correlation constraint facilitates the achievement of the so-called second-order advantage for the analyte of interest, which is known to be present for more complex higher-order richer instrumental data. The proposed method is tested using a simulated data set and two experimental data systems, one for the determination of ascorbic acid in powder juices using UV-visible absorption spectral data, and another for the determination of tetracycline in serum samples using fluorescence emission spectroscopy.
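
    The alternating least-squares core of MCR-ALS factors the data matrix D into concentration profiles C and spectral profiles S. The sketch below omits the paper's correlation constraint and uses illustrative matrix sizes:

```python
import numpy as np

# Sketch of the alternating least-squares core of MCR-ALS: factor a data
# matrix D (samples x wavelengths) into concentrations C and spectra S so
# that D ~= C @ S.T. The correlation constraint of the paper is omitted.

rng = np.random.default_rng(3)
C_true = rng.random((30, 2))              # two species' concentrations
S_true = rng.random((40, 2))              # their pure spectra
D = C_true @ S_true.T                     # noise-free bilinear data

S = rng.random((40, 2))                   # random initial spectral estimate
for _ in range(200):                      # alternate the two least squares
    C = np.linalg.lstsq(S, D.T, rcond=None)[0].T   # fix S, solve for C
    S = np.linalg.lstsq(C, D, rcond=None)[0].T     # fix C, solve for S

err = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)
print(err < 1e-6)                         # reconstruction is essentially exact
```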

  12. A Comparison of Synoptic Classification Methods for Application to Wind Power Prediction

    NASA Astrophysics Data System (ADS)

    Fowler, P.; Basu, S.

    2008-12-01

    Wind energy is a highly variable resource. To make it competitive with other sources of energy for integration on the power grid, at the very least, a day-ahead forecast of power output must be available. In many grid operations worldwide, next-day power output is scheduled in 30 minute intervals and grid management routinely occurs in real time. Maintenance and repairs require costly time to complete and must be scheduled along with normal operations. Revenue is dependent on the reliability of the entire system. In other words, there is financial and managerial benefit to short-term prediction of wind power. One approach to short-term forecasting is to combine a data-centric method such as an artificial neural network with a physically based approach like numerical weather prediction (NWP). The key is in associating high-dimensional NWP model output with the most appropriately trained neural network. Because neural networks perform best in the situations they are designed for, one can hypothesize that if similar recurring states can be identified in historical weather data, these data can be used to train multiple custom-designed neural networks to be called upon by numerical prediction. Identifying similar recurring states may offer insight into how a neural network forecast can be improved, but amassing the knowledge and utilizing it efficiently in the time required for power prediction would be difficult for a human to master, thus showing the advantage of classification. Classification methods are important tools for short-term forecasting because they can be unsupervised, objective, and computationally quick. They primarily involve categorizing data sets into dominant weather classes, but there are numerous ways to define a class and a great variety in interpretation of the results. In the present study a collection of classification methods is used on a sampling of atmospheric variables from the North American Regional Reanalysis data set. The

  13. Benchmark of Machine Learning Methods for Classification of a SENTINEL-2 Image

    NASA Astrophysics Data System (ADS)

    Pirotti, F.; Sunar, F.; Piragnolo, M.

    2016-06-01

    Thanks mainly to ESA and USGS, a large bulk of free images of the Earth is readily available nowadays. One of the main goals of remote sensing is to label images according to a set of semantic categories, i.e. image classification. This is a very challenging issue since land cover of a specific class may present a large spatial and spectral variability and objects may appear at different scales and orientations. In this study, we report the results of benchmarking 9 machine learning algorithms tested for accuracy and speed in training and classification of land-cover classes in a Sentinel-2 dataset. The following machine learning methods (MLM) have been tested: linear discriminant analysis, k-nearest neighbour, random forests, support vector machines, multi-layered perceptron, multi-layered perceptron ensemble, ctree, boosting, logarithmic regression. The validation is carried out using a control dataset which consists of an independent classification in 11 land-cover classes of an area of about 60 km2, obtained by manual visual interpretation of high resolution images (20 cm ground sampling distance) by experts. In this study five out of the eleven classes are used since the others have too few samples (pixels) for testing and validating subsets. The classes used are the following: (i) urban (ii) sowable areas (iii) water (iv) tree plantations (v) grasslands. Validation is carried out using three different approaches: (i) using pixels from the training dataset (train), (ii) using pixels from the training dataset and applying cross-validation with the k-fold method (kfold) and (iii) using all pixels from the control dataset. Five accuracy indices are calculated for the comparison between the values predicted with each model and control values over three sets of data: the training dataset (train), the whole control dataset (full) and with k-fold cross-validation (kfold) with ten folds. Results from validation of predictions of the whole dataset (full) show the random
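
    The k-fold scheme used for validation above is pure index bookkeeping: each fold is held out once for testing while the remaining folds train the model. A minimal sketch:

```python
# Sketch of k-fold cross-validation index splitting, as used in the benchmark.

def kfold_indices(n_samples, k):
    """Yield (train_idx, test_idx) pairs for k roughly equal folds."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        yield train, test
        start += size

folds = list(kfold_indices(10, 10))       # ten folds, as in the study
print(len(folds), folds[0][1])            # 10 folds; first test fold is [0]
```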

  14. Comparing writing style feature-based classification methods for estimating user reputations in social media.

    PubMed

    Suh, Jong Hwan

    2016-01-01

    In recent years, the anonymous nature of the Internet has made it difficult to detect manipulated user reputations in social media, as well as to ensure the quality of users and their posts. To deal with this, this study designs and examines an automatic approach that adopts writing style features to estimate user reputations in social media. Under varying ways of defining Good and Bad classes of user reputations based on the collected data, it evaluates the classification performance of the state-of-the-art methods: four writing style features, i.e. lexical, syntactic, structural, and content-specific, and eight classification techniques, i.e. four base learners (C4.5, Neural Network (NN), Support Vector Machine (SVM), and Naïve Bayes (NB)) and four Random Subspace (RS) ensemble methods based on the four base learners. When South Korea's Web forum, Daum Agora, was selected as a test bed, the experimental results show that the configuration of the full feature set containing content-specific features and RS-SVM combining RS and SVM gives the best classification accuracy if the test bed poster reputations are segmented strictly into Good and Bad classes by the portfolio approach. Pairwise t tests on accuracy confirm two expectations drawn from the literature review: first, the feature set adding content-specific features outperforms the others; second, ensemble learning methods are more viable than base learners. Moreover, among the four ways of defining the classes of user reputations, i.e. like, dislike, sum, and portfolio, the results show that the portfolio approach gives the highest accuracy.
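
    Writing-style features of the lexical, syntactic, and structural kinds named above can be sketched with simple counts; the particular feature definitions below are illustrative, not the study's.

```python
import re

# Sketch of simple writing-style features of the kinds the study groups as
# lexical, syntactic, and structural. Feature definitions are illustrative.

def style_features(post):
    words = re.findall(r"[\w']+", post)
    sentences = [s for s in re.split(r"[.!?]+", post) if s.strip()]
    return {
        "avg_word_len": sum(map(len, words)) / len(words),      # lexical
        "question_marks": post.count("?"),                      # syntactic
        "n_sentences": len(sentences),                          # structural
        "upper_ratio": sum(c.isupper() for c in post) / len(post),
    }

feats = style_features("Is this review fake? I doubt it. GREAT product!")
print(feats["n_sentences"], feats["question_marks"])
```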

  15. A Dimensionally Aligned Signal Projection for Classification of Unintended Radiated Emissions

    DOE PAGES

    Vann, Jason Michael; Karnowski, Thomas P.; Kerekes, Ryan; ...

    2017-04-24

    Characterization of unintended radiated emissions (URE) from electronic devices plays an important role in many research areas from electromagnetic interference to nonintrusive load monitoring to information system security. URE can provide insights for applications ranging from load disaggregation and energy efficiency to condition-based maintenance of equipment based upon detected fault conditions. URE characterization often requires subject matter expertise to tailor transforms and feature extractors for the specific electrical devices of interest. We present a novel approach, named dimensionally aligned signal projection (DASP), for projecting aligned signal characteristics that are inherent to the physical implementation of many commercial electronic devices. These projections minimize the need for an intimate understanding of the underlying physical circuitry and significantly reduce the number of features required for signal classification. We present three possible DASP algorithms that leverage frequency harmonics, modulation alignments, and frequency peak spacings, along with a two-dimensional image manipulation method for statistical feature extraction. To demonstrate the ability of DASP to generate relevant features from URE, we measured the conducted URE from 14 residential electronic devices using a 2 MS/s collection system. Furthermore, a linear discriminant analysis classifier was trained using DASP generated features and was blind tested resulting in a greater than 90% classification accuracy for each of the DASP algorithms and an accuracy of 99.1% when DASP features are used in combination. Furthermore, we show that a rank reduced feature set of the combined DASP algorithms provides a 98.9% classification accuracy with only three features and outperforms a set of spectral features in terms of general classification as well as applicability across a broad number of devices.

  16. A Dimensionally Aligned Signal Projection for Classification of Unintended Radiated Emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vann, Jason Michael; Karnowski, Thomas P.; Kerekes, Ryan

    Characterization of unintended radiated emissions (URE) from electronic devices plays an important role in many research areas, from electromagnetic interference to nonintrusive load monitoring to information system security. URE can provide insights for applications ranging from load disaggregation and energy efficiency to condition-based maintenance of equipment based upon detected fault conditions. URE characterization often requires subject matter expertise to tailor transforms and feature extractors for the specific electrical devices of interest. We present a novel approach, named dimensionally aligned signal projection (DASP), for projecting aligned signal characteristics that are inherent to the physical implementation of many commercial electronic devices. These projections minimize the need for an intimate understanding of the underlying physical circuitry and significantly reduce the number of features required for signal classification. We present three possible DASP algorithms that leverage frequency harmonics, modulation alignments, and frequency peak spacings, along with a two-dimensional image manipulation method for statistical feature extraction. To demonstrate the ability of DASP to generate relevant features from URE, we measured the conducted URE from 14 residential electronic devices using a 2 MS/s collection system. A linear discriminant analysis classifier was trained using DASP-generated features and blind tested, yielding greater than 90% classification accuracy for each of the DASP algorithms and 99.1% accuracy when the DASP features are used in combination. Furthermore, we show that a rank-reduced feature set of the combined DASP algorithms provides 98.9% classification accuracy with only three features and outperforms a set of spectral features in terms of general classification as well as applicability across a broad range of devices.
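
The harmonic-alignment idea behind one of the DASP projections can be illustrated with a toy sketch: features are formed by summing spectral magnitude along integer multiples of candidate fundamentals, and a classifier separates devices in that feature space. The synthetic spectra, the candidate fundamentals, and the nearest-class-mean rule (a simplification standing in for the paper's linear discriminant analysis classifier) are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def harmonic_features(spectrum, fundamentals, n_harmonics=5):
    """Sum spectral magnitude at integer multiples of each candidate
    fundamental bin -- a crude stand-in for DASP's harmonic alignment."""
    feats = []
    for f0 in fundamentals:
        idx = [f0 * k for k in range(1, n_harmonics + 1) if f0 * k < len(spectrum)]
        feats.append(spectrum[idx].sum())
    return np.array(feats)

def make_spectrum(f0, n_bins=256, noise=0.1):
    """Synthetic device spectrum: decaying harmonics of f0 plus a noise floor."""
    s = rng.normal(0.0, noise, n_bins) ** 2
    for k in range(1, 6):
        if f0 * k < n_bins:
            s[f0 * k] += 1.0 / k
    return s

fundamentals = [10, 13, 17]      # candidate fundamentals = feature axes
devices = {0: 10, 1: 13}         # two "devices" with distinct fundamentals

# small training set; nearest-class-mean classification replaces LDA here
X, y = [], []
for label, f0 in devices.items():
    for _ in range(20):
        X.append(harmonic_features(make_spectrum(f0), fundamentals))
        y.append(label)
X, y = np.array(X), np.array(y)
means = {c: X[y == c].mean(axis=0) for c in devices}

def predict(spectrum):
    f = harmonic_features(spectrum, fundamentals)
    return min(means, key=lambda c: np.linalg.norm(f - means[c]))

pred = predict(make_spectrum(13))   # classify a fresh f0 = 13 spectrum
```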

  17. Application of the Covalent Bond Classification Method for the Teaching of Inorganic Chemistry

    ERIC Educational Resources Information Center

    Green, Malcolm L. H.; Parkin, Gerard

    2014-01-01

    The Covalent Bond Classification (CBC) method provides a means to classify covalent molecules according to the number and types of bonds that surround an atom of interest. This approach is based on an elementary molecular orbital analysis of the bonding involving the central atom (M), with the various interactions being classified according to the…

  18. Dependency-dependent interference: NPI interference, agreement attraction, and global pragmatic inferences.

    PubMed

    Xiang, Ming; Grove, Julian; Giannakidou, Anastasia

    2013-01-01

    Previous psycholinguistic studies have shown that when forming a long-distance dependency in online processing, the parser sometimes accepts a sentence even though the required grammatical constraints are only partially met. A mechanistic account of how such errors arise sheds light on both the underlying linguistic representations involved and the processing mechanisms that put such representations together. In the current study, we contrast the negative polarity item (NPI) interference effect, as shown by the acceptance of an ungrammatical sentence like "The bills that democratic senators have voted for will ever become law," with the well-known phenomenon of agreement attraction ("The key to the cabinets are … "). On the surface, these two types of errors look alike and could thereby be explained as being driven by the same source: similarity-based memory interference. However, we argue that the linguistic representations involved in NPI licensing are substantially different from those of subject-verb agreement, and therefore the interference effects in each domain potentially arise from distinct sources. In particular, we show that NPI interference at least partially arises from pragmatic inferences. In a self-paced reading study with an acceptability judgment task, we showed that NPI interference was modulated by participants' general pragmatic communicative skills, as quantified by the Autism-Spectrum Quotient (AQ; Baron-Cohen et al., 2001), especially in offline tasks. Participants with more autistic traits were actually less prone to the NPI interference effect than those with fewer autistic traits. This result contrasted with the agreement attraction conditions, which were not influenced by individual differences in pragmatic skill. We also show that different NPI licensors seem to have distinct interference profiles. We discuss two kinds of interference effects for NPI licensing: memory-retrieval-based and pragmatically triggered.

  19. Dependency-dependent interference: NPI interference, agreement attraction, and global pragmatic inferences

    PubMed Central

    Xiang, Ming; Grove, Julian; Giannakidou, Anastasia

    2013-01-01

    Previous psycholinguistic studies have shown that when forming a long-distance dependency in online processing, the parser sometimes accepts a sentence even though the required grammatical constraints are only partially met. A mechanistic account of how such errors arise sheds light on both the underlying linguistic representations involved and the processing mechanisms that put such representations together. In the current study, we contrast the negative polarity item (NPI) interference effect, as shown by the acceptance of an ungrammatical sentence like “The bills that democratic senators have voted for will ever become law,” with the well-known phenomenon of agreement attraction (“The key to the cabinets are … ”). On the surface, these two types of errors look alike and could thereby be explained as being driven by the same source: similarity-based memory interference. However, we argue that the linguistic representations involved in NPI licensing are substantially different from those of subject-verb agreement, and therefore the interference effects in each domain potentially arise from distinct sources. In particular, we show that NPI interference at least partially arises from pragmatic inferences. In a self-paced reading study with an acceptability judgment task, we showed that NPI interference was modulated by participants' general pragmatic communicative skills, as quantified by the Autism-Spectrum Quotient (AQ; Baron-Cohen et al., 2001), especially in offline tasks. Participants with more autistic traits were actually less prone to the NPI interference effect than those with fewer autistic traits. This result contrasted with the agreement attraction conditions, which were not influenced by individual differences in pragmatic skill. We also show that different NPI licensors seem to have distinct interference profiles. We discuss two kinds of interference effects for NPI licensing: memory-retrieval-based and pragmatically triggered. PMID:24109468

  20. Classification of drug molecules considering their IC50 values using mixed-integer linear programming based hyper-boxes method.

    PubMed

    Armutlu, Pelin; Ozdemir, Muhittin E; Uney-Yuksektepe, Fadime; Kavakli, I Halil; Turkay, Metin

    2008-10-03

    A priori analysis of the activity of drugs on the target protein by computational approaches can be useful in narrowing down drug candidates for further experimental tests. Currently, there are a large number of computational methods that predict the activity of drugs on proteins. In this study, we approach the activity prediction problem as a classification problem and aim to improve the classification accuracy by introducing an algorithm that combines partial least squares regression with a mixed-integer programming based hyper-boxes classification method, where drug molecules are classified as low active or high active with regard to their binding activity (IC50 values) on target proteins. We also aim to determine the most significant molecular descriptors for the drug molecules. We first apply our approach by analyzing the activities of widely known inhibitor datasets, including Acetylcholinesterase (ACHE), Benzodiazepine Receptor (BZR), Dihydrofolate Reductase (DHFR), and Cyclooxygenase-2 (COX-2), with known IC50 values. The results at this stage proved that our approach consistently gives better classification accuracies than 63 other reported classification methods, such as SVM and Naïve Bayes; we were able to predict the experimentally determined IC50 values with a worst-case accuracy of 96%. To further test the applicability of this approach, we created a dataset of Cytochrome P450 C17 inhibitors and predicted their activities with 100% accuracy. Our results indicate that this approach can be utilized to predict the inhibitory effects of inhibitors based on their molecular descriptors. This approach will not only enhance the drug discovery process, but also save time and resources.
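
The hyper-box idea can be sketched as follows: each activity class is enclosed in an axis-aligned box in descriptor space, and a molecule is assigned to the box that contains (or is nearest to) its descriptor vector. The descriptor values and the data-envelope boxes below are invented for illustration; the paper derives optimal boxes with a mixed-integer linear program, which is omitted here.

```python
import numpy as np

# toy descriptor vectors for two activity classes in a 2-D descriptor space
low = np.array([[0.1, 0.2], [0.3, 0.1], [0.2, 0.3]])
high = np.array([[0.8, 0.9], [0.7, 0.8], [0.9, 0.7]])

# one axis-aligned hyper-box per class (per-descriptor min/max bounds);
# here we simply take the data envelope instead of solving a MILP
boxes = {
    "low_active": (low.min(axis=0), low.max(axis=0)),
    "high_active": (high.min(axis=0), high.max(axis=0)),
}

def classify(x):
    """Assign x to the box containing it, else to the nearest box."""
    x = np.asarray(x, dtype=float)
    for name, (lo, hi) in boxes.items():
        if np.all(x >= lo) and np.all(x <= hi):
            return name
    def dist(bounds):                      # distance from x to a box
        lo, hi = bounds
        return np.linalg.norm(x - np.clip(x, lo, hi))
    return min(boxes, key=lambda name: dist(boxes[name]))

label = classify([0.75, 0.85])             # falls inside the high-active box
```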

  1. "Quantum Interference with Slits" Revisited

    ERIC Educational Resources Information Center

    Rothman, Tony; Boughn, Stephen

    2011-01-01

    Marcella has presented a straightforward technique employing the Dirac formalism to calculate single- and double-slit interference patterns. He claims that no reference is made to classical optics or scattering theory and that his method therefore provides a purely quantum mechanical description of these experiments. He also presents his…

  2. Localizing text in scene images by boundary clustering, stroke segmentation, and string fragment classification.

    PubMed

    Yi, Chucai; Tian, Yingli

    2012-09-01

    In this paper, we propose a novel framework to extract text regions from scene images with complex backgrounds and multiple text appearances. This framework consists of three main steps: boundary clustering (BC), stroke segmentation, and string fragment classification. In BC, we propose a new bigram-color-uniformity-based method to model both text and attachment surface, and cluster edge pixels based on color pairs and spatial positions into boundary layers. Then, stroke segmentation is performed at each boundary layer by color assignment to extract character candidates. We propose two algorithms to combine the structural analysis of text stroke with color assignment and filter out background interferences. Further, we design a robust string fragment classification based on Gabor-based text features. The features are obtained from feature maps of gradient, stroke distribution, and stroke width. The proposed framework of text localization is evaluated on scene images, born-digital images, broadcast video images, and images of handheld objects captured by blind persons. Experimental results on respective datasets demonstrate that the framework outperforms state-of-the-art localization algorithms.

  3. Multistep method to deal with large datasets in asteroid family classification

    NASA Astrophysics Data System (ADS)

    Knežević, Z.; Milani, A.; Cellino, A.; Novaković, B.; Spoto, F.; Paolicchi, P.

    2014-07-01

    A fast increase in the number of asteroids with accurately determined orbits and with known physical properties makes it more and more challenging to perform, maintain, and update a classification of asteroids into families. We have therefore developed a new approach to the family classification by combining the Hierarchical Clustering Method (HCM) [1] to identify the families with an automated method to add members to already known families. This procedure makes use of the maximum available information, in particular, of that contained in the proper elements catalog [2]. The catalog of proper elements and absolute magnitudes used in our study contains 336 319 numbered asteroids with an information content of 16.31 Mb. The WISE catalog of albedos [3] and SDSS catalog of color indexes [4] contain 94 632 and 59 975 entries, respectively, with a total amount of information of 0.93 Mb. Our procedure makes use of the segmentation of the proper elements catalog by semimajor axis, to deal with a manageable number of objects in each zone, and by inclination, to account for lower density of high-inclination objects. By selecting from the catalog a much smaller number of large asteroids, in the first step, we identify a number of core families; to these, in the second step, we attribute the next layer of smaller objects. In the third step, we remove all the family members from the catalog, and reapply the HCM to the rest; this gives both satellite families which extend the core families and new independent families, consisting mainly of small asteroids. These two cases are separated in the fourth step by attribution of another layer of new members and by merging intersecting families. This leads to a classification with 128 families and 87 095 members. The list of members is updated automatically with each update of the proper elements catalog, and this represents the final and repetitive step of the procedure. Changes in the list of families are not automated.
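
The core clustering step can be sketched with a minimal single-linkage procedure on toy proper elements: points whose mutual distance falls below a cutoff are chained into the same family, in the spirit of the HCM. The element values, the metric (plain Euclidean rather than the standard metric of the HCM literature), and the cutoff are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy "proper elements" (a, e, sin i): two compact families plus background
fam1 = rng.normal([2.3, 0.10, 0.05], 0.002, size=(30, 3))
fam2 = rng.normal([2.7, 0.15, 0.10], 0.002, size=(30, 3))
bkg = rng.uniform([2.1, 0.0, 0.0], [3.0, 0.3, 0.3], size=(20, 3))
elems = np.vstack([fam1, fam2, bkg])

def hcm(points, cutoff):
    """Single-linkage clustering: chain together all points whose mutual
    distance is below `cutoff` (the spirit of the HCM)."""
    n = len(points)
    parent = list(range(n))
    def find(i):                       # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    for i in range(n):
        for j in range(i + 1, n):
            if d[i, j] < cutoff:
                parent[find(i)] = find(j)
    return np.array([find(i) for i in range(n)])

labels = hcm(elems, cutoff=0.02)
sizes = {l: int((labels == l).sum()) for l in set(labels)}
n_families = sum(1 for s in sizes.values() if s >= 5)   # families vs. background
```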

  4. A compressed sensing method with analytical results for lidar feature classification

    NASA Astrophysics Data System (ADS)

    Allen, Josef D.; Yuan, Jiangbo; Liu, Xiuwen; Rahmes, Mark

    2011-04-01

    We present an innovative way to autonomously classify LiDAR points into bare earth, building, vegetation, and other categories. One desirable product of LiDAR data is the automatic classification of the points in the scene. Our algorithm automatically classifies scene points using compressed sensing methods via Orthogonal Matching Pursuit algorithms, utilizing a generalized K-Means clustering algorithm to extract buildings and foliage from a Digital Surface Model (DSM). This technology reduces manual editing while being cost effective for large-scale automated global scene modeling. Quantitative analyses are provided using Receiver Operating Characteristic (ROC) curves to show probability of detection and false alarm for building vs. vegetation classification. Histograms are shown with sample-size metrics. Our inpainting algorithms then fill the voids where buildings and vegetation were removed, utilizing Computational Fluid Dynamics (CFD) techniques and Partial Differential Equations (PDEs) to create an accurate Digital Terrain Model (DTM) [6]. Inpainting preserves building height contour consistency and edge sharpness of identified inpainted regions. Qualitative results illustrate other benefits, such as terrain inpainting's unique ability to minimize or eliminate undesirable terrain data artifacts.
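
A bare-bones Orthogonal Matching Pursuit, the greedy sparse solver named in the abstract, can be sketched as follows; the random dictionary and 2-sparse signal are synthetic stand-ins for LiDAR-derived features, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(2)

def omp(D, x, n_nonzero):
    """Orthogonal Matching Pursuit: greedily pick the dictionary atom most
    correlated with the residual, then re-fit the support by least squares."""
    residual, support = x.copy(), []
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    full = np.zeros(D.shape[1])
    full[support] = coef
    return full

# random dictionary with unit-norm atoms and a 2-sparse synthetic signal
D = rng.normal(size=(30, 50))
D /= np.linalg.norm(D, axis=0)
true = np.zeros(50)
true[[7, 31]] = [1.5, -2.0]
x = D @ true

coef = omp(D, x, n_nonzero=2)
recovered = set(int(i) for i in np.nonzero(coef)[0])
```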

  5. Prevalence of rheumatoid arthritis in persons 60 years of age and older in the United States: effect of different methods of case classification.

    PubMed

    Rasch, Elizabeth K; Hirsch, Rosemarie; Paulose-Ram, Ryne; Hochberg, Marc C

    2003-04-01

    To determine prevalence estimates for rheumatoid arthritis (RA) in noninstitutionalized older adults in the US. Prevalence estimates were compared using 3 different classification methods based on current classification criteria for RA. Data from the Third National Health and Nutrition Examination Survey (NHANES-III) were used to generate prevalence estimates by 3 classification methods in persons 60 years of age and older (n = 5,302). Method 1 applied the "n of k" rule, such that subjects who met 3 of 6 of the American College of Rheumatology (ACR) 1987 criteria were classified as having RA (data from hand radiographs were not available). In method 2, the ACR classification tree algorithm was applied. For method 3, medication data were used to augment case identification via method 2. Population prevalence estimates and 95% confidence intervals (95% CIs) were determined using the 3 methods on data stratified by sex, race/ethnicity, age, and education. Overall prevalence estimates using the 3 classification methods were 2.03% (95% CI 1.30-2.76), 2.15% (95% CI 1.43-2.87), and 2.34% (95% CI 1.66-3.02), respectively. The prevalence of RA was generally greater in the following groups: women, Mexican Americans, respondents with less education, and respondents who were 70 years of age and older. The prevalence of RA in persons 60 years of age and older is approximately 2%, representing the proportion of the US elderly population who will most likely require medical intervention because of disease activity. Different classification methods yielded similar prevalence estimates, although detection of RA was enhanced by incorporation of data on use of prescription medications, an important consideration in large population surveys.
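
The arithmetic behind a point prevalence with a 95% CI can be sketched with hypothetical counts (NHANES-III estimates additionally use survey design weights, which are omitted here):

```python
import math

# hypothetical counts chosen to give roughly the ~2% headline prevalence;
# a real survey estimate would incorporate the sampling design weights
cases, n = 108, 5302
p = cases / n                              # point prevalence
se = math.sqrt(p * (1 - p) / n)            # binomial standard error
ci = (p - 1.96 * se, p + 1.96 * se)        # normal-approximation 95% CI
```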

  6. Hazard property classification of waste according to the recent propositions of the EC using different methods.

    PubMed

    Hennebert, Pierre; van der Sloot, Hans A; Rebischung, Flore; Weltens, Reinhilde; Geerts, Lieve; Hjelmar, Ole

    2014-10-01

    Hazard classification of waste is a necessity, but the hazard properties (named "H" and soon "HP") are still not all defined in a practical and operational manner at EU level. Following discussion of subsequent draft proposals from the Commission, there is still no final decision. Methods to implement the proposals have recently been proposed: test methods for physical risks, test batteries for aquatic and terrestrial ecotoxicity, an analytical package for exhaustive determination of organic substances and mineral elements, surrogate methods for the speciation of mineral elements in mineral substances in waste, and calculation methods for human toxicity and ecotoxicity with M factors. In this paper the different proposed methods have been applied to a large assortment of solid and liquid wastes (>100). Data for 45 wastes - documented with extensive chemical analysis and flammability tests - were assessed in terms of the different HP criteria, and the results were compared to the European List of Waste (LoW) in the absence of an independent classification. For most waste streams the classification matches the designation provided in the LoW. This indicates that the criteria used by the LoW are similar to the HP limit values. This data set showed that HP 14 'Ecotoxic chronic' is the most discriminating HP. All wastes classified as acutely ecotoxic are also chronically ecotoxic, so a separate assessment of acute ecotoxicity is not needed. The high number of HP 14-classified wastes is due to the very low limit values when stringent M factors are applied to total concentrations (worst-case method). With the M factor set to 1, the classification method does not discriminate sufficiently between hazardous and non-hazardous materials. The second most frequent hazard is HP 7 'Carcinogenic', the third is HP 10 'Toxic for reproduction', and the fourth is HP 4 'Irritant - skin irritation and eye damage'. In a stepwise approach, it seems relevant to assess HP 14 first, then, if

  7. Classification and Framing in the Case Method: Discussion Leaders' Questions

    ERIC Educational Resources Information Center

    Badger, James

    2010-01-01

    Basil Bernstein's classification and framing was adopted as a theoretical model to analyse the instruction of two university professors who incorporated case studies into their graduate business and education courses. Classification and framing allows for a meaningful analysis of the discussion leader's questions that facilitate students'…

  8. Excitonic quantum interference in a quantum dot chain with rings.

    PubMed

    Hong, Suc-Kyoung; Nam, Seog Woo; Yeon, Kyu-Hwang

    2008-04-16

    We demonstrate excitonic quantum interference in a closely spaced quantum dot chain with nanorings. Using the resonant dipole-dipole interaction model with a direct diagonalization method, we find the peculiar feature that the excitation of specified quantum dots in the chain is completely inhibited, depending on the orientational configuration of the transition dipole moments and the specified initial preparation of the excitation. In practice, these excited states facilitating quantum interference can provide a conceptual basis for quantum interference devices based on excitonic hopping.

  9. Multiple-3D-object secure information system based on phase shifting method and single interference.

    PubMed

    Li, Wei-Na; Shi, Chen-Xiao; Piao, Mei-Lan; Kim, Nam

    2016-05-20

    We propose a multiple-3D-object secure information system for encrypting multiple three-dimensional (3D) objects based on the three-step phase shifting method. During the decryption procedure, five phase functions (PFs) are reduced to three PFs in comparison with our previous method, which implies that one cross beam splitter is utilized to implement the single decryption interference. The advantages of the proposed scheme also include the following: each 3D object can be decrypted independently, without first decrypting a series of other objects; the quality of the decrypted slice image of each object is high, with correlation coefficient values none of which is lower than 0.95; and no iterative algorithm is involved. The feasibility of the proposed scheme is demonstrated by computer simulation results.
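
The three-step phase shifting recovery underlying such methods can be sketched with the standard closed form for shifts of 0, 2π/3, and 4π/3, applied to synthetic intensities with a known phase (the bias, modulation, and phase values are illustrative, and this is the generic textbook formula, not the paper's full encryption scheme):

```python
import math

def recover_phase(i1, i2, i3):
    """Closed-form phase from three intensities with shifts 0, 2π/3, 4π/3."""
    return math.atan2(math.sqrt(3) * (i3 - i2), 2 * i1 - i2 - i3)

bias, mod, true_phase = 1.0, 0.5, 0.7      # illustrative fringe parameters
I = [bias + mod * math.cos(true_phase + s)
     for s in (0.0, 2 * math.pi / 3, 4 * math.pi / 3)]
phi = recover_phase(*I)                     # recovers true_phase exactly
```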

  10. Energy-efficient algorithm for classification of states of wireless sensor network using machine learning methods

    NASA Astrophysics Data System (ADS)

    Yuldashev, M. N.; Vlasov, A. I.; Novikov, A. N.

    2018-05-01

    This paper focuses on the development of an energy-efficient algorithm for classification of states of a wireless sensor network using machine learning methods. The proposed algorithm reduces energy consumption by: 1) elimination of monitoring of parameters that do not affect the state of the sensor network, 2) reduction of communication sessions over the network (the data are transmitted only if their values can affect the state of the sensor network). The studies of the proposed algorithm have shown that at classification accuracy close to 100%, the number of communication sessions can be reduced by 80%.
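
The second energy-saving rule, transmitting only when a reading could affect the network state, can be sketched as a dead-band filter; the readings and threshold below are invented for illustration:

```python
def sessions_needed(readings, dead_band):
    """Transmit only when a reading moves outside the dead band around the
    last transmitted value; otherwise stay silent to save energy."""
    last, sessions = None, 0
    for r in readings:
        if last is None or abs(r - last) > dead_band:
            sessions += 1          # value may change the network state
            last = r
    return sessions

readings = [20.0, 20.1, 19.9, 20.05, 23.0, 23.1, 22.9, 20.0]
naive = len(readings)                        # transmit every sample
smart = sessions_needed(readings, dead_band=0.5)
```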

  11. [WHO classification of head and neck tumours 2017: Main novelties and update of diagnostic methods].

    PubMed

    Sarradin, Victor; Siegfried, Aurore; Uro-Coste, Emmanuelle; Delord, Jean-Pierre

    2018-06-01

    The publication of the new WHO classification of head and neck tumours in 2017 brought major modifications. In particular, a new chapter is dedicated to the oropharynx, focusing on the description of squamous cell carcinoma induced by human papillomavirus (HPV), and new tumor entities are described in the nasal cavities and sinuses. This article presents the novelties and main changes of this new classification, as well as updates to the diagnostic methods (immunohistochemistry, cytogenetics and molecular biology). Copyright © 2018 Société Française du Cancer. Published by Elsevier Masson SAS. All rights reserved.

  12. The Classification of Ground Roasted Decaffeinated Coffee Using UV-VIS Spectroscopy and SIMCA Method

    NASA Astrophysics Data System (ADS)

    Yulia, M.; Asnaning, A. R.; Suhandy, D.

    2018-05-01

    In this work, the classification of decaffeinated versus non-decaffeinated coffee samples using UV-VIS spectroscopy and the SIMCA method was investigated. A total of 200 samples of ground roasted coffee were used (100 samples of decaffeinated coffee and 100 samples of non-decaffeinated coffee). After extraction and dilution, the spectra of the coffee sample solutions were acquired using a UV-VIS spectrometer (Genesys™ 10S UV-VIS, Thermo Scientific, USA) in the range of 190-1100 nm. Multivariate analyses of the spectra were performed using principal component analysis (PCA) and soft independent modeling of class analogy (SIMCA). The SIMCA model classified decaffeinated and non-decaffeinated coffee samples with 100% sensitivity and specificity.
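
The SIMCA idea, one PCA model per class with classification by distance to each class subspace, can be sketched on toy "spectra" (the data, dimensionality, and single-component models are illustrative assumptions, not the paper's calibration):

```python
import numpy as np

rng = np.random.default_rng(3)

# toy "spectra": each class lies near its own 1-D subspace plus noise
def make_class(direction, n=40):
    t = rng.normal(size=(n, 1))
    return t * direction + rng.normal(0, 0.05, size=(n, len(direction)))

dir_a = np.array([1.0, 0.0, 0.0, 0.0])
dir_b = np.array([0.0, 1.0, 0.0, 0.0])
Xa, Xb = make_class(dir_a), make_class(dir_b)

def pca_model(X, k=1):
    """SIMCA keeps a separate k-component PCA model for every class."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

models = {"a": pca_model(Xa), "b": pca_model(Xb)}

def residual(x, model):
    mean, P = model
    c = x - mean
    return np.linalg.norm(c - P.T @ (P @ c))   # distance to class subspace

def classify(x):
    return min(models, key=lambda m: residual(x, models[m]))

pred = classify(np.array([2.0, 0.05, 0.0, 0.0]))   # clearly class "a"-like
```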

  13. A New Digital Signal Processing Method for Spectrum Interference Monitoring

    NASA Astrophysics Data System (ADS)

    Angrisani, L.; Capriglione, D.; Ferrigno, L.; Miele, G.

    2011-01-01

    The frequency spectrum is a limited shared resource, used nowadays by an ever-growing number of different applications. Generally, the companies providing such services pay governments for the right to use a limited portion of the spectrum; consequently, they expect assurance that the licensed radio spectrum is not affected by significant external interference. At the same time, they have to guarantee that their devices make efficient use of the spectrum and meet the electromagnetic compatibility regulations. The competent authorities are therefore called on to control access to the spectrum by adopting suitable management and monitoring policies, and manufacturers have to periodically verify the correct operation of their apparatus. Several measurement solutions are on the market, generally real-time spectrum analyzers and measurement receivers. Both offer good metrological accuracy but have costs, dimensions, and weights that make use in the field impractical. This paper presents a first step toward a digital-signal-processing-based measurement instrument able to meet the above needs; particular attention is given to the DSP-based measurement section of the instrument. To this end, an innovative measurement method for spectrum monitoring and management is proposed. It performs an efficient sequential analysis based on sample-by-sample digital processing. Three main goals are pursued: (i) measurement performance comparable to that of other methods proposed in the literature; (ii) fast measurement time; and (iii) easy implementation on cost-effective measurement hardware.
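
The sample-by-sample flavor of analysis the abstract describes can be illustrated with the Goertzel recurrence, a classic streaming way to evaluate a single DFT bin without buffering a whole frame; the frame length and bin choice are illustrative, and this is not the paper's algorithm:

```python
import math

def goertzel_power(samples, k, n):
    """Squared magnitude of DFT bin k of an n-sample frame, computed with
    the Goertzel recurrence -- one multiply-add pair per incoming sample."""
    w = 2 * math.pi * k / n
    coeff = 2 * math.cos(w)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

n, k = 128, 8                                   # frame length, monitored bin
tone = [math.cos(2 * math.pi * k * i / n) for i in range(n)]        # in band
off = [math.cos(2 * math.pi * (k + 3) * i / n) for i in range(n)]   # out of band
p_on, p_off = goertzel_power(tone, k, n), goertzel_power(off, k, n)
```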

  14. Intraindividual Coupling of Daily Stressors and Cognitive Interference in Old Age

    PubMed Central

    Mogle, Jacqueline; Sliwinski, Martin J.

    2011-01-01

    Objectives. The current study examined emotional and cognitive reactions to daily stress. We examined the psychometric properties of a short cognitive interference measure and how cognitive interference was associated with measures of daily stress and negative affect (NA) between persons and within persons over time. Methods. A sample of 87 older adults (Mage = 83, range = 70–97, 28% male) completed measures of daily stress, cognitive interference, and NA on 6 days within a 14-day period. Results. The measure yielded a single-factor solution with good reliability both between and within persons. At the between-person level, NA accounted for the effects of daily stress on individual differences in cognitive interference. At the within-person level, NA and daily stress were unique predictors of cognitive interference. Furthermore, the within-person effect of daily stress on cognitive interference decreased significantly with age. Discussion. These results support theoretical work regarding associations among stress, NA, and cognitive interference, both across persons and within persons over time. PMID:21743045

  15. Study on a pattern classification method of soil quality based on simplified learning sample dataset

    USGS Publications Warehouse

    Zhang, Jiahua; Liu, S.; Hu, Y.; Tian, Y.

    2011-01-01

    Given the massive amount of soil information involved in current soil quality grade evaluation, this paper constructs an intelligent classification approach for soil quality grade based on classical sampling techniques and a disordered multiclassification logistic regression model. As a case study, we determined the learning sample capacity under a given confidence level and estimation accuracy, and used the c-means algorithm to automatically extract a simplified learning sample dataset from the cultivated soil quality grade evaluation database for the study area, Longchuan County in Guangdong Province; a disordered logistic classifier model was then built and the calculation and analysis steps of soil quality grade intelligent classification are given. The results indicate that the soil quality grade can be effectively learned and predicted from the extracted simplified dataset with this method, which changes the traditional method of soil quality grade evaluation. © 2011 IEEE.

  16. An improved SRC method based on virtual samples for face recognition

    NASA Astrophysics Data System (ADS)

    Fu, Lijun; Chen, Deyun; Lin, Kezheng; Li, Ao

    2018-07-01

    The sparse representation classifier (SRC) performs classification by evaluating which class leads to the minimum representation error. However, in the real world the number of available training samples is limited and, owing to noise interference, the training samples cannot accurately represent the test sample linearly. Therefore, in this paper, we first produce virtual samples by exploiting the original training samples, with the aim of increasing the number of training samples. Then, we take the intra-class differences as a data representation of partial noise, and utilize the intra-class differences and training samples simultaneously to represent the test sample in a linear way according to the theory of the SRC algorithm. Using weighted score-level fusion, the respective representation scores of the virtual samples and the original training samples are fused together to obtain the final classification result. Experimental results on multiple face databases show that our proposed method achieves very satisfactory classification performance.
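
The class-wise representation step of SRC, here with pairwise-mean virtual samples enlarging each class dictionary, can be sketched as follows; plain least squares stands in for sparse coding, and the data and virtual-sample rule are illustrative assumptions rather than the paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(4)

# toy "face" data: two classes scattered around different prototypes
proto = {0: rng.normal(size=8), 1: rng.normal(size=8)}
train = {c: np.stack([p + rng.normal(0, 0.1, 8) for _ in range(3)])
         for c, p in proto.items()}

# virtual samples: pairwise means of the originals enlarge each class set
virtual = {c: np.stack([(X[i] + X[j]) / 2
                        for i in range(len(X)) for j in range(i + 1, len(X))])
           for c, X in train.items()}

def class_residual(x, samples):
    """How well the samples (rows) linearly represent x; plain least
    squares stands in for the sparse coding step of SRC."""
    A = samples.T                          # columns are dictionary atoms
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    return np.linalg.norm(x - A @ coef)

def classify(x):
    scores = {c: class_residual(x, np.vstack([train[c], virtual[c]]))
              for c in train}
    return min(scores, key=scores.get)

test_sample = proto[1] + rng.normal(0, 0.1, 8)
pred = classify(test_sample)
```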

  17. Wind tunnel wall interference

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Mineck, Raymond E.; Barnwell, Richard W.; Kemp, William B., Jr.

    1986-01-01

    About a decade ago, interest in alleviating wind tunnel wall interference was renewed by advances in computational aerodynamics, concepts of adaptive test section walls, and plans for high Reynolds number transonic test facilities. Selection of NASA Langley cryogenic concept for the National Transonic Facility (NTF) tended to focus the renewed wall interference efforts. A brief overview and current status of some Langley sponsored transonic wind tunnel wall interference research are presented. Included are continuing efforts in basic wall flow studies, wall interference assessment/correction procedures, and adaptive wall technology.

  18. A New Method for Suppressing Periodic Narrowband Interference Based on the Chaotic van der Pol Oscillator

    NASA Astrophysics Data System (ADS)

    Lu, Jia; Zhang, Xiaoxing; Xiong, Hao

    The chaotic van der Pol oscillator is a powerful tool for detecting defects in electric systems by using online partial discharge (PD) monitoring. This paper focuses on detecting weak PD signals in strong periodic narrowband interference by exploiting chaotic systems' high sensitivity to periodic narrowband interference signals and their immunity to white noise and PD signals. A new approach to removing the periodic narrowband interference by using a van der Pol chaotic oscillator is described by analyzing the motion characteristics of the chaotic oscillator on the basis of the van der Pol equation. Furthermore, the Floquet index for measuring the amplitude of periodic narrowband signals is redefined. The denoised signal processed by the chaotic van der Pol oscillators is further processed by wavelet analysis. Finally, the denoising results verify that the periodic narrowband and white noise interference can be removed efficiently by combining the theory of the chaotic van der Pol oscillator with wavelet analysis.
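
The underlying dynamical system can be integrated directly; a minimal forced van der Pol oscillator with fixed-step RK4 is sketched below (all parameter values are illustrative, and the Floquet-index detection logic of the paper is not reproduced):

```python
import math

def vdp_deriv(t, state, mu, amp, omega):
    """Forced van der Pol: x'' - mu*(1 - x^2)*x' + x = amp*cos(omega*t)."""
    x, v = state
    return (v, mu * (1 - x * x) * v - x + amp * math.cos(omega * t))

def rk4_step(f, state, t, dt, *args):
    k1 = f(t, state, *args)
    k2 = f(t + dt / 2, [s + dt / 2 * k for s, k in zip(state, k1)], *args)
    k3 = f(t + dt / 2, [s + dt / 2 * k for s, k in zip(state, k2)], *args)
    k4 = f(t + dt, [s + dt * k for s, k in zip(state, k3)], *args)
    return [s + dt / 6 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

state, dt = [0.1, 0.0], 0.01
xs = []
for i in range(20000):
    state = rk4_step(vdp_deriv, state, i * dt, dt, 1.0, 0.5, 1.1)
    xs.append(state[0])
peak = max(abs(x) for x in xs[10000:])   # steady-state oscillation amplitude
```

The van der Pol nonlinearity keeps the forced response bounded, so the steady-state amplitude stays near the unforced limit-cycle value of about 2.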

  19. Estimation of lactose interference in vaccines and a proposal of methodological adjustment of total protein determination by the lowry method.

    PubMed

    Kusunoki, Hideki; Okuma, Kazu; Hamaguchi, Isao

    2012-01-01

    For national regulatory testing in Japan, the Lowry method is used for the determination of total protein content in vaccines. However, many substances are known to interfere with the Lowry method, rendering accurate estimation of protein content difficult. To accurately determine the total protein content in vaccines, it is necessary to identify the major interfering substances and improve the methodology for removing such substances. This study examined the effects of high levels of lactose with low levels of protein in freeze-dried, cell culture-derived Japanese encephalitis vaccine (inactivated). Lactose was selected because it is a reducing sugar that is expected to interfere with the Lowry method. Our results revealed that concentrations of ≥ 0.1 mg/mL lactose interfered with the Lowry assays and resulted in overestimation of the protein content in a lactose concentration-dependent manner. On the other hand, our results demonstrated that it is important for the residual volume to be ≤ 0.05 mL after trichloroacetic acid precipitation in order to avoid the effects of lactose. Thus, the method presented here is useful for accurate protein determination by the Lowry method, even when it is used for determining low levels of protein in vaccines containing interfering substances. In this study, we have reported a methodological adjustment that allows accurate estimation of protein content for national regulatory testing, when the vaccine contains interfering substances.

  20. Interference-Fit Life Factors for Roller Bearings

    NASA Technical Reports Server (NTRS)

    Oswald, Fred B.; Zaretsky, Erwin V.; Poplawski, Joseph V.

    2009-01-01

    The effect of hoop stresses in reducing cylindrical roller bearing fatigue life was determined for various classes of inner-ring interference fit. Calculations were performed for up to 7 fit classes for each of 10 bearing sizes. The hoop stresses were superimposed on the Hertzian principal stresses created by the applied radial load to calculate roller bearing fatigue life. A method was developed through a series of equations to calculate the life reduction for cylindrical roller bearings. All calculated lives are for zero initial internal clearance. Any reduction in bearing clearance due to interference fit would be compensated by increasing the initial (unmounted) clearance. Results are presented as tables and charts of life factors for bearings with light, moderate, and heavy loads and interference fits ranging from extremely light to extremely heavy for bearing accuracy class RBEC-5 (ISO class 5). Interference fits on the inner ring of a cylindrical roller bearing can significantly reduce bearing fatigue life. In general, life factors are smaller (lower life) for bearings running under light load where the unfactored life is highest. The various bearing series within a particular bore size had almost identical interference-fit life factors for a particular fit. The tightest fit at the high end of the tolerance band produces a life factor of approximately 0.40 for an inner-race maximum Hertz stress of 1200 MPa (175 ksi) and a life factor of 0.60 for an inner-race maximum Hertz stress of 2200 MPa (320 ksi). Interference fits also impact the maximum Hertz stress-life relation.

  1. A combined reconstruction-classification method for diffuse optical tomography.

    PubMed

    Hiltunen, P; Prince, S J D; Arridge, S

    2009-11-07

    We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.
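
    The alternating scheme the abstract describes can be sketched on a toy linear problem. Here the matrix A stands in for the (in reality nonlinear) DOT forward operator, a two-component Gaussian mixture is refit to the current image at each iteration, and its per-pixel mean and variance define the variable-mean Tikhonov term; all sizes and values are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy linear forward model y = A @ x + noise, standing in for DOT.
n_pix, n_meas = 40, 60
A = rng.normal(size=(n_meas, n_pix))
x_true = np.where(np.arange(n_pix) < 20, 1.0, 3.0)   # two "tissue" classes
y = A @ x_true + 0.05 * rng.normal(size=n_meas)

x = np.linalg.lstsq(A, y, rcond=None)[0]             # initial reconstruction
for _ in range(5):
    # Classification step: mixture-of-Gaussians fit to the current image.
    gmm = GaussianMixture(n_components=2, random_state=0).fit(x.reshape(-1, 1))
    resp = gmm.predict_proba(x.reshape(-1, 1))
    prior_mean = resp @ gmm.means_.ravel()            # per-pixel prior mean
    prior_var = resp @ gmm.covariances_.ravel() + 1e-6
    # Reconstruction step: Tikhonov update with variable mean and variance.
    D = np.diag(1.0 / prior_var)
    x = np.linalg.solve(A.T @ A + D, A.T @ y + D @ prior_mean)

print(np.round(x[:3], 2), np.round(x[-3:], 2))
```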

  2. Computer classification of remotely sensed multispectral image data by extraction and classification of homogeneous objects

    NASA Technical Reports Server (NTRS)

    Kettig, R. L.

    1975-01-01

    A method of classification of digitized multispectral images is developed and experimentally evaluated on actual earth resources data collected by aircraft and satellite. The method is designed to exploit the characteristic dependence between adjacent states of nature that is neglected by the more conventional simple-symmetric decision rule. Thus contextual information is incorporated into the classification scheme. The principal reason for doing this is to improve the accuracy of the classification. For general types of dependence this would usually require more computation per resolution element than the simple-symmetric classifier. But when the dependence occurs in the form of redundance, the elements can be classified collectively, in groups, thereby reducing the number of classifications required.
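
    The benefit of classifying homogeneous groups of pixels jointly, rather than pixel by pixel, can be seen in a one-dimensional toy example with two Gaussian classes (illustrative parameters, maximum-likelihood decisions):

```python
import numpy as np

rng = np.random.default_rng(1)
means = np.array([0.0, 1.0])          # class-conditional means
truth = np.repeat([0, 1], 64)         # two homogeneous fields of 64 pixels
image = means[truth] + 0.8 * rng.normal(size=128)

def per_pixel(image):
    # Conventional pixel-wise maximum-likelihood classification.
    return np.argmin((image[:, None] - means[None, :]) ** 2, axis=1)

def per_block(image, block=8):
    # ECHO-style: classify each homogeneous block jointly by summing the
    # per-pixel costs, producing one decision per block instead of eight.
    cost = ((image.reshape(-1, block)[:, :, None] - means) ** 2).sum(axis=1)
    return np.repeat(np.argmin(cost, axis=1), block)

print((per_pixel(image) == truth).mean(), (per_block(image) == truth).mean())
```

    Because the block decision averages the noise over eight samples, its effective signal-to-noise ratio improves, illustrating the accuracy gain the abstract attributes to collective classification.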

  3. Supervised DNA Barcodes species classification: analysis, comparisons and results

    PubMed Central

    2014-01-01

    Background Specific fragments, coming from short portions of DNA (e.g., mitochondrial, nuclear, and plastid sequences), have been defined as DNA Barcodes and can be used as markers for organisms of the main life kingdoms. Species classification with DNA Barcode sequences has been proven effective on different organisms. Indeed, specific gene regions have been identified as Barcodes: COI in animals, rbcL and matK in plants, and ITS in fungi. The classification problem assigns an unknown specimen to a known species by analyzing its Barcode. This task has to be supported with reliable methods and algorithms. Methods In this work the efficacy of supervised machine learning methods to classify species with DNA Barcode sequences is shown. The Weka software suite, which includes a collection of supervised classification methods, is adopted to address the task of DNA Barcode analysis. Classifier families are tested on synthetic and empirical datasets belonging to the animal, fungus, and plant kingdoms. In particular, the function-based method Support Vector Machines (SVM), the rule-based RIPPER, the decision tree C4.5, and the Naïve Bayes method are considered. Additionally, the classification results are compared with respect to ad-hoc and well-established DNA Barcode classification methods. Results A software tool that converts DNA Barcode FASTA sequences to the Weka format is released, to accommodate different input formats and to allow the execution of the classification procedure. The analysis of results on synthetic and real datasets shows that SVM and Naïve Bayes outperform on average the other considered classifiers, although they do not provide a human-interpretable classification model. Rule-based methods have slightly inferior classification performance, but deliver the species-specific positions and nucleotide assignments. On synthetic data the supervised machine learning methods obtain superior classification performances with respect to the traditional DNA Barcode
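
    A minimal version of such a supervised barcode classifier can be sketched with k-mer counts as features and scikit-learn's Naïve Bayes standing in for the Weka implementations; the sequences below are synthetic toys, not real barcodes:

```python
from itertools import product

import numpy as np
from sklearn.naive_bayes import MultinomialNB

KMERS = ["".join(p) for p in product("ACGT", repeat=3)]
INDEX = {km: i for i, km in enumerate(KMERS)}

def kmer_counts(seq, k=3):
    # Represent a barcode sequence as a vector of k-mer counts.
    vec = np.zeros(len(KMERS))
    for i in range(len(seq) - k + 1):
        vec[INDEX[seq[i:i + k]]] += 1
    return vec

# Tiny synthetic "barcodes": species A is AT-rich, species B is GC-rich.
train = ["ATATTAATAT", "ATTATAATTA", "GCGGCCGCGC", "GCCGGCGGCC"]
labels = ["A", "A", "B", "B"]
clf = MultinomialNB().fit(np.array([kmer_counts(s) for s in train]), labels)

pred = clf.predict([kmer_counts("ATATATATTA"), kmer_counts("GCGCGGCCGG")])
print(pred)   # ['A' 'B']
```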

  4. Risk-Based Prioritization Method for the Classification of Groundwater Pollution from Hazardous Waste Landfills.

    PubMed

    Yang, Yu; Jiang, Yong-Hai; Lian, Xin-Ying; Xi, Bei-Dou; Ma, Zhi-Fei; Xu, Xiang-Jian; An, Da

    2016-12-01

    Hazardous waste landfill sites are a significant source of groundwater pollution. To ensure that these landfills with a significantly high risk of groundwater contamination are properly managed, a risk-based ranking method related to groundwater contamination is needed. In this research, a risk-based prioritization method for the classification of groundwater pollution from hazardous waste landfills was established. The method encompasses five phases, including risk pre-screening, indicator selection, characterization, classification and, lastly, validation. In the risk ranking index system employed here, 14 indicators involving hazardous waste landfills and migration in the vadose zone as well as aquifer were selected. The boundary of each indicator was determined by K-means cluster analysis and the weight of each indicator was calculated by principal component analysis. These methods were applied to 37 hazardous waste landfills in China. The result showed that the risk for groundwater contamination from hazardous waste landfills could be ranked into three classes from low to high risk. In all, 62.2 % of the hazardous waste landfill sites were classified in the low and medium risk classes. The process simulation method and standardized anomalies were used to validate the result of risk ranking; the results were consistent with the simulated results related to the characteristics of contamination. The risk ranking method was feasible, valid and can provide reference data related to risk management for groundwater contamination at hazardous waste landfill sites.
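
    The characterization and classification phases can be sketched with scikit-learn: PCA supplies indicator weights and K-means supplies the class boundaries. The indicator matrix below is random stand-in data, and the weighting scheme (first-component loadings) is one plausible reading of the abstract, not the paper's exact formula.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
# Rows: 37 landfill sites; columns: 5 standardized risk indicators
# (stand-in data; the paper uses 14 indicators).
scores = rng.normal(size=(37, 5))

# Indicator weights from the loadings of the first principal component.
pca = PCA().fit(scores)
weights = np.abs(pca.components_[0])
weights /= weights.sum()
composite = scores @ weights                     # weighted risk index

# K-means on the composite index supplies the low/medium/high boundaries.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(composite.reshape(-1, 1))
order = np.argsort(km.cluster_centers_.ravel())  # rank clusters low -> high
rank = {c: r for r, c in enumerate(order)}
risk_class = np.array([rank[c] for c in km.labels_])
print(np.bincount(risk_class))                   # sites per risk class
```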

  5. Effective classification of the prevalence of Schistosoma mansoni.

    PubMed

    Mitchell, Shira A; Pagano, Marcello

    2012-12-01

    To present an effective classification method based on the prevalence of Schistosoma mansoni in the community. We created decision rules (defined by cut-offs for the number of positive slides), which account for imperfect sensitivity, both with a simple adjustment of fixed sensitivity and with a more complex adjustment of sensitivity that changes with prevalence. To reduce screening costs while maintaining accuracy, we propose a pooled classification method. To estimate sensitivity, we use the De Vlas model for worm and egg distributions. We compare the proposed method with the standard method to investigate differences in efficiency, measured by the number of slides read, and accuracy, measured by the probability of correct classification. Modelling varying sensitivity lowers the lower cut-off more significantly than the upper cut-off, correctly classifying regions as moderate rather than low prevalence, so that they receive life-saving treatment. The pooled method classifies directly on the basis of positive pools, avoiding the need to know sensitivity in order to estimate prevalence. For model parameter values describing worm and egg distributions among children, the pooled method with 25 slides achieves an expected 89.9% probability of correct classification, whereas the standard method with 50 slides achieves 88.7%. Among children, it is more efficient and more accurate to use the pooled method for classification of S. mansoni prevalence than the current standard method. © 2012 Blackwell Publishing Ltd.
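
    Under the simplifying assumptions of a perfectly specific slide test and independent slides, the probability that a decision rule flags a community can be computed directly from the binomial distribution; the cut-off and parameter values below are illustrative and are not those of the De Vlas model.

```python
from scipy.stats import binom

def flag_probability(n_slides, cutoff, prevalence, sensitivity):
    # Probability that at least `cutoff` of `n_slides` are read positive,
    # assuming each slide is positive with probability
    # prevalence * sensitivity (perfect specificity assumed).
    return float(binom.sf(cutoff - 1, n_slides, prevalence * sensitivity))

# Chance of classifying a community at or above the treatment threshold
# (cutoff of 5 positives out of 25 slides) as true prevalence varies.
for prev in (0.10, 0.30, 0.50):
    print(prev, round(flag_probability(25, 5, prev, 0.70), 3))
```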

  6. A novel channel selection method for optimal classification in different motor imagery BCI paradigms.

    PubMed

    Shan, Haijun; Xu, Haojie; Zhu, Shanan; He, Bin

    2015-10-21

    For sensorimotor rhythm-based brain-computer interface (BCI) systems, classification of different motor imageries (MIs) remains a crucial problem. An important aspect is how many scalp electrodes (channels) should be used in order to reach optimal performance in classifying motor imaginations. While previous research on channel selection has mainly focused on MI task paradigms without feedback, the present work aims to investigate optimal channel selection in MI task paradigms with real-time feedback (two-class and four-class control paradigms). In the present study, three datasets, recorded respectively from an MI task experiment and from two-class and four-class control experiments, were analyzed offline. Multiple frequency-spatial synthesized features were comprehensively extracted from every channel, and a new enhanced method, IterRelCen, was proposed to perform channel selection. IterRelCen is based on the Relief algorithm but enhances it in two respects: the target sample selection strategy is changed and the idea of iterative computation is adopted, making feature selection more robust. Finally, a multiclass support vector machine was applied as the classifier. The smallest number of channels that yielded the best classification accuracy was considered the optimal channel set. One-way ANOVA was employed to test the significance of performance improvement among using the optimal channels, all channels, and three typical MI channels (C3, C4, Cz). The results show that the proposed method outperformed other channel selection methods, achieving average classification accuracies of 85.2, 94.1, and 83.2 % for the three datasets, respectively. Moreover, the channel selection results reveal that the average numbers of optimal channels were significantly different among the three MI paradigms. It is demonstrated that IterRelCen has a strong ability for feature selection. In addition, the results have shown that the numbers of optimal

  7. Wall Interference in Wind Tunnels

    DTIC Science & Technology

    1982-09-01

    concept is fairly successful in these cases, although some variation of the interference flow field still remains over the airfoil chord. One should...geometry and about as large as we think the tunnel can accommodate. These pressures have brought to the fore the need to improve our methods of...methods cease to be reliable? I would like to return to that point again in a moment. Another point that I think is worth drawing your attention to

  8. Topography of hidden objects using THz digital holography with multi-beam interferences.

    PubMed

    Valzania, Lorenzo; Zolliker, Peter; Hack, Erwin

    2017-05-15

    We present a method for the separation of the signal scattered from an object hidden behind a THz-transparent sample in the framework of THz digital holography in reflection. It combines three images of different interference patterns to retrieve the amplitude and phase distribution of the object beam. Comparison of simulated with experimental images obtained from a metallic resolution target behind a Teflon plate demonstrates that the interference patterns can be described in the simple form of three-beam interference. Holographic reconstructions after the application of the method show a considerable improvement compared to standard reconstructions exclusively based on Fourier transform phase retrieval.

  9. Thick Film Interference.

    ERIC Educational Resources Information Center

    Trefil, James

    1983-01-01

    Discusses why interference effects cannot be seen with a thick film, starting with a review of the origin of interference patterns in thin films. Considers properties of materials in films, properties of the light source, and the nature of light. (JN)

  10. Power line interference attenuation in multi-channel sEMG signals: Algorithms and analysis.

    PubMed

    Soedirdjo, S D H; Ullah, K; Merletti, R

    2015-08-01

    Electromyogram (EMG) recordings are often corrupted by power line interference (PLI) even though the skin is prepared and well-designed instruments are used. This study focuses on the analysis of several recent and classical digital signal processing approaches that have been used to attenuate, if not eliminate, power line interference in EMG signals. A comparison of the signal-to-interference ratio (SIR) of the output signals is presented for four methods: the classical notch filter, spectral interpolation, an adaptive noise canceller with phase-locked loop (ANC-PLL) and an adaptive filter, applied to simulated multichannel monopolar EMG signals with different SIRs. The effect of each method on the shape of the EMG signals is also analyzed. The results show that the ANC-PLL method gives the best output SIR and the lowest shape distortion compared to the other methods. Classical notch filtering is the simplest method, but some information may be lost because it removes both the interference and part of the EMG signal. The notch filter therefore has the lowest performance and introduces distortion into the resulting signals.
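
    The simplest of the four methods, the classical notch filter, and the SIR metric can be sketched with SciPy. A white-noise signal stands in for EMG and a 50 Hz sinusoid for the interference; the Q factor and amplitudes are illustrative.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 2048.0
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(4)
emg = rng.normal(size=t.size)                 # broadband stand-in for EMG
pli = 2.0 * np.sin(2 * np.pi * 50 * t)        # 50 Hz power line interference

def sir_db(signal, interference):
    # Signal-to-interference ratio in dB.
    return 10 * np.log10(np.mean(signal**2) / np.mean(interference**2))

b, a = iirnotch(w0=50.0, Q=30.0, fs=fs)       # classical notch at 50 Hz
cleaned = filtfilt(b, a, emg + pli)
residual_pli = cleaned - filtfilt(b, a, emg)  # interference left after notching
print(round(sir_db(emg, pli), 1), round(sir_db(emg, residual_pli), 1))
```

    The notch also removes the EMG content inside its stopband, which is the information loss and shape distortion the abstract refers to.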

  11. Quantitative DIC microscopy using an off-axis self-interference approach.

    PubMed

    Fu, Dan; Oh, Seungeun; Choi, Wonshik; Yamauchi, Toyohiko; Dorn, August; Yaqoob, Zahid; Dasari, Ramachandra R; Feld, Michael S

    2010-07-15

    Traditional Nomarski differential interference contrast (DIC) microscopy is a very powerful method for imaging nonstained biological samples. However, one of its major limitations is the nonquantitative nature of the imaging. To overcome this problem, we developed a quantitative DIC microscopy method based on off-axis sample self-interference. A digital holography algorithm is applied to obtain quantitative phase gradients in orthogonal directions, which leads to a quantitative phase image through a spiral integration of the phase gradients. This method is practically simple to implement on any standard microscope without stringent requirements on polarization optics. Optical sectioning can be obtained through an enlarged illumination NA.
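
    The integration step, recovering a phase map from two orthogonal phase-gradient images, can also be done in closed form in the Fourier domain rather than by spiral integration; the sketch below demonstrates that variant on a synthetic smooth phase, assuming periodic boundary conditions.

```python
import numpy as np

def integrate_gradients(gx, gy):
    # Recover phi from its orthogonal gradients in the Fourier domain:
    # with Kx, Ky the i-scaled frequency grids, phi_hat equals
    # (Kx * Gx_hat + Ky * Gy_hat) / (Kx**2 + Ky**2).
    ny, nx = gx.shape
    kx = 2j * np.pi * np.fft.fftfreq(nx)[None, :]
    ky = 2j * np.pi * np.fft.fftfreq(ny)[:, None]
    denom = kx**2 + ky**2
    denom[0, 0] = 1.0                      # avoid division by zero at DC
    phi_hat = (kx * np.fft.fft2(gx) + ky * np.fft.fft2(gy)) / denom
    phi_hat[0, 0] = 0.0                    # the mean of phi is unrecoverable
    return np.fft.ifft2(phi_hat).real

# Synthetic smooth phase; its periodic gradients come from FFT derivatives.
n = 64
y, x = np.mgrid[0:n, 0:n] / n
phi = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)
kx = 2j * np.pi * np.fft.fftfreq(n)[None, :]
ky = 2j * np.pi * np.fft.fftfreq(n)[:, None]
gx = np.fft.ifft2(kx * np.fft.fft2(phi)).real
gy = np.fft.ifft2(ky * np.fft.fft2(phi)).real

rec = integrate_gradients(gx, gy)
print(np.max(np.abs(rec - (phi - phi.mean()))))  # reconstruction error ~ 0
```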

  12. REM sleep rescues learning from interference

    PubMed Central

    McDevitt, Elizabeth A.; Duggan, Katherine A.; Mednick, Sara C.

    2015-01-01

    Classical human memory studies investigating the acquisition of temporally-linked events have found that the memories for two events will interfere with each other and cause forgetting (i.e., interference; Wixted, 2004). Importantly, sleep helps consolidate memories and protect them from subsequent interference (Ellenbogen, Hulbert, Stickgold, Dinges, & Thompson-Schill, 2006). We asked whether sleep can also repair memories that have already been damaged by interference. Using a perceptual learning paradigm, we induced interference either before or after a consolidation period. We varied brain states during consolidation by comparing active wake, quiet wake, and naps with either non-rapid eye movement sleep (NREM), or both NREM and REM sleep. When interference occurred after consolidation, sleep and wake both produced learning. However, interference prior to consolidation impaired memory, with retroactive interference showing more disruption than proactive interference. Sleep rescued learning damaged by interference. Critically, only naps that contained REM sleep were able to rescue learning that was highly disrupted by retroactive interference. Furthermore, the magnitude of rescued learning was correlated with the amount of REM sleep. We demonstrate the first evidence of a process by which the brain can rescue and consolidate memories damaged by interference, and that this process requires REM sleep. We explain these results within a theoretical model that considers how interference during encoding interacts with consolidation processes to predict which memories are retained or lost. PMID:25498222

  13. Rapid, simultaneous and interference-free determination of three rhodamine dyes illegally added into chilli samples using excitation-emission matrix fluorescence coupled with second-order calibration method.

    PubMed

    Chang, Yue-Yue; Wu, Hai-Long; Fang, Huan; Wang, Tong; Liu, Zhi; Ouyang, Yang-Zi; Ding, Yu-Jie; Yu, Ru-Qin

    2018-06-15

    In this study, a smart and green analytical method based on a second-order calibration algorithm coupled with excitation-emission matrix (EEM) fluorescence was developed for the determination of rhodamine dyes illegally added into chilli samples. The proposed method not only has the advantage of higher sensitivity than the traditional fluorescence method but also fully displays the "second-order advantage". Pure signals of the analytes were successfully extracted from severely interference-contaminated EEM profiles by using the alternating trilinear decomposition (ATLD) algorithm, even in the presence of common fluorescence problems such as scattering, peak overlaps and unknown interferences. It is worth noting that the unknown interferents can represent different kinds of backgrounds, not only a constant background. In addition, the interpolation method used here can avoid the loss of information on the analytes of interest. The use of a "mathematical separation" instead of a complicated "chemical or physical separation" strategy can be more effective and environmentally friendly. A series of statistical parameters including figures of merit and RSDs of intra- (≤1.9%) and inter-day (≤6.6%) precision were calculated to validate the accuracy of the proposed method. Furthermore, the authoritative HPLC-FLD method was adopted to verify the qualitative and quantitative results of the proposed method. The comparison between the two methods also showed that the ATLD-EEM method has the advantages of being accurate, rapid, simple and green, and it is expected to be developed as an attractive alternative method for the simultaneous and interference-free determination of rhodamine dyes illegally added into complex matrices. Copyright © 2018. Published by Elsevier B.V.
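
    ATLD itself uses rank-one updates, but the underlying trilinear model can be illustrated with a plain alternating-least-squares PARAFAC fit on a synthetic EEM stack (samples x excitation x emission); all profiles below are synthetic stand-ins, not real spectra.

```python
import numpy as np

def khatri_rao(U, V):
    # Column-wise Kronecker product; rows indexed by (u, v) pairs.
    return (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])

def trilinear_als(T, rank, n_iter=300, seed=0):
    # Alternating least squares for T[i,j,k] ~ sum_r A[i,r]*B[j,r]*C[k,r]
    # (a plain PARAFAC fit standing in for ATLD).
    I, J, K = T.shape
    rng = np.random.default_rng(seed)
    A, B, C = (rng.normal(size=(n, rank)) for n in (I, J, K))
    for _ in range(n_iter):
        A = np.linalg.lstsq(khatri_rao(B, C), T.reshape(I, -1).T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C),
                            T.transpose(1, 0, 2).reshape(J, -1).T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B),
                            T.transpose(2, 0, 1).reshape(K, -1).T, rcond=None)[0].T
    return A, B, C

# Synthetic EEM stack: 6 samples x 20 excitation x 25 emission wavelengths,
# two fluorescent components with Gaussian band profiles plus a little noise.
rng = np.random.default_rng(5)
conc = rng.uniform(0.5, 2.0, size=(6, 2))
ex = np.exp(-0.5 * ((np.arange(20)[:, None] - np.array([5, 14])) / 2.0) ** 2)
em = np.exp(-0.5 * ((np.arange(25)[:, None] - np.array([8, 18])) / 2.5) ** 2)
T = np.einsum('ir,jr,kr->ijk', conc, ex, em) + 0.01 * rng.normal(size=(6, 20, 25))

A, B, C = trilinear_als(T, rank=2)
# Recovered sample scores match the true concentrations up to permutation
# and scaling, which correlation is invariant to.
corr = np.abs(np.corrcoef(A.T, conc.T)[:2, 2:])
print(np.round(corr.max(axis=1), 3))
```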

  14. High-chroma visual cryptography using interference color of high-order retarder films

    NASA Astrophysics Data System (ADS)

    Sugawara, Shiori; Harada, Kenji; Sakai, Daisuke

    2015-08-01

    Visual cryptography can be used as a method of sharing a secret image through several encrypted images. Conventional visual cryptography can display only monochrome images. We have developed a high-chroma color visual encryption technique using the interference color of high-order retarder films. The encrypted films are composed of a polarizing film and retarder films. The retarder films exhibit interference color when they are sandwiched between two polarizing films. We propose a stacking technique for displaying high-chroma interference color images. A prototype visual cryptography device using high-chroma interference color is developed.

  15. A wall interference assessment/correction system

    NASA Technical Reports Server (NTRS)

    Lo, Ching F.; Overby, Glenn; Qian, Cathy X.; Sickles, W. L.; Ulbrich, N.

    1992-01-01

    A Wall Signature method originally developed by Hackett has been selected to be adapted for the Ames 12-ft Wind Tunnel WIAC system in the project. This method uses limited measurements of the static pressure at the wall, in conjunction with the solid wall boundary condition, to determine the strength and distribution of singularities representing the test article. The singularities are used in turn for estimating blockage wall interference. The lifting interference will be treated separately by representing the model's lifting effects with a horseshoe vortex system. The development and implementation of a working prototype will be completed, delivered and documented with a software manual. The WIAC code will be validated by conducting numerically simulated experiments rather than actual wind tunnel experiments. The simulations will be used to generate both free-air and confined wind-tunnel flow fields for each of the test articles over a range of test configurations. Specifically, the pressure signature at the test section wall will be computed for the tunnel case to provide the simulated 'measured' data. These data will serve as the input for the WIAC method--the Wall Signature method. The performance of the WIAC method may then be evaluated by comparing the corrected data with those of the free-air simulation.

  16. Amplification of interference color by using liquid crystal for protein detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Qingdi; Yang, Kun-Lin, E-mail: cheyk@nus.edu.sg

    Micrometer-sized, periodic protein lines printed on a solid surface produce an interference color that is invisible to the naked eye. However, the interference color can be amplified by using a thin layer of liquid crystal (LC) coated on the surface to form a phase diffraction grating. Strong interference color can thus be observed under ambient light. By using the LC-amplified interference color, we demonstrate naked-eye detection of a model protein, immunoglobulin G (IgG). The limit of detection can reach 20 μg/ml of IgG without using any instrumentation. This detection method is potentially useful for the development of low-cost and portable biosensors.

  17. A Neural Network-Based Gait Phase Classification Method Using Sensors Equipped on Lower Limb Exoskeleton Robots

    PubMed Central

    Jung, Jun-Young; Heo, Wonho; Yang, Hyundae; Park, Hyunsub

    2015-01-01

    An exact classification of different gait phases is essential to enable the control of exoskeleton robots and detect the intentions of users. We propose a gait phase classification method based on neural networks using sensor signals from lower limb exoskeleton robots. In such robots, foot sensors with force sensing resistors are commonly used to classify gait phases. We describe classifiers that use the orientation of each lower limb segment and the angular velocities of the joints to output the current gait phase. Experiments to obtain the input signals and desired outputs for the learning and validation process are conducted, and two neural network methods (a multilayer perceptron and nonlinear autoregressive with external inputs (NARX)) are used to develop an optimal classifier. Offline and online evaluations using four criteria are used to compare the performance of the classifiers. The proposed NARX-based method exhibits sufficiently good performance to replace foot sensors as a means of classifying gait phases. PMID:26528986

  18. A Neural Network-Based Gait Phase Classification Method Using Sensors Equipped on Lower Limb Exoskeleton Robots.

    PubMed

    Jung, Jun-Young; Heo, Wonho; Yang, Hyundae; Park, Hyunsub

    2015-10-30

    An exact classification of different gait phases is essential to enable the control of exoskeleton robots and detect the intentions of users. We propose a gait phase classification method based on neural networks using sensor signals from lower limb exoskeleton robots. In such robots, foot sensors with force sensing resistors are commonly used to classify gait phases. We describe classifiers that use the orientation of each lower limb segment and the angular velocities of the joints to output the current gait phase. Experiments to obtain the input signals and desired outputs for the learning and validation process are conducted, and two neural network methods (a multilayer perceptron and nonlinear autoregressive with external inputs (NARX)) are used to develop an optimal classifier. Offline and online evaluations using four criteria are used to compare the performance of the classifiers. The proposed NARX-based method exhibits sufficiently good performance to replace foot sensors as a means of classifying gait phases.
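
    A toy version of such a gait-phase classifier can be sketched with scikit-learn's multilayer perceptron on synthetic orientation and angular-velocity features; the four phases are given artificially separated means, so nothing below reproduces the paper's data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(6)
# Synthetic per-sample features: three limb-segment orientations and three
# joint angular velocities; four gait phases with artificially shifted means.
n_per = 200
phases = np.repeat(np.arange(4), n_per)
X = rng.normal(size=(4 * n_per, 6)) + 1.5 * phases[:, None]

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X[::2], phases[::2])                          # train on even samples
print(round(clf.score(X[1::2], phases[1::2]), 2))     # held-out accuracy
```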

  19. BIOTIN INTERFERENCE WITH ROUTINE CLINICAL IMMUNOASSAYS: UNDERSTAND THE CAUSES AND MITIGATE THE RISKS.

    PubMed

    Samarasinghe, Shanika; Meah, Farah; Singh, Vinita; Basit, Arshi; Emanuele, Nicholas; Emanuele, Mary Ann; Mazhari, Alaleh; Holmes, Earle W

    2017-08-01

    The objectives of this report are to review the mechanisms of biotin interference with streptavidin/biotin-based immunoassays, identify automated immunoassay systems vulnerable to biotin interference, describe how to estimate and minimize the risk of biotin interference in vulnerable assays, and review the literature pertaining to biotin interference in endocrine function tests. The data in the manufacturer's "Instructions for Use" for each of the methods utilized by seven immunoassay systems were evaluated. We also conducted a systematic search of PubMed/MEDLINE for articles containing terms associated with biotin interference. Available original reports and case series were reviewed. Abstracts from recent scientific meetings were also identified and reviewed. The recent, marked increase in the use of over-the-counter, high-dose biotin supplements has been accompanied by a steady increase in the number of reports of analytical interference by exogenous biotin in the immunoassays used to evaluate endocrine function. Since immunoassay methods of similar design are also used for the diagnosis and management of anemia, malignancies, autoimmune and infectious diseases, cardiac damage, etc., biotin-related analytical interference is a problem that touches every area of internal medicine. It is important for healthcare personnel to become more aware of immunoassay methods that are vulnerable to biotin interference and to consider biotin supplements as potential sources of falsely increased or decreased test results, especially in cases where a lab result does not correlate with the clinical scenario. Abbreviations: FDA = U.S. Food & Drug Administration; FT3 = free tri-iodothyronine; FT4 = free thyroxine; IFUs = instructions for use; LH = luteinizing hormone; PTH = parathyroid hormone; SA/B = streptavidin/biotin; TFT = thyroid function test; TSH = thyroid-stimulating hormone.

  20. An Efficient Optimization Method for Solving Unsupervised Data Classification Problems.

    PubMed

    Shabanzadeh, Parvaneh; Yusof, Rubiyah

    2015-01-01

    Unsupervised data classification (or clustering) analysis is one of the most useful tools and a descriptive task in data mining that seeks to group homogeneous objects based on similarity; it is used in many medical disciplines and various applications. In general, there is no single algorithm that is suitable for all types of data, conditions, and applications. Each algorithm has its own advantages, limitations, and deficiencies. Hence, research for novel and effective approaches for unsupervised data classification is still active. In this paper a heuristic algorithm, Biogeography-Based Optimization (BBO), which is inspired by the natural biogeographic distribution of different species, was adapted for data clustering problems by modifying its main operators. Similar to other population-based algorithms, BBO starts with an initial population of candidate solutions to an optimization problem and an objective function that is calculated for them. To evaluate the performance of the proposed algorithm, an assessment was carried out on six medical and real-life datasets and the results were compared with eight well-known and recent unsupervised data classification algorithms. Numerical results demonstrate that the proposed evolutionary optimization algorithm is efficient for unsupervised data classification.
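
    A much-simplified BBO-style clusterer can be sketched as follows: each "habitat" is a candidate set of centroids, fitter habitats emigrate coordinates to less fit ones, and a small mutation keeps diversity. The migration and mutation rates and the elitism scheme below are simplifications relative to full BBO, and all values are illustrative.

```python
import numpy as np

def bbo_cluster(X, k=2, pop=20, gens=60, seed=0):
    # Each habitat is a candidate set of k centroids; fitness is the
    # within-cluster sum of squared errors (lower is better).
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    habitats = X[rng.integers(len(X), size=(pop, k))]     # init from data points

    def sse(cents):
        dist = ((X[:, None, :] - cents[None, :, :]) ** 2).sum(axis=-1)
        return dist.min(axis=1).sum()

    for _ in range(gens):
        fit = np.array([sse(h) for h in habitats])
        habitats = habitats[np.argsort(fit)]              # best habitat first
        mu = np.linspace(1.0, 0.0, pop)                   # emigration rates
        lam = 1.0 - mu                                    # immigration rates
        for i in range(1, pop):                           # habitat 0 is elite
            for j in range(k * d):
                if rng.random() < lam[i]:                 # immigrate a feature
                    src = rng.choice(pop, p=mu / mu.sum())
                    habitats[i].flat[j] = habitats[src].flat[j]
                if rng.random() < 0.02:                   # small mutation
                    habitats[i].flat[j] += rng.normal(scale=0.5)
    fit = np.array([sse(h) for h in habitats])
    return habitats[np.argmin(fit)]

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(3.0, 0.3, (50, 2))])
centers = bbo_cluster(X)
print(np.round(np.sort(centers[:, 0]), 1))   # one centroid near each cluster
```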

  1. Sodium interference in the determination of urinary aldosterone.

    PubMed

    Aldea, Marta Lucía; Barallat, Jaume; Martín, María Amparo; Rosas, Irene; Pastor, María Cruz; Granada, María Luisa

    2016-02-01

    Primary hyperaldosteronism (PHA) is one of the most common endocrine forms of secondary hypertension. Among the most used confirmatory tests for PHA is urinary aldosterone determination after oral sodium loading test. The primary aim of our study was to investigate if sodium concentrations interfere with urinary aldosterone in an automated competitive immunoassay (Liaison®) as well as to verify the manufacturer's specifications. 24-hr urine samples were collected and stored frozen until assayed. Two pools at low and high aldosterone concentrations were prepared. Verification of performance for precision was tested according to Clinical and Laboratory Standards Institute (CLSI) document EP15-A2 and interference with increasing concentrations of NaCl according to CLSI EP7-A2. The assay met the quality specifications according to optimal biological variation. Our results show that sodium concentrations up to 200 mmol/L do not interfere on urinary aldosterone quantification, but sodium concentrations above 486 mmol/L negatively interfere with the test. The Liaison® automated method is useful for aldosterone determination in the PHA confirmatory test, but interferences with NaCl may occur. It is therefore recommended to determine urinary NaCl before measuring urinary aldosterone to avoid falsely low results. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  2. Image patch-based method for automated classification and detection of focal liver lesions on CT

    NASA Astrophysics Data System (ADS)

    Safdari, Mustafa; Pasari, Raghav; Rubin, Daniel; Greenspan, Hayit

    2013-03-01

    We developed a method for automated classification and detection of liver lesions in CT images based on image patch representation and bag-of-visual-words (BoVW). BoVW analysis has been extensively used in the computer vision domain to analyze scenery images. In the current work we discuss how it can be used for liver lesion classification and detection. The methodology includes building a dictionary for a training set using local descriptors and representing a region in the image using a visual word histogram. Two tasks are described: a classification task, for lesion characterization, and a detection task in which a scan window moves across the image and is determined to be normal liver tissue or a lesion. Data: In the classification task 73 CT images of liver lesions were used, 25 images having cysts, 24 having metastasis and 24 having hemangiomas. A radiologist circumscribed the lesions, creating a region of interest (ROI), in each of the images. He then provided the diagnosis, which was established either by biopsy or clinical follow-up. Thus our data set comprises 73 images and 73 ROIs. In the detection task, a radiologist drew ROIs around each liver lesion and two regions of normal liver, for a total of 159 liver lesion ROIs and 146 normal liver ROIs. The radiologist also demarcated the liver boundary. Results: Classification results of more than 95% were obtained. In the detection task, the F1 score obtained is 0.76. Recall is 84%, with precision of 73%. Results show the ability to detect lesions, regardless of shape.
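
    The BoVW pipeline in the abstract, local descriptors, a K-means visual dictionary, histogram encoding, and a classifier, can be sketched end to end on synthetic textures. Raw pixel patches replace the paper's descriptors, and all images below are random stand-ins.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(8)

def patch_descriptors(image, size=5):
    # Non-overlapping raw-pixel patches as local descriptors.
    return np.array([image[i:i + size, j:j + size].ravel()
                     for i in range(0, image.shape[0] - size + 1, size)
                     for j in range(0, image.shape[1] - size + 1, size)])

# Synthetic "normal" vs "lesion" textures distinguished by noise scale.
images = [rng.normal(0.0, s, (30, 30)) for s in [1.0] * 10 + [3.0] * 10]
labels = np.array([0] * 10 + [1] * 10)

# Build the visual dictionary from all training patches.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0)
kmeans.fit(np.vstack([patch_descriptors(im) for im in images]))

def bovw_hist(image):
    # Encode an image as a normalized histogram of visual words.
    words = kmeans.predict(patch_descriptors(image))
    return np.bincount(words, minlength=8) / len(words)

X = np.array([bovw_hist(im) for im in images])
clf = SVC(kernel='linear').fit(X[::2], labels[::2])
print(clf.score(X[1::2], labels[1::2]))      # held-out accuracy
```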

  3. Method and System for Controlling a Dexterous Robot Execution Sequence Using State Classification

    NASA Technical Reports Server (NTRS)

    Sanders, Adam M. (Inventor); Quillin, Nathaniel (Inventor); Platt, Robert J., Jr. (Inventor); Pfeiffer, Joseph (Inventor); Permenter, Frank Noble (Inventor)

    2014-01-01

    A robotic system includes a dexterous robot and a controller. The robot includes a plurality of robotic joints, actuators for moving the joints, and sensors for measuring characteristics of the joints and transmitting those characteristics as sensor signals. The controller receives the sensor signals, and is configured for executing instructions from memory, classifying the sensor signals into distinct classes via the state classification module, monitoring a system state of the robot using the classes, and controlling the robot in the execution of alternative work tasks based on the system state. A method for controlling the robot in the above system includes receiving the signals via the controller, classifying the signals using the state classification module, monitoring the present system state of the robot using the classes, and controlling the robot in the execution of alternative work tasks based on the present system state.

  4. Interference tables: a useful model for interference analysis in asynchronous multicarrier transmission

    NASA Astrophysics Data System (ADS)

    Medjahdi, Yahia; Terré, Michel; Ruyet, Didier Le; Roviras, Daniel

    2014-12-01

    In this paper, we investigate the impact of timing asynchronism on the performance of multicarrier techniques in a spectrum coexistence context. Two multicarrier schemes are considered: cyclic prefix-based orthogonal frequency division multiplexing (CP-OFDM) with a rectangular pulse shape and filter bank-based multicarrier (FBMC) with physical layer for dynamic spectrum access and cognitive radio (PHYDYAS) and isotropic orthogonal transform algorithm (IOTA) waveforms. First, we present the general concept of the so-called power spectral density (PSD)-based interference tables, which are commonly used for multicarrier interference characterization in a spectrum sharing context. After highlighting the limits of this approach, we propose a new family of interference tables called `instantaneous interference tables'. The proposed tables give the interference power caused by a given interfering subcarrier on a victim one, not only as a function of the spectral distance separating both subcarriers but also with respect to the timing misalignment between the subcarrier holders. In contrast to the PSD-based interference tables, the accuracy of the proposed tables has been validated through different simulation results. Furthermore, due to the better frequency localization of both PHYDYAS and IOTA waveforms, the FBMC technique is demonstrated to be more robust to timing asynchronism than OFDM. Such a result makes FBMC a potential candidate for the physical layer of future cognitive radio systems.
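    The PSD-based table idea can be sketched numerically: for a rectangular (CP-OFDM-like) pulse, the fraction of an interferer's power leaking into a victim subcarrier at spectral distance l is the integral of the normalised sinc² PSD over the victim's band. The sketch below normalises the subcarrier spacing to 1 and ignores the cyclic prefix and any windowing, which a real interference table would account for.

```python
import numpy as np

def psd_interference_table(max_distance=8, samples=10001):
    """PSD-based interference table for a rectangular pulse: fraction of an
    interferer's power falling into the band of a subcarrier at spectral
    distance l (subcarrier spacing normalised to 1)."""
    table = {}
    for l in range(max_distance + 1):
        f = np.linspace(l - 0.5, l + 0.5, samples)
        psd = np.sinc(f) ** 2      # np.sinc(x) = sin(pi x) / (pi x)
        table[l] = psd.mean()      # Riemann approximation over a unit-width band
    return table

tbl = psd_interference_table()
# l = 0 is the interferer's own band; leaked power decays with spectral distance.
print(round(tbl[0], 3), round(tbl[1], 3))
```

    The paper's instantaneous tables extend this one-dimensional lookup with a second axis for the timing misalignment between the two subcarrier holders.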

  5. Site classification for National Strong Motion Observation Network System (NSMONS) stations in China using an empirical H/V spectral ratio method

    NASA Astrophysics Data System (ADS)

    Ji, Kun; Ren, Yefei; Wen, Ruizhi

    2017-10-01

    Reliable site classes have not yet been assigned to the stations of the China National Strong Motion Observation Network System (NSMONS) because of a lack of borehole data. This study used an empirical horizontal-to-vertical (H/V) spectral ratio (hereafter, HVSR) site classification method to overcome this problem. First, according to their borehole data, stations selected from KiK-net in Japan were individually assigned a site class (CL-I, CL-II, or CL-III), which is defined in the Chinese seismic code. Then, the mean HVSR curve for each site class was computed using strong motion recordings captured during the period 1996-2012. These curves were compared with those proposed by Zhao et al. (2006a) for four types of site classes (SC-I, SC-II, SC-III, and SC-IV) defined in the Japanese seismic code (JRA, 1980). It was found that an approximate range of the predominant period Tg could be identified by the predominant peak of the HVSR curve for the CL-I and SC-I sites, CL-II and SC-II sites, and CL-III and SC-III + SC-IV sites. Second, an empirical site classification method was proposed based on comprehensive consideration of the peak period, amplitude, and shape of the HVSR curve. The selected stations from KiK-net were classified using the proposed method. The results showed that the success rates of the proposed method in identifying CL-I, CL-II, and CL-III sites were 63%, 64%, and 58% respectively. Finally, the HVSRs of 178 NSMONS stations were computed based on recordings from 2007 to 2015 and the sites classified using the proposed method. The mean HVSR curves were re-calculated for three site classes and compared with those from KiK-net data. It was found that both the peak period and the amplitude were similar for the mean HVSR curves derived from NSMONS classification results and KiK-net borehole data, implying the effectiveness of the proposed method in identifying different site classes. The classification results have good agreement with site classes
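    A minimal sketch of the HVSR computation on a synthetic 3-component record follows. The geometric mean of the two horizontal spectra and the simple peak-picking of the predominant period are common choices but assumptions here, not the paper's exact recipe (which also weighs curve amplitude and shape).

```python
import numpy as np

def hvsr(ns, ew, ud, fs):
    """Horizontal-to-vertical spectral ratio: geometric mean of the two
    horizontal amplitude spectra divided by the vertical amplitude spectrum."""
    freqs = np.fft.rfftfreq(len(ud), d=1.0 / fs)
    h = np.sqrt(np.abs(np.fft.rfft(ns)) * np.abs(np.fft.rfft(ew)))
    v = np.abs(np.fft.rfft(ud))
    mask = freqs > 0.1                    # ignore the DC / very-low-frequency end
    return freqs[mask], h[mask] / np.maximum(v[mask], 1e-12)

def predominant_period(freqs, ratio):
    """Period of the HVSR curve's predominant peak: Tg = 1 / f_peak."""
    return 1.0 / freqs[np.argmax(ratio)]

# Synthetic record: horizontals resonate at 2 Hz, vertical is broadband noise.
fs = 100.0
t = np.arange(0, 20, 1.0 / fs)
rng = np.random.default_rng(1)
ns = np.sin(2 * np.pi * 2.0 * t) + 0.1 * rng.standard_normal(t.size)
ew = np.cos(2 * np.pi * 2.0 * t) + 0.1 * rng.standard_normal(t.size)
ud = 0.5 * rng.standard_normal(t.size)
f, r = hvsr(ns, ew, ud, fs)
print(round(predominant_period(f, r), 2))  # ~0.5 s, i.e. a peak near 2 Hz
```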

  6. Sleep can reduce proactive interference.

    PubMed

    Abel, Magdalena; Bäuml, Karl-Heinz T

    2014-01-01

    Sleep has repeatedly been connected to processes of memory consolidation. While extensive research indeed documents beneficial effects of sleep on memory, little is yet known about the role of sleep for interference effects in episodic memory. Although two prior studies reported sleep to reduce retroactive interference, no sleep effect has previously been found for proactive interference. Here we applied a study format differing from that employed by the prior studies to induce a high degree of proactive interference, and asked participants to encode a single list or two interfering lists of paired associates via pure study cycles. Testing occurred after 12 hours of diurnal wakefulness or nocturnal sleep. Consistent with the prior work, we found that sleep, in comparison to wakefulness, did not affect memory for the single list but reduced retroactive interference. In addition, we found that sleep reduced proactive interference, and that it reduced retroactive and proactive interference to the same extent. The finding is consistent with the view that the benefits of sleep arise from the reactivation of memory contents during sleep, which has been suggested to strengthen and stabilise memories. Such stabilisation may make memories less susceptible to competition from interfering memories at test and thus reduce interference effects.

  7. 2,4-Dinitrophenylhydrazine-coated silica gel cartridge method for determination of formaldehyde in air: Identification of an ozone interference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnts, R.R.; Tejada, S.B.

    1989-01-01

    Two versions of the 2,4-dinitrophenylhydrazine method, a coated silica gel cartridge (solid) and acetonitrile impinger (solvent based), were used simultaneously to sample varied concentrations of ozone (0-770 ppb) and formaldehyde (20-140 ppb). Ozone was found to be a negative interference in the determination of formaldehyde by the 2,4-dinitrophenylhydrazine-coated silica gel cartridge method. At 120 ppb of ozone, formaldehyde at 40 ppb was under-reported by the cartridge method by 34% and at 300 ppb of ozone, formaldehyde measurements were 61% low. Greater losses were seen at higher ozone concentrations. Impinger sampling (2,4-DNPH in acetonitrile) showed no formaldehyde losses due to ozone.

  8. Multi-stage classification method oriented to aerial image based on low-rank recovery and multi-feature fusion sparse representation.

    PubMed

    Ma, Xu; Cheng, Yongmei; Hao, Shuai

    2016-12-10

    Automatic classification of terrain surfaces from an aerial image is essential for an autonomous unmanned aerial vehicle (UAV) landing at an unprepared site by using vision. Diverse terrain surfaces may show similar spectral properties due to the illumination and noise that easily cause poor classification performance. To address this issue, a multi-stage classification algorithm based on low-rank recovery and multi-feature fusion sparse representation is proposed. First, color moments and Gabor texture feature are extracted from training data and stacked as column vectors of a dictionary. Then we perform low-rank matrix recovery for the dictionary by using augmented Lagrange multipliers and construct a multi-stage terrain classifier. Experimental results on an aerial map database that we prepared verify the classification accuracy and robustness of the proposed method.

  9. Prostate segmentation by sparse representation based classification

    PubMed Central

    Gao, Yaozong; Liao, Shu; Shen, Dinggang

    2012-01-01

    Purpose: The segmentation of prostate in CT images is of essential importance to external beam radiotherapy, which is one of the major treatments for prostate cancer nowadays. During the radiotherapy, the prostate is radiated by high-energy x rays from different directions. In order to maximize the dose to the cancer and minimize the dose to the surrounding healthy tissues (e.g., bladder and rectum), the prostate in the new treatment image needs to be accurately localized. Therefore, the effectiveness and efficiency of external beam radiotherapy highly depend on the accurate localization of the prostate. However, due to the low contrast of the prostate with its surrounding tissues (e.g., bladder), the unpredicted prostate motion, and the large appearance variations across different treatment days, it is challenging to segment the prostate in CT images. In this paper, the authors present a novel classification based segmentation method to address these problems. Methods: To segment the prostate, the proposed method first uses sparse representation based classification (SRC) to enhance the prostate in CT images by pixel-wise classification, in order to overcome the limitation of poor contrast of the prostate images. Then, based on the classification results, previous segmented prostates of the same patient are used as patient-specific atlases to align onto the current treatment image and the majority voting strategy is finally adopted to segment the prostate. In order to address the limitations of the traditional SRC in pixel-wise classification, especially for the purpose of segmentation, the authors extend SRC from the following four aspects: (1) A discriminant subdictionary learning method is proposed to learn a discriminant and compact representation of training samples for each class so that the discriminant power of SRC can be increased and also SRC can be applied to the large-scale pixel-wise classification. 
(2) The L1 regularized sparse coding is replaced by
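    The core SRC decision rule underlying the method above can be sketched as follows: code the test sample over each class sub-dictionary with an L1-regularised fit and assign the class whose reconstruction residual is smallest. The two-class synthetic dictionaries and the Lasso penalty value are illustrative assumptions, not the authors' discriminant subdictionary learning.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(x, dictionaries, alpha=0.01):
    """Sparse representation based classification (SRC), minimal sketch:
    L1-regularised coding per class dictionary, minimum-residual decision."""
    residuals = []
    for D in dictionaries:                 # D: (n_features, n_atoms) per class
        coder = Lasso(alpha=alpha, max_iter=5000, fit_intercept=False)
        coder.fit(D, x)
        residuals.append(np.linalg.norm(x - D @ coder.coef_))
    return int(np.argmin(residuals))

rng = np.random.default_rng(0)
base0, base1 = rng.standard_normal(20), rng.standard_normal(20)
D0 = np.column_stack([base0 + 0.05 * rng.standard_normal(20) for _ in range(5)])
D1 = np.column_stack([base1 + 0.05 * rng.standard_normal(20) for _ in range(5)])
x = base1 + 0.05 * rng.standard_normal(20)   # noisy sample from class 1
print(src_classify(x, [D0, D1]))  # 1
```

    In the paper this pixel-wise decision is computed densely over the CT image to produce a prostate-likelihood map before atlas alignment and majority voting.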

  10. Radio frequency interference mitigation using deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Akeret, J.; Chang, C.; Lucchi, A.; Refregier, A.

    2017-01-01

    We propose a novel approach for mitigating radio frequency interference (RFI) signals in radio data using the latest advances in deep learning. We employ a special type of Convolutional Neural Network, the U-Net, that enables the classification of clean signal and RFI signatures in 2D time-ordered data acquired from a radio telescope. We train and assess the performance of this network using the HIDE & SEEK radio data simulation and processing packages, as well as early Science Verification data acquired with the 7m single-dish telescope at the Bleien Observatory. We find that our U-Net implementation shows accuracy competitive with classical RFI mitigation algorithms such as SEEK's SUMTHRESHOLD implementation. We publish our U-Net software package on GitHub under the GPLv3 license.
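    The U-Net itself requires a deep-learning framework, but the classical baseline it is compared against can be illustrated self-containedly. Below is a simplified, 1D SumThreshold-style flagging pass (the real SEEK/AOFlagger algorithm iterates over 2D time-frequency data with a more careful threshold schedule); window sizes and thresholds here are illustrative.

```python
import numpy as np

def sumthreshold_1d(data, flags, window, chi):
    """One SumThreshold pass: flag runs of `window` samples whose sum exceeds
    window * chi. Already-flagged samples are replaced by chi so they neither
    help nor hurt the sum."""
    values = np.where(flags, chi, data)
    out = flags.copy()
    for i in range(len(data) - window + 1):
        if values[i:i + window].sum() > window * chi:
            out[i:i + window] = True
    return out

rng = np.random.default_rng(0)
signal = rng.normal(0.0, 1.0, 256)
signal[100:108] += 20.0                     # strong RFI burst on a noise floor
flags = np.zeros(256, dtype=bool)
for w, factor in [(1, 6.0), (2, 4.0), (4, 3.0)]:   # lower threshold, wider window
    flags = sumthreshold_1d(signal, flags, w, factor)
print(flags[100:108].all(), flags[:50].any())
```

    Growing the window while lowering the threshold is what lets SumThreshold catch both strong narrow spikes and weaker extended interference.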

  11. Research on Classification of Chinese Text Data Based on SVM

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Yu, Hongzhi; Wan, Fucheng; Xu, Tao

    2017-09-01

    Data mining has important application value in today's industry and academia, and text classification is a very important technology within it. At present, there are many mature algorithms for text classification: KNN, NB, AB, SVM, decision trees and other classification methods all show good classification performance. The Support Vector Machine (SVM) is a well-established classifier in machine learning research. This paper studies the classification performance of the SVM method on Chinese text data, applying support vector machines to classify Chinese text and aiming to connect academic research with practical application.
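    A minimal SVM text-classification pipeline in the spirit of the paper might look as follows. Character n-gram features are an assumption here (they sidestep the word segmentation that Chinese text otherwise requires), and the four-document corpus is purely illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny illustrative corpus: two finance and two sports snippets.
train_texts = ["股票 市场 上涨", "基金 收益 下跌", "球队 赢得 比赛", "球员 进球 夺冠"]
train_labels = ["finance", "finance", "sports", "sports"]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(1, 2)),  # char n-grams, no segmentation
    LinearSVC(C=1.0),
)
clf.fit(train_texts, train_labels)
print(clf.predict(["市场 基金 上涨"])[0])  # finance
```

    In practice the vectorizer would be fit on a large segmented or character-level corpus, and C tuned by cross-validation.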

  12. A novel collaborative representation and SCAD based classification method for fibrosis and inflammatory activity analysis of chronic hepatitis C

    NASA Astrophysics Data System (ADS)

    Cai, Jiaxin; Chen, Tingting; Li, Yan; Zhu, Nenghui; Qiu, Xuan

    2018-03-01

    In order to analyze the fibrosis stage and inflammatory activity grade of chronic hepatitis C, a novel classification method based on collaborative representation (CR) with a smoothly clipped absolute deviation (SCAD) penalty term, called the CR-SCAD classifier, is proposed for pattern recognition. Then, an auto-grading system based on the CR-SCAD classifier is introduced for prediction of the fibrosis stage and inflammatory activity grade of chronic hepatitis C. The proposed method has been tested on 123 clinical cases of chronic hepatitis C based on serological indexes. Experimental results show that the proposed method outperforms state-of-the-art baselines for the classification of fibrosis stage and inflammatory activity grade of chronic hepatitis C.
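    The collaborative-representation decision rule can be sketched as below. Note the hedge: this sketch uses a plain ridge (l2) penalty, which has a closed-form solution, in place of the paper's SCAD penalty; the synthetic two-class data and λ value are illustrative.

```python
import numpy as np

def crc_classify(x, D, labels, lam=0.1):
    """Collaborative representation classification with a ridge penalty
    standing in for SCAD: code the sample over the whole dictionary in
    closed form, then score each class by its partial reconstruction residual."""
    n_atoms = D.shape[1]
    w = np.linalg.solve(D.T @ D + lam * np.eye(n_atoms), D.T @ x)
    classes = sorted(set(labels))
    residuals = []
    for c in classes:
        idx = [i for i, l in enumerate(labels) if l == c]
        residuals.append(np.linalg.norm(x - D[:, idx] @ w[idx]))
    return classes[int(np.argmin(residuals))]

rng = np.random.default_rng(2)
proto = {0: rng.standard_normal(10), 1: rng.standard_normal(10)}
cols, labels = [], []
for c in (0, 1):
    for _ in range(4):
        cols.append(proto[c] + 0.05 * rng.standard_normal(10))
        labels.append(c)
D = np.column_stack(cols)
x = proto[0] + 0.05 * rng.standard_normal(10)   # noisy sample from class 0
print(crc_classify(x, D, labels))  # 0
```

    SCAD replaces the l2 term with a non-convex penalty that shrinks small coefficients like the lasso while leaving large ones nearly unbiased, at the cost of an iterative solver.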

  13. Polarization ratio property and material classification method in passive millimeter wave polarimetric imaging

    NASA Astrophysics Data System (ADS)

    Cheng, Yayun; Qi, Bo; Liu, Siyuan; Hu, Fei; Gui, Liangqi; Peng, Xiaohui

    2016-10-01

    Polarimetric measurements can provide additional information compared to unpolarized ones. In this paper, the linear polarization ratio (LPR) is introduced as a feature discriminator. The LPR properties of several materials are investigated using Fresnel theory. The theoretical results show that LPR is sensitive to the material type (metal or dielectric). Then a linear polarization ratio-based (LPR-based) method is presented to distinguish between metal and dielectric materials. In order to apply this method in practice, the optimal range of incidence angles has been discussed. Typical outdoor experiments including various objects, such as an aluminum plate, grass, concrete, soil and wood, have been conducted to validate the presented classification method.
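    The Fresnel-theory argument can be illustrated numerically. The definition of LPR below (ratio of horizontal to vertical power reflectivity) and the example permittivities are assumptions for illustration; the paper's exact discriminator may be defined from polarimetric brightness temperatures.

```python
import numpy as np

def linear_polarization_ratio(eps_r, theta_deg):
    """Ratio of horizontal (s-pol) to vertical (p-pol) Fresnel power
    reflectivity for a flat surface of relative permittivity eps_r,
    viewed from air at incidence angle theta."""
    th = np.radians(theta_deg)
    root = np.sqrt(eps_r - np.sin(th) ** 2 + 0j)
    r_h = (np.cos(th) - root) / (np.cos(th) + root)                   # s-pol
    r_v = (eps_r * np.cos(th) - root) / (eps_r * np.cos(th) + root)   # p-pol
    return abs(r_h) ** 2 / abs(r_v) ** 2

# A metal behaves like |eps_r| -> large: both reflectivities -> 1, so LPR -> 1.
# A lossless dielectric (eps_r ~ 4, concrete-like) has LPR >> 1 at oblique
# incidence, because R_v dips toward the Brewster angle while R_h does not.
lpr_metal = linear_polarization_ratio(1e6, 50.0)
lpr_diel = linear_polarization_ratio(4.0, 50.0)
print(round(lpr_metal, 3), round(lpr_diel, 3))
```

    This gap between LPR ≈ 1 (metal) and LPR ≫ 1 (dielectric) at oblique angles is what makes the ratio usable as a metal/dielectric discriminator.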

  14. Delaying Interference Training Has Equivalent Effects in Various Pavlovian Interference Paradigms

    ERIC Educational Resources Information Center

    Powell, Elizabeth J.; Escobar, Martha; Kimble, Whitney

    2013-01-01

    Spontaneous recovery in extinction appears to be inversely related to the acquisition-to-extinction interval, but it remains unclear why this is the case. Rat subjects trained with one of three interference paradigms exhibited less spontaneous recovery of the original response after delayed than immediate interference, regardless of whether…

  15. An information-based network approach for protein classification

    PubMed Central

    Wan, Xiaogeng; Zhao, Xin; Yau, Stephen S. T.

    2017-01-01

    Protein classification is one of the critical problems in bioinformatics. Early studies used geometric distances and phylogenetic trees to classify proteins. These methods use binary trees to represent protein classification. In this paper, we propose a new protein classification method, whereby theories of information and networks are used to classify the multivariate relationships of proteins. In this study, the protein universe is modeled as an undirected network, where proteins are classified according to their connections. Our method is unsupervised, multivariate, and alignment-free. It can be applied to the classification of both protein sequences and structures. Nine examples are used to demonstrate the efficiency of our new method. PMID:28350835

  16. High-dose biotin therapy leading to false biochemical endocrine profiles: validation of a simple method to overcome biotin interference.

    PubMed

    Piketty, Marie-Liesse; Prie, Dominique; Sedel, Frederic; Bernard, Delphine; Hercend, Claude; Chanson, Philippe; Souberbielle, Jean-Claude

    2017-05-01

    High-dose biotin therapy is beneficial in progressive multiple sclerosis (MS) and is expected to be adopted by a large number of patients. Biotin therapy leads to analytical interference in many immunoassays that utilize streptavidin-biotin capture techniques, yielding skewed results that can mimic various endocrine disorders. We aimed at exploring this interference, to be able to remove biotin and avoid misleading results. We measured free triiodothyronine (fT3), free thyroxine (fT4), thyroid-stimulating hormone (TSH), parathyroid hormone (PTH), 25-hydroxyvitamin D (25OHD), follicle-stimulating hormone (FSH), luteinizing hormone (LH), prolactin, C-peptide, cortisol (Roche Diagnostics assays), and biotin and its main metabolites (liquid chromatography tandem mass spectrometry) in 23 plasma samples from MS patients and healthy volunteers receiving high-dose biotin, and in 39 biotin-unsupplemented patients, before and after a simple procedure (designated N5) designed to remove biotin by means of streptavidin-coated microparticles. We also assayed fT4, TSH and PTH in the 23 high-biotin plasma samples using assays not employing streptavidin-biotin binding. The biotin concentration ranged from 31.7 to 1160 µg/L in the 23 high-biotin plasma samples. After the N5 protocol, the biotin concentration was below the detection limit in all but two samples (8.3 and 27.6 μg/L). Most hormone results were abnormal, but normalized after N5. All results with the alternative methods were normal except two slight PTH elevations. In the 39 biotin-unsupplemented patients, the N5 protocol did not affect the results for any of the hormones, apart from an 8.4% decrease in PTH. We confirm that most streptavidin-biotin hormone immunoassays are affected by high biotin concentrations, leading to a risk of misdiagnosis. Our simple neutralization method efficiently suppresses biotin interference.

  17. Risk-based prioritization method for the classification of groundwater pesticide pollution from agricultural regions.

    PubMed

    Yang, Yu; Lian, Xin-Ying; Jiang, Yong-Hai; Xi, Bei-Dou; He, Xiao-Song

    2017-11-01

    Agricultural regions are a significant source of groundwater pesticide pollution. To ensure that agricultural regions with a significantly high risk of groundwater pesticide contamination are properly managed, a risk-based ranking method related to groundwater pesticide contamination is needed. In the present paper, a risk-based prioritization method for the classification of groundwater pesticide pollution from agricultural regions was established. The method encompasses three phases: indicator selection, characterization, and classification. In the risk ranking index system employed here, 17 indicators involving the physicochemical properties, environmental behavior characteristics, pesticide application methods, and inherent vulnerability of groundwater in the agricultural region were selected. The boundary of each indicator was determined using K-means cluster analysis based on a survey of a typical agricultural region and the physical and chemical properties of 300 typical pesticides. The total risk characterization was calculated by multiplying the risk value of each indicator, which could effectively avoid the subjectivity of index weight calculation and identify the main factors associated with the risk. The results indicated that the risk for groundwater pesticide contamination from agriculture in a region could be ranked into 4 classes from low to high risk. This method was applied to an agricultural region in Jiangsu Province, China, and it showed that this region had a relatively high risk for groundwater contamination from pesticides, and that the pesticide application method was the primary factor contributing to the relatively high risk. The risk ranking method was determined to be feasible, valid, and able to provide reference data related to the risk management of groundwater pesticide pollution from agricultural regions. Integr Environ Assess Manag 2017;13:1052-1059. © 2017 SETAC.
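    The multiplicative risk characterization described above can be sketched in a few lines. The indicator names, the 1–4 scores, and the log-scale class boundaries below are illustrative assumptions, not the paper's K-means-derived calibration.

```python
import math

# Hypothetical per-indicator risk scores (1 = low risk ... 4 = high risk).
indicator_scores = {
    "water_solubility": 3,
    "soil_half_life": 4,
    "application_rate": 2,
    "aquifer_vulnerability": 3,
}

# Multiplying indicator scores avoids choosing subjective index weights,
# and a single high-risk indicator strongly pulls up the total.
total_risk = 1
for score in indicator_scores.values():
    total_risk *= score

def risk_class(total, n_indicators, n_classes=4):
    """Map the product onto n_classes bands on a log scale (scores span 1..4,
    so the product spans 1..4**n_indicators)."""
    if total <= 1:
        return 1
    frac = math.log(total) / (n_indicators * math.log(4))
    return min(n_classes, 1 + int(frac * n_classes))

print(total_risk, risk_class(total_risk, len(indicator_scores)))
```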

  18. SAPT units turn-on in an interference-dominant environment. [Stand Alone Pressure Transducer

    NASA Technical Reports Server (NTRS)

    Peng, W.-C.; Yang, C.-C.; Lichtenberg, C.

    1990-01-01

    A stand alone pressure transducer (SAPT) is a credit-card-sized smart pressure sensor inserted between the tile and the aluminum skin of a space shuttle. Reliably initiating the SAPT units via RF signals in a prelaunch environment is a challenging problem. Multiple-source interference may exist if more than one GSE (ground support equipment) antenna is turned on at the same time to meet the simultaneity requirement of 10 ms. A polygon model for orbiter, external tank, solid rocket booster, and tail service masts is used to simulate the prelaunch environment. Geometric optics is then applied to identify the coverage areas and the areas which are vulnerable to multipath and/or multiple-source interference. Simulation results show that the underside areas of an orbiter have incidence angles exceeding 80 deg. For multipath interference, both sides of the cargo bay areas are found to be vulnerable to a worst-case multipath loss exceeding 20 dB. Multiple-source interference areas are also identified. Mitigation methods for the coverage and interference problem are described. It is shown that multiple-source interference can be eliminated (or controlled) using the time-division-multiplexing method or the time-stamp approach.

  19. SB certification handout material requirements, test methods, responsibilities, and minimum classification levels for mixture-based specification for flexible base.

    DOT National Transportation Integrated Search

    2012-10-01

    A handout with tables representing the material requirements, test methods, responsibilities, and minimum classification levels for the mixture-based specification for flexible base and details on aggregate and test methods employed, along with agency and co...

  20. Water toxicity monitoring using Vibrio fischeri: a method free of interferences from colour and turbidity.

    PubMed

    Faria, Elsa Correia; Treves Brown, Bernard J; Snook, Richard D

    2004-02-01

    In this paper the kinetic method for the determination of toxicity using Vibrio fischeri is described and suggested as a potential method for the continuous screening of wastewater toxicity. The kinetic method was demonstrated to be free from interferences due to colour and turbidity normally observed when testing wastewater samples with this organism. This is of great importance for the application of the method to remote toxicity screening of wastewaters. The effect of colour, investigated using 50 ppm Zn(2+) solutions containing the food-dye tropaeolin O, and the effect of turbidity, investigated using 50 ppm Zn(2+) solutions containing white optically reflective and coloured optically absorbing polystyrene beads, are reported. It was also found that the design of the light detection system of the instrument ensures efficient collection of the light scattered by particles in the sample, which enables a greater range of turbid samples to be tested. In addition the natural light decay was found to be negligible during the duration of a 10 min test and thus one channel would be enough to carry out the tests. This would mean halving the quantity of bacterial reagent used and reducing the cost of the tests.

  1. Sedimentation Velocity Analysis of Large Oligomeric Chromatin Complexes Using Interference Detection.

    PubMed

    Rogge, Ryan A; Hansen, Jeffrey C

    2015-01-01

    Sedimentation velocity experiments measure the transport of molecules in solution under centrifugal force. Here, we describe a method for monitoring the sedimentation of very large biological molecular assemblies using the interference optical systems of the analytical ultracentrifuge. The mass, partial-specific volume, and shape of macromolecules in solution affect their sedimentation rates as reflected in the sedimentation coefficient. The sedimentation coefficient is obtained by measuring the solute concentration as a function of radial distance during centrifugation. Monitoring the concentration can be accomplished using interference optics, absorbance optics, or the fluorescence detection system, each with inherent advantages. The interference optical system captures data much faster than these other optical systems, allowing for sedimentation velocity analysis of extremely large macromolecular complexes that sediment rapidly at very low rotor speeds. Supramolecular oligomeric complexes produced by self-association of 12-mer chromatin fibers are used to illustrate the advantages of the interference optics. Using interference optics, we show that chromatin fibers self-associate at physiological divalent salt concentrations to form structures that sediment between 10,000 and 350,000 S. The method for characterizing chromatin oligomers described in this chapter will be generally useful for characterization of any biological structures that are too large to be studied by the absorbance optical system.
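    The sedimentation coefficient relation underlying such experiments, s = (1/ω²r)·dr/dt, integrates to s = ln(r₂/r₁)/(ω²·Δt) for a boundary moving from radius r₁ to r₂. The sketch below evaluates it in Svedberg units (1 S = 10⁻¹³ s); the rotor speed and radii are illustrative numbers, not data from the chapter.

```python
import math

def sedimentation_coefficient(r1_cm, r2_cm, dt_seconds, rpm):
    """Sedimentation coefficient from the boundary's radial positions at two
    times: s = ln(r2/r1) / (omega^2 * dt), returned in Svedbergs."""
    omega = rpm * 2.0 * math.pi / 60.0          # rotor speed in rad/s
    s = math.log(r2_cm / r1_cm) / (omega ** 2 * dt_seconds)
    return s / 1e-13                            # 1 S = 1e-13 s

# Illustrative: a boundary moving from 6.0 to 6.5 cm in 10 min at 3,000 rpm,
# i.e. a very large assembly sedimenting rapidly at low rotor speed.
s_obs = sedimentation_coefficient(6.0, 6.5, 600.0, 3000.0)
print(round(s_obs, 1))
```

    The result lands in the >10,000 S regime quoted in the abstract, which is why such complexes require low rotor speeds and the fast interference optics.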

  2. Constrained Bayesian Active Learning of Interference Channels in Cognitive Radio Networks

    NASA Astrophysics Data System (ADS)

    Tsakmalis, Anestis; Chatzinotas, Symeon; Ottersten, Bjorn

    2018-02-01

    In this paper, a sequential probing method for interference constraint learning is proposed to allow a centralized Cognitive Radio Network (CRN) accessing the frequency band of a Primary User (PU) in an underlay cognitive scenario with a designed PU protection specification. The main idea is that the CRN probes the PU and subsequently eavesdrops on the reverse PU link to acquire the binary ACK/NACK packet. This feedback indicates whether the probing-induced interference is harmful or not and can be used to learn the PU interference constraint. The cognitive part of this sequential probing process is the selection of the power levels of the Secondary Users (SUs), which aims to learn the PU interference constraint with a minimum number of probing attempts while setting a limit on the number of harmful probing-induced interference events, or equivalently of NACK packet observations, over a time window. This constrained design problem is studied within the Active Learning (AL) framework and an optimal solution is derived and implemented with Expectation Propagation (EP), an accurate and fast Bayesian learning method. The performance of this solution is also demonstrated through numerical simulations and compared with modified versions of AL techniques we developed in earlier work.
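    The probing idea, learning a power limit from binary ACK/NACK feedback under a budget of harmful probes, can be caricatured with plain bisection. This is a hedged sketch of the problem setup only, not the paper's Bayesian/EP learner; the threshold, power range, and budget are made-up numbers.

```python
def learn_power_limit(is_harmful, low, high, nack_budget, tol=1.0):
    """Bisect for the highest SU probing power the PU tolerates, while capping
    the number of harmful probes (eavesdropped NACKs) we may trigger."""
    nacks = 0
    while high - low > tol and nacks < nack_budget:
        probe = (low + high) / 2.0
        if is_harmful(probe):      # NACK observed: interference too high
            nacks += 1
            high = probe
        else:                      # ACK: PU unaffected, push the power higher
            low = probe
    return low, nacks

true_limit = 17.0                  # unknown PU interference threshold (illustrative)
limit, used = learn_power_limit(lambda p: p > true_limit, 0.0, 32.0, nack_budget=3)
print(round(limit, 1), used)  # 16.0 3
```

    The paper's contribution is essentially to replace this naive search with a Bayesian active learner that picks each probe to be maximally informative while keeping the expected NACK count within the constraint.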

  3. CrossLink: a novel method for cross-condition classification of cancer subtypes.

    PubMed

    Ma, Chifeng; Sastry, Konduru S; Flore, Mario; Gehani, Salah; Al-Bozom, Issam; Feng, Yusheng; Serpedin, Erchin; Chouchane, Lotfi; Chen, Yidong; Huang, Yufei

    2016-08-22

    We considered the prediction of cancer classes (e.g. subtypes) using patient gene expression profiles that contain both systematic and condition-specific biases when compared with the training reference dataset. The conventional normalization-based approaches cannot guarantee that the gene signatures in the reference and prediction datasets always have the same distribution for all different conditions, as the class-specific gene signatures change with the condition. Therefore, the trained classifier would work well under one condition but not under another. To address the problem of current normalization approaches, we propose a novel algorithm called CrossLink (CL). CL recognizes that there is no universal, condition-independent normalization mapping of signatures. In contrast, it exploits the fact that the signature is unique to its associated class under any condition and thus employs an unsupervised clustering algorithm to discover this unique signature. We assessed the performance of CL for cross-condition predictions of PAM50 subtypes of breast cancer by using a simulated dataset modeled after TCGA BRCA tumor samples with a cross-validation scheme, and datasets with known and unknown PAM50 classification. CL achieved prediction accuracy >73%, the highest among the methods we evaluated. We also applied the algorithm to a set of breast cancer tumors derived from an Arabic population to assign a PAM50 classification to each tumor based on their gene expression profiles. A novel algorithm, CrossLink, for cross-condition prediction of cancer classes was proposed. In all test datasets, CL showed robust and consistent improvement in prediction performance over other state-of-the-art normalization and classification algorithms.

  4. Microwave Photonic Filters for Interference Cancellation and Adaptive Beamforming

    NASA Astrophysics Data System (ADS)

    Chang, John

    Wireless communication has experienced an explosion of growth, especially in the past half-decade, due to the ubiquity of wireless devices, such as tablets, WiFi-enabled devices, and especially smartphones. The proliferation of smartphones with powerful processors and graphics chips has given an increasing number of people the ability to access anything from anywhere. Unfortunately, this ease of access has greatly increased mobile wireless bandwidth demands, which have begun to stress carrier networks and spectra. Wireless interference cancellation will play a big role alongside the continued growth of wireless communication. In this thesis, we investigate optical signal processing methods for wireless interference cancellation. Optics provide the perfect backdrop for interference cancellation. Mobile wireless data is already aggregated and transported through fiber backhaul networks in practice. By sandwiching the signal processing stage between the receiver and the fiber backhaul, processing can easily be done locally in one location. Further, optics offers the advantages of being instantaneously broadband and of low size, weight, and power (SWAP). We are primarily concerned in this thesis with two methods for interference cancellation, both based on microwave photonic filters. The first application is for a co-channel situation, in which a transmitter and receiver are co-located and transmitting at the same frequency. A novel analog optical technique extended for multipath interference cancellation of broadband signals is proposed and experimentally demonstrated in this thesis. The proposed architecture was able to achieve a maximum of 40 dB of cancellation over 200 MHz and 50 dB of cancellation over 10 MHz. The broadband nature of the cancellation, along with its depth, demonstrates both the precision of the optical components and the validity of the architecture.
Next, we are interested in a scenario with dynamically changing interference, which requires an adaptive photonic

  5. Residual interference and wind tunnel wall adaption

    NASA Technical Reports Server (NTRS)

    Mokry, Miroslav

    1989-01-01

    Measured flow variables near the test section boundaries, used to guide adjustments of the walls in adaptive wind tunnels, can also be used to quantify the residual interference. Because of a finite number of wall control devices (jacks, plenum compartments), the finite test section length, and the approximation character of adaptation algorithms, the unconfined flow conditions are not expected to be precisely attained even in the fully adapted stage. The procedures for the evaluation of residual wall interference are essentially the same as those used for assessing the correction in conventional, non-adaptive wind tunnels. Depending upon the number of flow variables utilized, one can speak of one- or two-variable methods; in two dimensions also of Schwarz- or Cauchy-type methods. The one-variable methods use the measured static pressure and normal velocity at the test section boundary, but do not require any model representation. This is clearly of an advantage for adaptive wall test section, which are often relatively small with respect to the test model, and for the variety of complex flows commonly encountered in wind tunnel testing. For test sections with flexible walls the normal component of velocity is given by the shape of the wall, adjusted for the displacement effect of its boundary layer. For ventilated test section walls it has to be measured by the Calspan pipes, laser Doppler velocimetry, or other appropriate techniques. The interface discontinuity method, also described, is a genuine residual interference assessment technique. It is specific to adaptive wall wind tunnels, where the computation results for the fictitious flow in the exterior of the test section are provided.

  6. Prostate segmentation by sparse representation based classification.

    PubMed

    Gao, Yaozong; Liao, Shu; Shen, Dinggang

    2012-10-01

    The segmentation of the prostate in CT images is of essential importance to external beam radiotherapy, which is one of the major treatments for prostate cancer nowadays. During radiotherapy, the prostate is irradiated by high-energy x-rays from different directions. In order to maximize the dose to the cancer and minimize the dose to the surrounding healthy tissues (e.g., bladder and rectum), the prostate in each new treatment image needs to be accurately localized. Therefore, the effectiveness and efficiency of external beam radiotherapy depend highly on accurate localization of the prostate. However, due to the low contrast of the prostate with its surrounding tissues (e.g., bladder), unpredictable prostate motion, and large appearance variations across different treatment days, it is challenging to segment the prostate in CT images. In this paper, the authors present a novel classification-based segmentation method to address these problems. To segment the prostate, the proposed method first uses sparse representation based classification (SRC) to enhance the prostate in CT images by pixel-wise classification, in order to overcome the limitation of poor contrast in the prostate images. Then, based on the classification results, previously segmented prostates of the same patient are used as patient-specific atlases to align onto the current treatment image, and a majority voting strategy is finally adopted to segment the prostate. In order to address the limitations of traditional SRC in pixel-wise classification, especially for the purpose of segmentation, the authors extend SRC in the following four aspects: (1) A discriminant subdictionary learning method is proposed to learn a discriminant and compact representation of training samples for each class, so that the discriminant power of SRC can be increased and SRC can be applied to large-scale pixel-wise classification. 
(2) The L1 regularized sparse coding is replaced by the elastic net in
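
    The residual-based decision rule at the core of SRC can be sketched as follows. This is a minimal illustration: least-squares coding stands in for true sparse coding, and the dictionaries and patch are randomly generated stand-ins, not data from the paper.

```python
import numpy as np

def src_classify(patch, dictionaries):
    """Assign `patch` to the class whose dictionary reconstructs it with
    the smallest residual energy (least-squares stand-in for sparse coding)."""
    residuals = []
    for D in dictionaries:  # one dictionary (n_features x n_atoms) per class
        coef, *_ = np.linalg.lstsq(D, patch, rcond=None)
        residuals.append(np.linalg.norm(patch - D @ coef))
    return int(np.argmin(residuals))

rng = np.random.default_rng(0)
D_prostate = rng.normal(size=(64, 8))     # hypothetical class dictionaries
D_background = rng.normal(size=(64, 8))
patch = D_prostate @ rng.normal(size=8)   # patch generated from class 0
print(src_classify(patch, [D_prostate, D_background]))  # -> 0
```

    Because the patch lies in the span of the class-0 dictionary, its residual there is near zero, while the other class leaves a large residual.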

  7. Discovery, identification and mitigation of isobaric sulfate metabolite interference to a phosphate prodrug in LC-MS/MS bioanalysis: Critical role of method development in ensuring assay quality.

    PubMed

    Yuan, Long; Ji, Qin C

    2018-06-05

    Metabolite interferences represent a major risk of inaccurate quantification in LC-MS/MS bioanalytical assays. During LC-MS/MS bioanalysis of BMS-919194, a phosphate ester prodrug, in plasma samples from rat and monkey GLP toxicology studies, an unknown peak was detected in the MRM channel of the prodrug. This peak was not observed in previous discovery toxicology studies, in which a fast-gradient LC-MS/MS method was used. We found that this unknown peak co-eluted with the prodrug peak when the discovery method was used, thereby causing significant overestimation of the exposure of the prodrug in the discovery toxicology studies. To understand the nature of this interfering peak and its impact on the bioanalytical assay, we further investigated its formation and identity. The interfering compound and the prodrug were found to be isobaric and to have the same major product ions in electrospray ionization positive mode, and thus could not be differentiated using a triple quadrupole mass spectrometer. By using high-resolution mass spectrometry (HRMS), the interfering metabolite was successfully identified as an isobaric sulfate metabolite of BMS-919194. To the best of our knowledge, this is the first report of a phosphate prodrug being metabolized in vivo to an isobaric sulfate metabolite, and of such a metabolite causing significant interference with the analysis of the prodrug. This work demonstrates the interference risk posed by isobaric sulfate metabolites to the bioanalysis of phosphate prodrugs in real samples. It is critical to evaluate and mitigate potential metabolite interferences during method development and thereby minimize the related bioanalytical risks and ensure assay quality. Our work also shows the unique advantages of HRMS in identifying potential metabolite interference during LC-MS/MS bioanalysis. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Design of Passive Power Filter for Hybrid Series Active Power Filter using Estimation, Detection and Classification Method

    NASA Astrophysics Data System (ADS)

    Swain, Sushree Diptimayee; Ray, Pravat Kumar; Mohanty, K. B.

    2016-06-01

    This research paper presents the design of a shunt Passive Power Filter (PPF) in a Hybrid Series Active Power Filter (HSAPF) that employs a novel analytic methodology superior to FFT analysis. This novel approach consists of the estimation, detection and classification of signals. The proposed method is applied to estimate, detect and classify power quality (PQ) disturbances such as harmonics. The work combines three methods: harmonic detection through the wavelet transform method, harmonic estimation by the Kalman filter algorithm, and harmonic classification by the decision tree method. Among the mother wavelets available for the wavelet transform method, db8 is selected as the most suitable because of its effectiveness on transient response and its low oscillation in the frequency domain. In the harmonic compensation process, the detected harmonic is compensated through the Hybrid Series Active Power Filter (HSAPF) based on Instantaneous Reactive Power Theory (IRPT). The efficacy of the proposed method is verified in the MATLAB/Simulink environment as well as with an experimental setup. The obtained results confirm the superiority of the proposed methodology over FFT analysis. The newly proposed PPF makes the conventional HSAPF more robust and stable.
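
    The wavelet-based detection step can be illustrated with a toy example. The paper selects the db8 wavelet; this sketch uses a hand-coded one-level Haar DWT as a simplified stand-in, with an artificial 50 Hz signal and an injected transient disturbance.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT (a simplified stand-in for the paper's db8):
    returns approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

# 50 Hz fundamental sampled at 3200 Hz, with a disturbance injected at sample 300
t = np.arange(512) / 3200.0
signal = np.sin(2 * np.pi * 50 * t)
signal[300] += 2.0                       # transient disturbance
_, detail = haar_dwt(signal)
print(int(np.argmax(np.abs(detail))))    # -> 150, i.e., samples 300-301
```

    The smooth fundamental produces only small detail coefficients, so the transient stands out sharply at the corresponding detail index.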

  9. Grounded Classification: Grounded Theory and Faceted Classification.

    ERIC Educational Resources Information Center

    Star, Susan Leigh

    1998-01-01

    Compares the qualitative method of grounded theory (GT) with Ranganathan's construction of faceted classifications (FC) in library and information science. Both struggle with a core problem--the representation of vernacular words and processes, empirically discovered, which will, although ethnographically faithful, be powerful beyond the single…

  10. Systemic Sclerosis Classification Criteria: Developing methods for multi-criteria decision analysis with 1000Minds

    PubMed Central

    Johnson, Sindhu R.; Naden, Raymond P.; Fransen, Jaap; van den Hoogen, Frank; Pope, Janet E.; Baron, Murray; Tyndall, Alan; Matucci-Cerinic, Marco; Denton, Christopher P.; Distler, Oliver; Gabrielli, Armando; van Laar, Jacob M.; Mayes, Maureen; Steen, Virginia; Seibold, James R.; Clements, Phillip; Medsger, Thomas A.; Carreira, Patricia E.; Riemekasten, Gabriela; Chung, Lorinda; Fessler, Barri J.; Merkel, Peter A.; Silver, Richard; Varga, John; Allanore, Yannick; Mueller-Ladner, Ulf; Vonk, Madelon C.; Walker, Ulrich A.; Cappelli, Susanna; Khanna, Dinesh

    2014-01-01

    Objective Classification criteria for systemic sclerosis (SSc) are being developed. The objectives were to: develop an instrument for collating case data and evaluate its sensibility; use forced-choice methods to reduce and weight criteria; and explore agreement between experts on the probability that cases were classified as SSc. Study Design and Setting A standardized instrument was tested for sensibility. The instrument was applied to 20 cases covering a range of probabilities that each had SSc. Experts rank-ordered cases from highest to lowest probability; reduced and weighted the criteria using forced-choice methods; and re-ranked the cases. Consistency in rankings was evaluated using intraclass correlation coefficients (ICC). Results Experts endorsed clarity (83%), comprehensibility (100%), and face and content validity (100%). Criteria were weighted (points): finger skin thickening (14–22), finger-tip lesions (9–21), friction rubs (21), finger flexion contractures (16), pulmonary fibrosis (14), SSc-related antibodies (15), Raynaud's phenomenon (13), calcinosis (12), pulmonary hypertension (11), renal crisis (11), telangiectasia (10), abnormal nailfold capillaries (10), esophageal dilation (7) and puffy fingers (5). The ICC across experts was 0.73 (95% CI 0.58–0.86) and improved to 0.80 (95% CI 0.68–0.90). Conclusions Using a sensible instrument and forced-choice methods, the number of criteria was reduced by 39% (from 23 to 14) and the criteria were weighted. Our methods reflect the rigors of measurement science and serve as a template for developing classification criteria. PMID:24721558
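
    The ICC computation used to assess expert consistency can be sketched as follows. The abstract does not specify which ICC variant was used, so this illustration assumes ICC(2,1) (two-way random effects, absolute agreement, single rater) with hypothetical case rankings.

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1) from a (n_targets x k_raters) ratings matrix."""
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)
    col_means = Y.mean(axis=0)
    ss_rows = k * ((row_means - grand) ** 2).sum()   # between-target SS
    ss_cols = n * ((col_means - grand) ** 2).sum()   # between-rater SS
    ss_total = ((Y - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical rank-orderings of five cases by three experts (rows = cases)
ranks = np.array([[1, 1, 2],
                  [2, 3, 1],
                  [3, 2, 3],
                  [4, 4, 4],
                  [5, 5, 5]])
print(round(icc2_1(ranks), 2))  # -> 0.86
```

    Perfectly identical rankings across raters would yield an ICC of 1.0; disagreement pushes the coefficient down.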

  11. Dermal and inhalation acute toxic class methods: test procedures and biometric evaluations for the Globally Harmonized Classification System.

    PubMed

    Holzhütter, H G; Genschow, E; Diener, W; Schlede, E

    2003-05-01

    The acute toxic class (ATC) methods were developed for determining LD(50)/LC(50) estimates of chemical substances with significantly fewer animals than needed when applying conventional LD(50)/LC(50) tests. The ATC methods are sequential stepwise procedures with fixed starting doses/concentrations and a maximum of six animals used per dose/concentration. The numbers of dead/moribund animals determine whether further testing is necessary or whether the test is terminated. In recent years we have developed classification procedures for the oral, dermal and inhalation routes of administration by using biometric methods. The biometric approach assumes a probit model for the mortality probability of a single animal and assigns the chemical to that toxicity class for which the best concordance is achieved between the statistically expected and the observed numbers of dead/moribund animals at the various steps of the test procedure. In previous publications we have demonstrated the validity of the biometric ATC methods on the basis of data obtained for the oral ATC method in two-animal ring studies with 15 participants from six countries. Although the test procedures and biometric evaluations for the dermal and inhalation ATC methods have already been published, there was a need for an adaptation of the classification schemes to the starting doses/concentrations of the Globally Harmonized Classification System (GHS) recently adopted by the Organization for Economic Co-operation and Development (OECD). Here we present the biometric evaluation of the dermal and inhalation ATC methods for the starting doses/concentrations of the GHS and of some other international classification systems still in use. 
We have developed new test procedures and decision rules for the dermal and inhalation ATC methods, which require significantly fewer animals to provide predictions of toxicity classes, that are equally good or even better than those achieved by using the conventional LD(50)/LC
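
    The biometric idea described above, assigning the toxicity class whose probit-expected number of dead/moribund animals best matches the observation, can be sketched as follows. The class LD50 values, dose, and probit slope here are illustrative assumptions, not the GHS scheme.

```python
import math

def p_death(dose, ld50, sigma=0.5):
    """Probit model: mortality probability of a single animal at `dose`
    (log-normal tolerance with illustrative slope `sigma`)."""
    z = (math.log10(dose) - math.log10(ld50)) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def assign_class(observed_deaths, dose, n_animals, class_ld50s):
    """Pick the class whose expected number of deaths is closest to
    the observed number at this test step."""
    best = min(class_ld50s,
               key=lambda ld50: abs(observed_deaths - n_animals * p_death(dose, ld50)))
    return class_ld50s.index(best)

# Hypothetical representative LD50s (mg/kg) for four toxicity classes
ld50s = [25.0, 150.0, 1000.0, 3000.0]
# 5 of 6 animals died at a 50 mg/kg starting dose -> most toxic class fits best
print(assign_class(5, 50.0, 6, ld50s))  # -> 0
```

    In the actual ATC procedure this comparison is made sequentially over the fixed starting doses, with decision rules for continuing or terminating the test.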

  12. Vegetation Monitoring of Mashhad Using AN Object-Oriented POST Classification Comparison Method

    NASA Astrophysics Data System (ADS)

    Khalili Moghadam, N.; Delavar, M. R.; Forati, A.

    2017-09-01

    By and large, today's megacities are confronting considerable urban development in which many new buildings are being constructed in their fringe areas. This remarkable urban development will probably end in vegetation reduction, even though every megacity requires adequate areas of vegetation, which are crucial from a wide variety of perspectives such as air pollution reduction, soil erosion prevention, and ecosystem and environmental protection. One of the optimum methods for monitoring this vital component of each city is multi-temporal satellite image acquisition combined with change detection techniques. In this research, the vegetation and urban changes of Mashhad, Iran, were monitored using an object-oriented (marker-based watershed algorithm) post-classification comparison (PCC) method. Bi-temporal multi-spectral Landsat satellite images of the study area were used to detect the changes in urban and vegetation areas and to find a relation between these changes. The results of this research demonstrate that during 1987-2017, the Mashhad urban area increased by about 22,525 hectares and the vegetation area decreased by approximately 4,903 hectares. These statistics substantiate the close relationship between urban development and vegetation reduction. Moreover, overall accuracies of 85.5% and 91.2% were achieved for the first and the second image classification, respectively. In addition, the overall accuracy and kappa coefficient of the change detection were assessed as 84.1% and 70.3%, respectively.
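
    The overall accuracy and kappa coefficient reported for the change detection can be computed from a confusion matrix as follows; the matrix values here are illustrative, not the study's.

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference classes, columns = mapped classes)."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    p_observed = np.trace(cm) / total                 # overall accuracy
    p_expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2
    kappa = (p_observed - p_expected) / (1.0 - p_expected)
    return p_observed, kappa

# Illustrative 2-class change / no-change confusion matrix
cm = [[90, 10],
      [15, 85]]
acc, kappa = accuracy_and_kappa(cm)
print(round(acc, 3), round(kappa, 3))  # -> 0.875 0.75
```

    Kappa discounts the agreement expected by chance, which is why it is lower than the raw overall accuracy, as in the study's 84.1% vs. 70.3%.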

  13. Interference and deception detection technology of satellite navigation based on deep learning

    NASA Astrophysics Data System (ADS)

    Chen, Weiyi; Deng, Pingke; Qu, Yi; Zhang, Xiaoguang; Li, Yaping

    2017-10-01

    Satellite navigation systems play an important role in daily life and in warfare. The strategic position of a satellite navigation system is prominent, so it is very important to ensure that the system is not disturbed or destroyed. Detecting jamming signals is a critical means of avoiding accidents in a navigation system. At present, the detection technology for jamming signals in satellite navigation systems is not intelligent, relying mainly on human decisions and experience. To address this issue, the paper proposes a method based on deep learning to monitor interference sources in satellite navigation. By training on interference signal data and extracting the features of the interference signals, a detection system model is constructed. The simulation results show that the detection accuracy of our system can reach nearly 70%. The method in our paper provides a new idea for research on intelligent detection of interference and deception signals in satellite navigation systems.

  14. Iris Image Classification Based on Hierarchical Visual Codebook.

    PubMed

    Zhenan Sun; Hui Zhang; Tieniu Tan; Jianyu Wang

    2014-06-01

    Iris recognition as a reliable method for personal identification has been well studied, with the objective of assigning the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image into an application-specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), or coarse-to-fine iris identification (classification of all iris images in a central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called the Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The proposed HVC method is an integration of two existing Bag-of-Words models, namely the Vocabulary Tree (VT) and Locality-constrained Linear Coding (LLC). The HVC adopts a coarse-to-fine visual coding strategy and takes advantage of both VT and LLC for accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database simulating four types of iris spoof attacks is developed as a benchmark for research on iris liveness detection.

  15. A Spacecraft Electrical Characteristics Multi-Label Classification Method Based on Off-Line FCM Clustering and On-Line WPSVM

    PubMed Central

    Li, Ke; Liu, Yi; Wang, Quanxin; Wu, Yalei; Song, Shimin; Sun, Yi; Liu, Tengchong; Wang, Jun; Li, Yang; Du, Shaoyi

    2015-01-01

    This paper proposes a novel multi-label classification method for resolving spacecraft electrical characteristics problems, which involve much unlabeled test data, high-dimensional features, long computing times and slow identification rates. Firstly, both the fuzzy c-means (FCM) offline clustering and the principal component feature extraction algorithms are applied for the feature selection process. Secondly, the approximate weighted proximal support vector machine (WPSVM) online classification algorithm is used to reduce the feature dimension and further improve the recognition rate for spacecraft electrical characteristics. Finally, a data capture contribution method using thresholds is proposed to guarantee the validity and consistency of the data selection. The experimental results indicate that the proposed method can obtain better data features of the spacecraft electrical characteristics, improve the accuracy of identification and effectively shorten the computing time. PMID:26544549
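
    The offline FCM clustering step can be sketched in a minimal form as follows; the data and parameters are illustrative assumptions, and the WPSVM stage is omitted.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means: returns the membership matrix U (n x c)
    and the cluster centers (c x d)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)          # rows sum to 1
    for _ in range(iters):
        W = U ** m                              # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-10
        U = 1.0 / (d ** (2.0 / (m - 1.0)))      # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

# Two well-separated hypothetical feature clusters
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
               rng.normal(5.0, 0.1, (20, 2))])
U, centers = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=1)
print(len(set(labels[:20].tolist())), len(set(labels[20:].tolist())))  # -> 1 1
```

    Unlike hard k-means, each sample gets a graded membership in every cluster; hard labels are recovered by taking the maximum membership.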

  16. D-METHIONINE REDUCES TOBRAMYCIN-INDUCED OTOTOXICITY WITHOUT ANTIMICROBIAL INTERFERENCE IN ANIMAL MODELS

    PubMed Central

    Fox, Daniel J.; Cooper, Morris D.; Speil, Cristian A.; Roberts, Melissa H.; Yanik, Susan C.; Meech, Robert P.; Hargrove, Tim L.; Verhulst, Steven J.; Rybak, Leonard P.; Campbell, Kathleen C. M.

    2015-01-01

    Background Tobramycin is a critical cystic fibrosis treatment; however, it causes ototoxicity. This study tested D-methionine protection from tobramycin-induced ototoxicity and potential antimicrobial interference. Methods Auditory brainstem responses (ABR) and outer hair cell (OHC) quantifications measured protection in guinea pigs treated with tobramycin and a range of D-methionine doses. In vitro antimicrobial interference studies tested inhibition and post-antibiotic effect assays. In vivo antimicrobial interference studies tested normal and neutropenic E. coli murine survival and intraperitoneal lavage bacterial counts. Results D-methionine conferred significant ABR threshold shift reductions. OHC protection was less robust but significant at 20 kHz in the 420 mg/kg/day group. In vitro studies did not detect D-methionine-induced antimicrobial interference. In vivo studies did not detect D-methionine-induced interference in normal or neutropenic mice. Conclusions D-methionine protects from tobramycin-induced ototoxicity without antimicrobial interference. The study results suggest D-met as a potential otoprotectant for clinical tobramycin use in cystic fibrosis patients. PMID:26166286

  17. Two techniques for eliminating luminol interference material and flow system configurations for luminol and firefly luciferase systems

    NASA Technical Reports Server (NTRS)

    Thomas, R. R.

    1976-01-01

    Two methods for eliminating luminol interference materials are described. One method eliminates interference from organic material by pre-reacting a sample with dilute hydrogen peroxide. The reaction rate resolution method for eliminating inorganic forms of interference is also described. The combination of the two methods makes the luminol system more specific for bacteria. Flow system designs for both the firefly luciferase and luminol bacteria detection systems are described. The firefly luciferase flow system incorporating nitric acid extraction and optimal dilutions has a functional sensitivity of 3 x 100,000 E. coli/ml. The luminol flow system incorporates the hydrogen peroxide pretreatment and the reaction rate resolution techniques for eliminating interference. The functional sensitivity of the luminol flow system is 1 x 10,000 E. coli/ml.

  18. Quantitative phase imaging of biological cells and tissues using singleshot white light interference microscopy and phase subtraction method for extended range of measurement

    NASA Astrophysics Data System (ADS)

    Mehta, Dalip Singh; Sharma, Anuradha; Dubey, Vishesh; Singh, Veena; Ahmad, Azeem

    2016-03-01

    We present single-shot white light interference microscopy for the quantitative phase imaging (QPI) of biological cells and tissues. A common-path white light interference microscope is developed, and a colorful white light interferogram is recorded by a three-chip color CCD camera. The recorded white light interferogram is decomposed into its red, green and blue wavelength component interferograms, which are processed to determine the refractive index (RI) for the different color wavelengths. The decomposed interferograms are analyzed using the local model fitting (LMF) algorithm developed for reconstructing the phase map from a single interferogram. LMF is a slightly off-axis interferometric QPI method that employs only a single image, so it is fast and accurate. The present method is very useful for dynamic processes where the path length changes on a millisecond timescale. From the single interferogram, wavelength-dependent quantitative phase images of human red blood cells (RBCs) are reconstructed and the refractive index is determined. The LMF algorithm is simple to implement and computationally efficient. The results are compared with conventional phase-shifting interferometry and Hilbert transform techniques.

  19. Behavior Based Social Dimensions Extraction for Multi-Label Classification

    PubMed Central

    Li, Le; Xu, Junyi; Xiao, Weidong; Ge, Bin

    2016-01-01

    Classification based on social dimensions is commonly used to handle the multi-label classification task in heterogeneous networks. However, traditional methods, which mostly rely on the community detection algorithms to extract the latent social dimensions, produce unsatisfactory performance when community detection algorithms fail. In this paper, we propose a novel behavior based social dimensions extraction method to improve the classification performance in multi-label heterogeneous networks. In our method, nodes’ behavior features, instead of community memberships, are used to extract social dimensions. By introducing Latent Dirichlet Allocation (LDA) to model the network generation process, nodes’ connection behaviors with different communities can be extracted accurately, which are applied as latent social dimensions for classification. Experiments on various public datasets reveal that the proposed method can obtain satisfactory classification results in comparison to other state-of-the-art methods on smaller social dimensions. PMID:27049849

  20. Multiple Sparse Representations Classification

    PubMed Central

    Plenge, Esben; Klein, Stefan S.; Niessen, Wiro J.; Meijering, Erik

    2015-01-01

    Sparse representations classification (SRC) is a powerful technique for pixelwise classification of images and is increasingly being used for a wide variety of image analysis tasks. The method uses sparse representation and learned redundant dictionaries to classify image pixels. In this empirical study we propose to further leverage the redundancy of the learned dictionaries to achieve a more accurate classifier. In conventional SRC, each image pixel is associated with a small patch surrounding it. Using these patches, a dictionary is trained for each class in a supervised fashion. Commonly, redundant/overcomplete dictionaries are trained and image patches are sparsely represented by a linear combination of only a few of the dictionary elements. Given a set of trained dictionaries, a new patch is sparse coded using each of them, and subsequently assigned to the class whose dictionary yields the minimum residual energy. We propose a generalization of this scheme. The method, which we call multiple sparse representations classification (mSRC), is based on the observation that an overcomplete, class-specific dictionary is capable of generating multiple accurate and independent estimates of a patch belonging to the class. So instead of finding a single sparse representation of a patch for each dictionary, we find multiple, and the corresponding residual energies provide an enhanced statistic which is used to improve classification. We demonstrate the efficacy of mSRC for three example applications: pixelwise classification of texture images, lumen segmentation in carotid artery magnetic resonance imaging (MRI), and bifurcation point detection in carotid artery MRI. We compare our method with conventional SRC, K-nearest neighbor, and support vector machine classifiers. The results show that mSRC outperforms SRC and the other reference methods. 
In addition, we present an extensive evaluation of the effect of the main mSRC parameters: patch size, dictionary size, and

  1. Object-oriented and pixel-based classification approach for land cover using airborne long-wave infrared hyperspectral data

    NASA Astrophysics Data System (ADS)

    Marwaha, Richa; Kumar, Anil; Kumar, Arumugam Senthil

    2015-01-01

    Our primary objective was to explore a classification algorithm for thermal hyperspectral data. Minimum noise fraction transformation is applied to the thermal hyperspectral data, and eight pixel-based classifiers are tested: constrained energy minimization, matched filter, spectral angle mapper (SAM), adaptive coherence estimator, orthogonal subspace projection, mixture-tuned matched filter, target-constrained interference-minimized filter, and mixture-tuned target-constrained interference-minimized filter. The long-wave infrared (LWIR) has not yet been exploited for classification purposes. LWIR data contain emissivity and temperature information about an object. A highest overall accuracy of 90.99% was obtained using the SAM algorithm for the combination of thermal data with a colored digital photograph. Similarly, an object-oriented approach is applied to the thermal data: the image is segmented into meaningful objects based on properties such as geometry and length, pixels are grouped into objects using a watershed algorithm, and a supervised classification algorithm, i.e., a support vector machine (SVM), is applied. The best algorithm in the pixel-based category is the SAM technique. SVM is useful for thermal data, providing a high accuracy of 80.00% at a scale value of 83 and a merge value of 90, whereas for the combination of thermal data with a colored digital photograph, SVM gives the highest accuracy of 85.71% at a scale value of 82 and a merge value of 90.
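
    The SAM classifier named above can be sketched as follows; the emissivity spectra are illustrative assumptions, not data from the study.

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral angle (radians) between a pixel spectrum and a reference
    spectrum; smaller angle = better match."""
    cos = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def sam_classify(pixel, references):
    """Assign the pixel to the reference class with the smallest angle."""
    angles = [spectral_angle(pixel, r) for r in references]
    return int(np.argmin(angles))

# Hypothetical emissivity spectra for two LWIR classes
veg = np.array([0.95, 0.97, 0.96, 0.98])
soil = np.array([0.90, 0.85, 0.88, 0.80])
pixel = 1.7 * veg            # SAM is insensitive to overall scaling
print(sam_classify(pixel, [veg, soil]))  # -> 0
```

    Because the angle depends only on spectral shape, not magnitude, SAM is robust to illumination and gain differences, one reason it performs well here.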

  2. A Re-evaluation of the Ferrozine Method for Dissolved Iron: The Effect of Organic Interferences

    NASA Astrophysics Data System (ADS)

    Balind, K.; Barber, A.; Gelinas, Y.

    2016-12-01

    Among the most commonly used analytical methods in geochemistry is the ferrozine method for determining dissolved iron concentrations in water (1). This cheap and easy-to-use spectrophotometric method involves a complexing agent (ferrozine), a reducing agent (hydroxylamine-HCl) and a buffer (ammonium acetate with ammonium hydroxide). Previous studies have demonstrated that complex organic matter (OM) originating from the Suwannee River did not lead to a significant underestimation of the measured iron content in OM-amended iron solutions (2). The authors concluded that this method could be used even in organic-rich (i.e., 25 mg/L) waters. Here we compare the concentration of Fe measured using this spectrophotometric method to the total Fe measured by ICP-MS in the presence/absence of specific organic molecules to ascertain whether they interfere with the ferrozine method. We show that certain molecules with hydroxyl and carboxyl functional groups, as well as multi-dentate chelating species, have a significant effect on the measured iron concentrations. Two possible mechanisms are likely responsible for the inefficiency of this method in the presence of specific organic molecules: 1) incomplete reduction of Fe(III) bound to organic molecules, or 2) competition between the OM and ferrozine for the available iron. We address these possibilities separately by varying the experimental conditions. These methodological artifacts may have far-reaching implications due to the extensive use of this method. (1) Stookey, L. L., Anal. Chem., 42, 779 (1970). (2) Viollier, E., et al., Applied Geochem., 15, 785 (2000).

  3. Semiconductor laser using multimode interference principle

    NASA Astrophysics Data System (ADS)

    Gong, Zisu; Yin, Rui; Ji, Wei; Wu, Chonghao

    2018-01-01

    A multimode interference (MMI) structure is introduced into a semiconductor laser used in optical communication systems to realize higher power and better temperature tolerance. Using the beam propagation method (BPM), a multimode interference laser diode (MMI-LD) is designed and fabricated in InGaAsP/InP-based material. As a comparison, a conventional semiconductor laser using a straight single-mode waveguide is also fabricated on the same wafer. With a low injection current (about 230 mA), the output power of the implemented MMI-LD is up to 2.296 mW, which is about four times higher than the output power of the conventional semiconductor laser. The implemented MMI-LD exhibits stable output operating at a wavelength of 1.52 μm and better temperature tolerance as the temperature varies from 283.15 K to 293.15 K.

  4. Interference Effects in Schizophrenic Short-Term Memory

    ERIC Educational Resources Information Center

    Bauman, Edward; Kolisnyk, Eugene

    1976-01-01

    Assesses the effects of input and output interference on schizophrenic recall. Input interference is the interference resulting from the interpolation of items between presentation and recall of the probed item. Output interference is the interference resulting from the interpolation of responses between the presentation and recall of the probed…

  5. Supernova Photometric Lightcurve Classification

    NASA Astrophysics Data System (ADS)

    Zaidi, Tayeb; Narayan, Gautham

    2016-01-01

    This is a preliminary report on photometric supernova classification. We first explore the properties of supernova light curves and attempt to restructure the unevenly sampled and sparse data from assorted datasets to allow for processing and classification. The data were primarily drawn from the Dark Energy Survey (DES) simulated data created for the Supernova Photometric Classification Challenge. This poster shows a method for producing a non-parametric representation of the light curve data and applying a Random Forest classifier algorithm to distinguish between supernova types. We examine the impact of Principal Component Analysis in reducing the dimensionality of the dataset for future classification work. The classification code will be used in a stage of the ANTARES pipeline, created for use on the Large Synoptic Survey Telescope alert data and other wide-field surveys. The final figure-of-merit for the DES data in the r band was 60% for binary classification (Type I vs II). Zaidi was supported by the NOAO/KPNO Research Experiences for Undergraduates (REU) Program, which is funded by the National Science Foundation Research Experiences for Undergraduates Program (AST-1262829).
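
    The restructuring of unevenly sampled light curves onto a common representation can be sketched as follows; the fixed grid, normalization, and toy pulse are illustrative assumptions, and the Random Forest stage is omitted.

```python
import numpy as np

def to_fixed_grid(times, fluxes, grid):
    """Resample an unevenly sampled light curve onto a fixed time grid
    (linear interpolation), so curves of different cadence become
    equal-length feature vectors for a classifier such as a random forest."""
    f = np.interp(grid, times, fluxes)
    return f / np.max(np.abs(f))          # crude amplitude normalization

grid = np.linspace(0.0, 100.0, 32)
# A hypothetical light curve with uneven sampling and a single pulse
t1 = np.sort(np.random.default_rng(2).uniform(0, 100, 15))
f1 = np.exp(-0.5 * ((t1 - 30.0) / 10.0) ** 2)   # toy pulse around day 30
x1 = to_fixed_grid(t1, f1, grid)
print(x1.shape)  # -> (32,)
```

    After this step, every light curve is a fixed-length, amplitude-normalized vector, which is the form tree-based classifiers expect.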

  6. Best Merge Region Growing with Integrated Probabilistic Classification for Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2011-01-01

    A new method for spectral-spatial classification of hyperspectral images is proposed. The method is based on the integration of probabilistic classification within the hierarchical best merge region growing algorithm. For this purpose, a preliminary probabilistic support vector machine classification is performed. Then, a hierarchical step-wise optimization algorithm is applied, iteratively merging the regions with the smallest Dissimilarity Criterion (DC). The main novelty of this method consists in defining the DC between regions as a function of region statistical and geometrical features along with classification probabilities. Experimental results are presented on a 200-band AVIRIS image of a vegetation area in Northwestern Indiana and compared with those obtained by recently proposed spectral-spatial classification techniques. The proposed method improves classification accuracies when compared to other classification approaches.
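
    The best-merge idea, iteratively merging the pair of regions with the smallest dissimilarity criterion, can be sketched on a 1-D toy signal. The paper's DC additionally incorporates classification probabilities and geometrical features, which this sketch omits in favor of a simple difference of region means.

```python
import numpy as np

def best_merge_regions(values, n_regions):
    """Hierarchical best-merge region growing on a 1-D signal: repeatedly
    merge the adjacent pair of regions with the smallest dissimilarity
    (absolute difference of region means) until `n_regions` remain."""
    regions = [[float(v)] for v in values]        # start: one pixel per region
    while len(regions) > n_regions:
        means = [np.mean(r) for r in regions]
        dc = [abs(means[i] - means[i + 1]) for i in range(len(means) - 1)]
        i = int(np.argmin(dc))                    # best (smallest DC) merge
        regions[i] = regions[i] + regions.pop(i + 1)
    return regions

signal = [1.0, 1.1, 0.9, 5.0, 5.2, 4.9]
merged = best_merge_regions(signal, 2)
print([round(float(np.mean(r)), 2) for r in merged])  # -> [1.0, 5.03]
```

    Because the lowest-DC merge is always taken first, homogeneous neighbors coalesce early and the sharp boundary between the two segments survives.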

  7. Single classifier, OvO, OvA and RCC multiclass classification method in handheld based smartphone gait identification

    NASA Astrophysics Data System (ADS)

    Raziff, Abdul Rafiez Abdul; Sulaiman, Md Nasir; Mustapha, Norwati; Perumal, Thinagaran

    2017-10-01

    Gait recognition is widely used in many applications. In person identification from gait, the number of classes (people) is large and may exceed 20. Because of this, a single (direct) classification mapping may not be suitable, as most existing algorithms are designed for binary classification. Furthermore, having many classes in a dataset increases the likelihood of a high degree of overlap between class boundaries. This paper discusses the application of multiclass classifier mappings such as one-vs-all (OvA), one-vs-one (OvO) and random correction code (RCC) to handheld smartphone-based gait signals for person identification. The results are then compared with a single J48 decision tree as a benchmark. The results show that multiclass classification mapping partially improved the overall accuracy, especially for OvO and for RCC with a width factor greater than 4. For OvA, the accuracy was worse than that of a single J48 because of the high number of classes.
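    The one-vs-all mapping discussed here can be sketched with a deliberately trivial binary scorer (a hypothetical nearest-centroid classifier, not the J48 learner used in the paper): one binary model is trained per class, and the class whose model scores highest wins.

    ```python
    class CentroidBinary:
        """Toy binary scorer: higher score means closer to the positive-class
        centroid. Stands in for any real binary classifier."""
        def fit(self, X, y):  # y is a list of 0/1 labels
            pos = [x for x, t in zip(X, y) if t == 1]
            self.centroid = [sum(v) / len(pos) for v in zip(*pos)]
            return self

        def score(self, x):
            return -sum((a - b) ** 2 for a, b in zip(x, self.centroid))

    def ova_fit_predict(X, y, queries):
        """One-vs-all mapping: train one binary scorer per class (that class
        vs. the rest) and predict the class with the highest score."""
        classes = sorted(set(y))
        models = {c: CentroidBinary().fit(X, [1 if t == c else 0 for t in y])
                  for c in classes}
        return [max(classes, key=lambda c: models[c].score(q)) for q in queries]
    ```

    One-vs-one would instead train a scorer per class pair and vote; the wrapper structure is the same.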

  8. Classification and mensuration of LACIE segments

    NASA Technical Reports Server (NTRS)

    Heydorn, R. P.; Bizzell, R. M.; Quirein, J. A.; Abotteen, K. M.; Sumner, C. A. (Principal Investigator)

    1979-01-01

    The theory of classification methods and the functional steps in the manual training process used in the three phases of LACIE are discussed. The major problems that arose in using a procedure for manually training a classifier and a method of machine classification are discussed to reveal the motivation that led to a redesign for the third LACIE phase.

  9. Optimized Extraction Method To Remove Humic Acid Interferences from Soil Samples Prior to Microbial Proteome Measurements.

    PubMed

    Qian, Chen; Hettich, Robert L

    2017-07-07

    The microbial composition and their activities in soil environments play a critical role in organic matter transformation and nutrient cycling. Liquid chromatography coupled to high-performance mass spectrometry provides a powerful approach to characterize soil microbiomes; however, the limited microbial biomass and the presence of abundant interferences in soil samples present major challenges to proteome extraction and subsequent MS measurement. To this end, we have designed an experimental method to improve microbial proteome measurement by removing coextracted soil-borne humic substances. Our approach employs in situ detergent-based microbial lysis/TCA precipitation coupled with an additional cleanup step, acidified precipitation and filtering at the peptide level, to remove most of the humic acid interferences prior to proteolytic peptide measurement. The novelty of this approach is the integrated exploitation of two characteristics of humic acids: (1) Humic acids are insoluble in acidic solution, but they should not be removed at the protein level, where undesirable protein removal may also occur; rather, it is better to leave the humic acids in the samples until the peptide level, at which point the significant differential solubility of humic acids versus peptides at low pH can be exploited very efficiently. (2) Most humic acids have larger molecular weights than the peptides; therefore, filtering a pH 2 to 3 peptide solution with a 10 kDa filter removes most of the humic acids. This method is easily interfaced with normal proteolytic processing approaches and provides a reliable and straightforward protein extraction method that efficiently removes soil-borne humic substances without inducing proteome sample loss or biasing protein identification in mass spectrometry. In general, this humic acid removal step is universal and can be adopted by any workflow to effectively remove humic acids to avoid them negatively competing

  10. Obtaining tight bounds on higher-order interferences with a 5-path interferometer

    NASA Astrophysics Data System (ADS)

    Kauten, Thomas; Keil, Robert; Kaufmann, Thomas; Pressl, Benedikt; Brukner, Časlav; Weihs, Gregor

    2017-03-01

    Within the established theoretical framework of quantum mechanics, interference always occurs between pairs of paths through an interferometer. Higher order interferences with multiple constituents are excluded by Born’s rule and can only exist in generalized probabilistic theories. Thus, high-precision experiments searching for such higher order interferences are a powerful method to distinguish between quantum mechanics and more general theories. Here, we perform such a test in an optical multi-path interferometer, which avoids crucial systematic errors, has access to the entire phase space and is more stable than previous experiments. Our results are in accordance with quantum mechanics and rule out the existence of higher order interference terms in optical interferometry to an extent that is more than four orders of magnitude smaller than the expected pairwise interference, refining previous bounds by two orders of magnitude.
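    The experiment tests the third-order (Sorkin) interference term, which Born's rule forces to vanish exactly: the three-path probability minus all pairwise probabilities plus all single-path probabilities is identically zero for any quantum amplitudes. A short numerical check of that identity (illustrative only, not the experimental analysis):

    ```python
    import cmath

    def prob(amps):
        """Born-rule probability for a set of coherently combined path amplitudes."""
        return abs(sum(amps)) ** 2

    def sorkin_I3(a, b, c):
        """Third-order (Sorkin) interference term for three paths. Under
        Born's rule this combination of open-slit probabilities is zero;
        a nonzero value would signal higher-order interference."""
        return (prob([a, b, c]) - prob([a, b]) - prob([a, c]) - prob([b, c])
                + prob([a]) + prob([b]) + prob([c]))

    # Arbitrary complex amplitudes: the term vanishes to machine precision.
    a, b, c = 1.0, cmath.exp(1j * 0.7), 0.5 * cmath.exp(-1j * 1.3)
    assert abs(sorkin_I3(a, b, c)) < 1e-12
    ```

    The 5-path interferometer bounds the measured analogue of this quantity against the expected pairwise interference.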

  11. Investigating the Importance of the Pocket-estimation Method in Pocket-based Approaches: An Illustration Using Pocket-ligand Classification.

    PubMed

    Caumes, Géraldine; Borrel, Alexandre; Abi Hussein, Hiba; Camproux, Anne-Claude; Regad, Leslie

    2017-09-01

    Small molecules interact with their protein targets on surface cavities known as binding pockets. Pocket-based approaches are very useful in all phases of drug design. Their first step is estimating the binding pocket from the protein structure. The available pocket-estimation methods produce different pockets for the same target. The aim of this work is to investigate the effects of different pocket-estimation methods on the results of pocket-based approaches. We focused on the effect of three pocket-estimation methods on a pocket-ligand (PL) classification. This pocket-based approach is useful for understanding the correspondence between the pocket and ligand spaces and for developing pharmacological profiling models. We found that pocket-estimation methods yield different binding pockets in terms of boundaries and properties. These differences are responsible for the variation in the PL classification results, which can affect the detected correspondence between pocket and ligand profiles. Thus, we highlight the importance of the choice of pocket-estimation method in pocket-based approaches. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Screening and classification of ceramic powders

    NASA Technical Reports Server (NTRS)

    Miwa, S.

    1983-01-01

    A summary is given of the classification technology of ceramic powders. Advantages and disadvantages of the wet and dry screening and classification methods are discussed. Improvements of wind force screening devices are described.

  13. Rapid classification of heavy metal-exposed freshwater bacteria by infrared spectroscopy coupled with chemometrics using supervised method

    NASA Astrophysics Data System (ADS)

    Gurbanov, Rafig; Gozen, Ayse Gul; Severcan, Feride

    2018-01-01

    Rapid, cost-effective, sensitive and accurate methodologies for classifying bacteria are still under development. The major drawbacks of standard microbiological, molecular and immunological techniques call for the possible use of infrared (IR) spectroscopy based supervised chemometric techniques. Previous applications of IR-based chemometric methods have demonstrated outstanding findings in the classification of bacteria. Therefore, we have exploited an IR spectroscopy based supervised chemometric method, namely the Soft Independent Modeling of Class Analogy (SIMCA) technique, for the first time to classify heavy metal-exposed bacteria, to be used in the selection of suitable bacteria for evaluating their potential in environmental cleanup applications. Herein, we present the powerful differentiation and classification of laboratory strains (Escherichia coli and Staphylococcus aureus) and environmental isolates (Gordonia sp. and Microbacterium oxydans) of bacteria exposed to growth-inhibitory concentrations of silver (Ag), cadmium (Cd) and lead (Pb). Our results demonstrated that SIMCA was able to differentiate all heavy metal-exposed and control groups from each other at the 95% confidence level. Correct identification of randomly chosen test samples in their corresponding groups and large model distances between the classes were also achieved. We report, for the first time, the success of IR spectroscopy coupled with the supervised chemometric technique SIMCA in the classification of different bacteria under a given treatment.

  14. Nanoscale surface characterization using laser interference microscopy

    NASA Astrophysics Data System (ADS)

    Ignatyev, Pavel S.; Skrynnik, Andrey A.; Melnik, Yury A.

    2018-03-01

    Nanoscale surface characterization is one of the most significant parts of modern materials development and application. Modern microscopes are expensive and complicated tools, and their use for industrial tasks is limited by laborious sample preparation, measurement procedures, and low operating speed. The laser modulation interference microscopy (MIM) method, for real-time quantitative and qualitative analysis of glass, metals, ceramics, and various coatings, has a spatial resolution of 0.1 nm vertically and down to 100 nm laterally. It is proposed as an alternative to traditional scanning electron microscopy (SEM) and atomic force microscopy (AFM) methods. It is demonstrated that for roughness metrology of super-smooth (Ra < 1 nm) surfaces, laser interference microscopy techniques are better suited than conventional SEM and AFM. A comparison of lateral-dimension measurements of a semiconductor test structure obtained with SEM, AFM and a white-light interferometer also demonstrates the advantages of the MIM technique.

  15. Suppression of AC railway power-line interference in ECG signals recorded by public access defibrillators

    PubMed Central

    Dotsinsky, Ivan

    2005-01-01

    Background Public access defibrillators (PADs) are now available for more efficient and rapid treatment of out-of-hospital sudden cardiac arrest. PADs are normally used by untrained people on the streets and in sports centers, airports, and other public areas. Therefore, automated detection of ventricular fibrillation, or its exclusion, is of high importance. A special case exists at railway stations, where electric power-line frequency interference is significant. Many countries, especially in Europe, use 16.7 Hz AC power, which introduces high-level frequency-varying interference that may compromise fibrillation detection. Method Moving signal averaging is often used for 50/60 Hz interference suppression when its effect on the ECG spectrum is of little importance (no morphological analysis is performed). This approach may also be applied to the railway situation if the interference frequency is continuously detected so as to synchronize the analog-to-digital conversion (ADC) by introducing variable inter-sample intervals. A better solution consists of fixed-rate ADC, software frequency measurement, internal irregular re-sampling according to the interference frequency, and a moving average over a constant number of samples, followed by regular back re-sampling. Results The proposed method achieves total cancellation of the railway interference, together with suppression of inherent noise, while the peak amplitudes of some sharp complexes are reduced. This reduction has a negligible effect on accurate fibrillation detection. Conclusion The method was developed in the MATLAB environment and represents a useful tool for real-time railway interference suppression. PMID:16309558
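    The key idea, averaging over exactly one interference period after re-sampling, can be sketched as follows. The sampling rate and window length here are illustrative choices, and the sketch omits the frequency measurement and back re-sampling steps of the full method.

    ```python
    import math

    F_INT = 16.7          # railway power-line frequency, Hz (illustrative)
    N = 60                # samples per interference period after re-sampling
    FS = F_INT * N        # effective sampling rate so one period = N samples

    # Pure 16.7 Hz interference sampled at the period-synchronized rate.
    x = [math.sin(2 * math.pi * F_INT * n / FS) for n in range(5 * N)]

    # A moving average over exactly one interference period (N samples)
    # sums a sinusoid over a full cycle, cancelling it to machine precision.
    y = [sum(x[n - N + 1:n + 1]) / N for n in range(N - 1, len(x))]

    assert max(abs(v) for v in y) < 1e-9
    ```

    With real ECG added to `x`, the same window passes the low-frequency ECG content while notching the interference and its harmonics.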

  16. Sentiment classification technology based on Markov logic networks

    NASA Astrophysics Data System (ADS)

    He, Hui; Li, Zhigang; Yao, Chongchong; Zhang, Weizhe

    2016-07-01

    With diverse online media emerging, there is growing interest in the sentiment classification problem. At present, text sentiment classification mainly utilizes supervised machine learning methods, which exhibit a certain domain dependency. On the basis of Markov logic networks (MLNs), this study proposed a cross-domain multi-task text sentiment classification method rooted in transfer learning. Through many-to-one knowledge transfer, labeled text sentiment classification knowledge was successfully transferred into other domains, and the precision of sentiment classification analysis in the text tendency domain was improved. The experimental results revealed the following: (1) the model based on an MLN demonstrated higher precision than the single individual learning model; (2) multi-task transfer learning based on Markov logic networks could acquire more knowledge than single-domain learning. The cross-domain text sentiment classification model could significantly improve the precision and efficiency of text sentiment classification.

  17. Multiple Spectral-Spatial Classification Approach for Hyperspectral Data

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Benediktsson, Jon Atli; Chanussot, Jocelyn; Tilton, James C.

    2010-01-01

    A new multiple-classifier approach for spectral-spatial classification of hyperspectral images is proposed. Several classifiers are used independently to classify an image. For every pixel, if all the classifiers have assigned this pixel to the same class, the pixel is kept as a marker, i.e., a seed of the spatial region, with the corresponding class label. We propose to use spectral-spatial classifiers at the preliminary step of the marker selection procedure, each of them combining the results of a pixel-wise classification and a segmentation map. Different segmentation methods based on dissimilar principles lead to different classification results. Furthermore, a minimum spanning forest is built, where each tree is rooted on a classification-driven marker and forms a region in the spectral-spatial classification map. Experimental results are presented for two hyperspectral airborne images. The proposed method significantly improves classification accuracies when compared to previously proposed classification techniques.
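    The unanimity-based marker selection step can be sketched on plain label lists; this is a minimal illustration of the voting idea only, not the minimum-spanning-forest stage.

    ```python
    def select_markers(maps):
        """Marker selection by unanimity: a pixel becomes a marker (region
        seed) only if every classification map assigns it the same label;
        disagreeing pixels get None and are labeled later by region growing."""
        return [labels[0] if len(set(labels)) == 1 else None
                for labels in zip(*maps)]
    ```

    For example, three classifiers that agree on the first two pixels but split on the third yield markers only for the first two.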

  18. Speech-Message Extraction from Interference Introduced by External Distributed Sources

    NASA Astrophysics Data System (ADS)

    Kanakov, V. A.; Mironov, N. A.

    2017-08-01

    This study addresses the extraction of a speech signal originating from a certain spatial point and the calculation of the intelligibility of the extracted voice message. The problem is solved by reducing the influence of interfering speech-message sources on the extracted signal. The method is based on introducing time delays, which depend on the spatial coordinates, into the recording channels. Audio recordings of the voices of eight different people were used as test objects during the studies. It is shown that increasing the number of microphones improves the intelligibility of the extracted speech message.
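    Introducing coordinate-dependent delays into the channels and summing amounts to delay-and-sum beamforming. A minimal sketch, assuming the per-channel delays (in samples) toward the focus point are already known:

    ```python
    def delay_and_sum(channels, delays):
        """Delay-and-sum extraction sketch: advance each microphone channel
        by its per-channel delay (in samples) and average. Signals from the
        focus point add coherently; interferers from elsewhere are smeared."""
        n = min(len(ch) - d for ch, d in zip(channels, delays))
        return [sum(ch[d + i] for ch, d in zip(channels, delays)) / len(channels)
                for i in range(n)]
    ```

    With more microphones, the coherent gain of the focused source over spatially spread interferers grows, which matches the reported intelligibility improvement.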

  19. Joint Concept Correlation and Feature-Concept Relevance Learning for Multilabel Classification.

    PubMed

    Zhao, Xiaowei; Ma, Zhigang; Li, Zhi; Li, Zhihui

    2018-02-01

    In recent years, multilabel classification has attracted significant attention in multimedia annotation. However, most multilabel classification methods focus only on the inherent correlations existing among multiple labels and concepts and ignore the relevance between features and the target concepts. To obtain more robust multilabel classification results, we propose a new multilabel classification method that aims to capture the correlations among multiple concepts by leveraging a hypergraph, which has been proved beneficial for relational learning. Moreover, we consider mining feature-concept relevance, which is often overlooked by many multilabel learning algorithms. To better expose the feature-concept relevance, we impose a sparsity constraint on the proposed method. We compare the proposed method with several other multilabel classification methods and evaluate the classification performance by mean average precision on several data sets. The experimental results show that the proposed method outperforms the state-of-the-art methods.

  20. Residual interference assessment in adaptive wall wind tunnels

    NASA Technical Reports Server (NTRS)

    Murthy, A. V.

    1989-01-01

    A two-variable method is presented which is suitable for on-line calculation of residual interference in airfoil testing in the Langley 0.3-Meter Transonic Cryogenic Tunnel (0.3-M TCT). The method applies Cauchy's integral formula to the closed contour formed by the contoured top and bottom walls and the upstream and downstream ends. The measured top and bottom wall pressures and positions are used to calculate the corrections to the test Mach number and the airfoil angle of attack. Application to data obtained in the 0.3-M TCT adaptive wall test section demonstrates the need to assess residual interference to ensure that the desired level of wall streamlining is achieved. A FORTRAN computer program was developed for on-line calculation of the residual corrections during airfoil tests in the 0.3-M TCT.

  1. A psychophysical imaging method evidencing auditory cue extraction during speech perception: a group analysis of auditory classification images.

    PubMed

    Varnet, Léo; Knoblauch, Kenneth; Serniclaes, Willy; Meunier, Fanny; Hoen, Michel

    2015-01-01

    Although there is a large consensus regarding the involvement of specific acoustic cues in speech perception, the precise mechanisms underlying the transformation from continuous acoustical properties into discrete perceptual units remain undetermined. This gap in knowledge is partially due to the lack of a turnkey solution for isolating critical speech cues from natural stimuli. In this paper, we describe a psychoacoustic imaging method known as the Auditory Classification Image technique that allows experimenters to estimate the relative importance of time-frequency regions in categorizing natural speech utterances in noise. Importantly, this technique enables the testing of hypotheses on the listening strategies of participants at the group level. We exemplify this approach by identifying the acoustic cues involved in da/ga categorization with two phonetic contexts, Al- or Ar-. The application of Auditory Classification Images to our group of 16 participants revealed significant critical regions on the second and third formant onsets, as predicted by the literature, as well as an unexpected temporal cue on the first formant. Finally, through a cluster-based nonparametric test, we demonstrate that this method is sufficiently sensitive to detect fine modifications of the classification strategies between different utterances of the same phoneme.
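    At its core, a classification image is a reverse-correlation estimate. A minimal sketch using a plain difference of mean noise fields (the study itself fits a penalized generalized linear model rather than this simple estimator):

    ```python
    def classification_image(noises, responses):
        """Reverse-correlation sketch: subtract the mean noise field on trials
        answered 'B' from the mean on trials answered 'A'. Positive entries
        mark time-frequency bins whose noise energy pushed listeners toward
        response 'A'. Each noise is a flattened list of bin values."""
        def mean_over(label):
            sel = [n for n, r in zip(noises, responses) if r == label]
            return [sum(v) / len(sel) for v in zip(*sel)]

        a, b = mean_over('A'), mean_over('B')
        return [x - y for x, y in zip(a, b)]
    ```

    With many trials, bins carrying the critical acoustic cue emerge as large-magnitude weights in the resulting image.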

  2. Spatial-spectral blood cell classification with microscopic hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Ran, Qiong; Chang, Lan; Li, Wei; Xu, Xiaofeng

    2017-10-01

    Microscopic hyperspectral images provide a new way for blood cell examination. The hyperspectral imagery can greatly facilitate the classification of different blood cells. In this paper, the microscopic hyperspectral images are acquired by connecting the microscope and the hyperspectral imager, and then tested for blood cell classification. For combined use of the spectral and spatial information provided by hyperspectral images, a spatial-spectral classification method is improved from the classical extreme learning machine (ELM) by integrating spatial context into the image classification task with Markov random field (MRF) model. Comparisons are done among ELM, ELM-MRF, support vector machines(SVM) and SVMMRF methods. Results show the spatial-spectral classification methods(ELM-MRF, SVM-MRF) perform better than pixel-based methods(ELM, SVM), and the proposed ELM-MRF has higher precision and show more accurate location of cells.

  3. Coplanar three-beam interference and phase edge dislocations

    NASA Astrophysics Data System (ADS)

    Patorski, Krzysztof; SłuŻewski, Łukasz; Trusiak, Maciej; Pokorski, Krzysztof

    2016-12-01

    We present a comprehensive analysis of grating three-beam interference to determine the range of the ratio between the amplitude A of the +/-1 diffraction orders and the zero-order amplitude C that gives rise to phase edge dislocations. We derive the condition A/C > 0.5 for the occurrence of phase edge dislocations in three-beam interference self-image planes. In the boundary case A/C = 0.5, singularity conditions are met in those planes (once per interference-field period), but the zero-amplitude condition is not accompanied by an abrupt phase change. For A/C > 0.5, two adjacent singularities in a single field period have topological charges of opposite sign. The occurrence of edge dislocations for selected values of A/C was verified by processing fork fringes obtained by introducing a fourth beam in the plane perpendicular to the one containing the three coplanar diffraction orders. Two fork-pattern processing methods are described: 2D CWT (two-dimensional continuous wavelet transform) and 2D spatial differentiation.
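    The role of the A/C = 0.5 boundary can be seen directly from the coplanar three-beam field C + A e^{ikx} + A e^{-ikx} = C + 2A cos(kx): the field (and hence the intensity) can reach zero only when 2A >= C. A quick numerical check of this, illustrative only:

    ```python
    import math

    def min_intensity(A, C, steps=1000):
        """Minimum intensity over one period of the three-beam field
        C + 2A cos(kx). It touches zero, allowing an edge dislocation,
        only when A/C >= 0.5."""
        return min((C + 2 * A * math.cos(2 * math.pi * k / steps)) ** 2
                   for k in range(steps))

    assert min_intensity(0.3, 1.0) > 0.1    # A/C < 0.5: intensity never vanishes
    assert min_intensity(0.5, 1.0) < 1e-4   # A/C = 0.5: field just touches zero
    ```

    For A/C > 0.5 the field crosses zero twice per period, consistent with the two opposite-charge singularities described above.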

  4. Geospatial Method for Computing Supplemental Multi-Decadal U.S. Coastal Land-Use and Land-Cover Classification Products, Using Landsat Data and C-CAP Products

    NASA Technical Reports Server (NTRS)

    Spruce, J. P.; Smoot, James; Ellis, Jean; Hilbert, Kent; Swann, Roberta

    2012-01-01

    This paper discusses the development and implementation of a geospatial data processing method and multi-decadal Landsat time series for computing general coastal U.S. land-use and land-cover (LULC) classifications and change products consisting of seven classes (water, barren, upland herbaceous, non-woody wetland, woody upland, woody wetland, and urban). Use of this approach extends the observational period of the NOAA-generated Coastal Change and Analysis Program (C-CAP) products by almost two decades, assuming the availability of one cloud-free Landsat scene from any season for each targeted year. The Mobile Bay region in Alabama was used as a study area to develop, demonstrate, and validate the method, which was applied to derive LULC products for nine dates at approximately five-year intervals across a 34-year time span, using a single date of data for each classification, in which forests were in leaf-on, leaf-off, or mixed senescent conditions. Classifications were computed and refined using decision rules in conjunction with unsupervised classification of Landsat data and C-CAP value-added products. Each classification's overall accuracy was assessed by comparing stratified random locations to available reference data, including higher-spatial-resolution satellite and aerial imagery, field survey data, and raw Landsat RGBs. Overall classification accuracies ranged from 83 to 91%, with overall Kappa statistics ranging from 0.78 to 0.89. The accuracies are comparable to those from similar, generalized LULC products derived from C-CAP data. The Landsat MSS-based LULC product accuracies are similar to those from Landsat TM or ETM+ data. Accurate classifications were computed for all nine dates, yielding effective results regardless of season. This classification method yielded products that were used to compute LULC change products via additive GIS overlay techniques.

  5. An Ensemble Multilabel Classification for Disease Risk Prediction

    PubMed Central

    Liu, Wei; Zhao, Hongling; Zhang, Chaoyang

    2017-01-01

    It is important to identify and prevent disease risk as early as possible through regular physical examinations. We formulate disease risk prediction as a multilabel classification problem. A novel Ensemble Label Power-set Pruned datasets Joint Decomposition (ELPPJD) method is proposed in this work. First, we transform the multilabel classification into a multiclass classification. Then, we propose the pruned datasets and joint decomposition methods to deal with the imbalanced learning problem. Two strategies, size balanced (SB) and label similarity (LS), are designed to decompose the training dataset. In the experiments, the dataset is drawn from real physical examination records. We contrast the performance of the ELPPJD method with the two different decomposition strategies. Moreover, a comparison between ELPPJD and the classic multilabel classification methods RAkEL and HOMER is carried out. The experimental results show that the ELPPJD method with the label similarity strategy has outstanding performance. PMID:29065647
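    The first ELPPJD step, turning the multilabel problem into a multiclass one, is the label power-set transform; a minimal sketch (pruning and joint decomposition omitted, function name illustrative):

    ```python
    def label_powerset(Y):
        """Label power-set transform: map each distinct combination of labels
        to a single multiclass id, so a multilabel training set can be fed
        to an ordinary multiclass learner. Y is a list of label sets."""
        table = {}   # frozenset of labels -> multiclass id
        ids = []
        for labels in Y:
            key = frozenset(labels)
            ids.append(table.setdefault(key, len(table)))
        return ids, table
    ```

    Rare combinations produce tiny classes, which is the imbalance that the pruning and decomposition strategies described above are designed to address.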

  6. Application of FT-IR Classification Method in Silica-Plant Extracts Composites Quality Testing

    NASA Astrophysics Data System (ADS)

    Bicu, A.; Drumea, V.; Mihaiescu, D. E.; Purcareanu, B.; Florea, M. A.; Trică, B.; Vasilievici, G.; Draga, S.; Buse, E.; Olariu, L.

    2018-06-01

    Our present work concerns the validation and quality testing of mesoporous silica-plant extract composites, in order to support the standardization of plant-based pharmaceutical products. The synthesis of the silica support was performed using a TEOS-based synthetic route with CTAB as a template, at room temperature and normal pressure. The silica support was analyzed by advanced characterization methods (SEM, TEM, BET, DLS and FT-IR) and loaded with standardized Calendula officinalis and Salvia officinalis extracts. Further desorption studies were performed in order to prove the sustained-release properties of the final materials. Intermediate and final product identification was performed by an FT-IR classification method, using the MID range of the IR spectra and statistically representative samples from repeated synthetic stages. The obtained results recommend this analytical method as a fast and cost-effective alternative to the classic identification methods.

  7. CCSDS - SFCG Efficient Modulation Methods Study at NASA/JPL - Phase 4: Interference Susceptibility

    NASA Technical Reports Server (NTRS)

    Martin, W.; Yan, T. Y.; Gray, A.; Lee, D.

    1999-01-01

    Susceptibility to two types of interfering signals was requested by the SFCG: a pure carrier (single-frequency tone) and wide-band RFI (characteristics unspecified). Selecting a broad-band interfering signal is difficult because it should represent the types of interference found in the space science service bands.

  8. A Full-Core Resonance Self-Shielding Method Using a Continuous-Energy Quasi–One-Dimensional Slowing-Down Solution that Accounts for Temperature-Dependent Fuel Subregions and Resonance Interference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yuxuan; Martin, William; Williams, Mark

    In this paper, a correction-based resonance self-shielding method is developed that allows annular subdivision of the fuel rod. The method performs the conventional iteration of the embedded self-shielding method (ESSM) without subdivision of the fuel to capture the interpin shielding effect. The resultant self-shielded cross sections are modified by correction factors incorporating the intrapin effects of radial variation of the shielded cross section, radial temperature distribution, and resonance interference. A quasi-one-dimensional slowing-down equation is developed to calculate these correction factors. The method is implemented in the DeCART code and compared with the conventional ESSM and the subgroup method against benchmark MCNP results. The new method yields substantially improved results for both spatially dependent reaction rates and eigenvalues for typical pressurized water reactor pin cell cases with uniform and nonuniform fuel temperature profiles. Finally, the new method also proves effective in treating assembly heterogeneity and complex material compositions such as mixed oxide fuel, where resonance interference is much more intense.

  9. Random forests for classification in ecology

    USGS Publications Warehouse

    Cutler, D.R.; Edwards, T.C.; Beard, K.H.; Cutler, A.; Hess, K.T.; Gibson, J.; Lawler, J.J.

    2007-01-01

    Classification procedures are some of the most widely used statistical methods in ecology. Random forests (RF) is a new and powerful statistical classifier that is well established in other disciplines but is relatively unknown in ecology. Advantages of RF compared to other statistical classifiers include (1) very high classification accuracy; (2) a novel method of determining variable importance; (3) ability to model complex interactions among predictor variables; (4) flexibility to perform several types of statistical data analysis, including regression, classification, survival analysis, and unsupervised learning; and (5) an algorithm for imputing missing values. We compared the accuracies of RF and four other commonly used statistical classifiers using data on invasive plant species presence in Lava Beds National Monument, California, USA, rare lichen species presence in the Pacific Northwest, USA, and nest sites for cavity nesting birds in the Uinta Mountains, Utah, USA. We observed high classification accuracy in all applications as measured by cross-validation and, in the case of the lichen data, by independent test data, when comparing RF to other common classification methods. We also observed that the variables that RF identified as most important for classifying invasive plant species coincided with expectations based on the literature. © 2007 by the Ecological Society of America.

  10. Solid phase excitation-emission fluorescence method for the classification of complex substances: Cortex Phellodendri and other traditional Chinese medicines as examples.

    PubMed

    Gu, Yao; Ni, Yongnian; Kokot, Serge

    2012-09-13

    A novel, simple and direct fluorescence method for analysis of complex substances and their potential substitutes has been researched and developed. Measurements involved excitation and emission (EEM) fluorescence spectra of powdered, complex, medicinal herbs, Cortex Phellodendri Chinensis (CPC) and the similar Cortex Phellodendri Amurensis (CPA); these substances were compared and discriminated from each other and the potentially adulterated samples (Caulis mahoniae (CM) and David poplar bark (DPB)). Different chemometrics methods were applied for resolution of the complex spectra, and the excitation spectra were found to be the most informative; only the rank-ordering PROMETHEE method was able to classify the samples with single ingredients (CPA, CPC, CM) or those with binary mixtures (CPA/CPC, CPA/CM, CPC/CM). Interestingly, it was essential to use the geometrical analysis for interactive aid (GAIA) display for a full understanding of the classification results. However, these two methods, like the other chemometrics models, were unable to classify composite spectral matrices consisting of data from samples of single ingredients and binary mixtures; this suggested that the excitation spectra of the different samples were very similar. However, the method is useful for classification of single-ingredient samples and, separately, their binary mixtures; it may also be applied for similar classification work with other complex substances.

  11. Semantic classification of business images

    NASA Astrophysics Data System (ADS)

    Erol, Berna; Hull, Jonathan J.

    2006-01-01

    Digital cameras are becoming increasingly common for capturing information in business settings. In this paper, we describe a novel method for classifying images into the following semantic classes: document, whiteboard, business card, slide, and regular images. Our method is based on combining low-level image features, such as text color, layout, and handwriting features with high-level OCR output analysis. Several Support Vector Machine Classifiers are combined for multi-class classification of input images. The system yields 95% accuracy in classification.

  12. Applied Chaos Level Test for Validation of Signal Conditions Underlying Optimal Performance of Voice Classification Methods

    ERIC Educational Resources Information Center

    Liu, Boquan; Polce, Evan; Sprott, Julien C.; Jiang, Jack J.

    2018-01-01

    Purpose: The purpose of this study is to introduce a chaos level test to evaluate linear and nonlinear voice type classification method performances under varying signal chaos conditions without subjective impression. Study Design: Voice signals were constructed with differing degrees of noise to model signal chaos. Within each noise power, 100…

  13. Feature-Free Activity Classification of Inertial Sensor Data With Machine Vision Techniques: Method, Development, and Evaluation

    PubMed Central

    O'Reilly, Martin; Whelan, Darragh; Caulfield, Brian; Ward, Tomas E

    2017-01-01

    Background Inertial sensors are one of the most commonly used sources of data for human activity recognition (HAR) and exercise detection (ED) tasks. The time series produced by these sensors are generally analyzed through numerical methods. Machine learning techniques such as random forests or support vector machines are popular in this field for classification efforts, but they need to be supported through the isolation of a potentially large number of additionally crafted features derived from the raw data. This feature preprocessing step can involve nontrivial digital signal processing (DSP) techniques. However, in many cases, the researchers interested in this type of activity recognition problem do not possess the necessary technical background for this feature-set development. Objective The study aimed to present a novel application of established machine vision methods to provide interested researchers with an easier entry path into the HAR and ED fields. This can be achieved by removing the need for deep DSP skills through the use of transfer learning, using a pretrained convolutional neural network (CNN) developed for machine vision purposes for the exercise classification effort. The new method simply requires researchers to generate plots of the signals that they would like to build classifiers with, store them as images, and then place them in folders according to their training label before retraining the network. Methods We applied a CNN, an established machine vision technique, to the task of ED. TensorFlow, a high-level framework for machine learning, was used to facilitate infrastructure needs. Simple time series plots generated directly from accelerometer and gyroscope signals are used to retrain an openly available neural network (Inception), originally developed for machine vision tasks. Data from 82 healthy volunteers, performing 5 different exercises while wearing a lumbar-worn inertial measurement unit (IMU), was
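    The proposed entry path is literally "plot the signal and save it as an image." A dependency-free sketch that rasterizes a trace into a binary pixel grid conveys that first step (the matplotlib plotting and Inception retraining from the paper are not reproduced here):

```python
import math

def signal_to_image(signal, height=32, width=64):
    """Rasterize a 1-D signal into a height x width binary 'plot' image."""
    lo, hi = min(signal), max(signal)
    span = (hi - lo) or 1.0
    img = [[0] * width for _ in range(height)]
    for col in range(width):
        # nearest-sample resampling of the signal to the image width
        i = col * (len(signal) - 1) // (width - 1)
        row = int((signal[i] - lo) / span * (height - 1))
        img[height - 1 - row][col] = 1   # row 0 at the top, like an image
    return img

# hypothetical accelerometer trace
trace = [math.sin(2 * math.pi * t / 50) for t in range(200)]
img = signal_to_image(trace)
```

    Each image would then be written to a folder named for its exercise label before the network is retrained.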

  14. An evaluation of deficits in semantic cueing and proactive and retroactive interference as early features of Alzheimer's disease.

    PubMed

    Crocco, Elizabeth; Curiel, Rosie E; Acevedo, Amarilis; Czaja, Sara J; Loewenstein, David A

    2014-09-01

    To determine the degree to which susceptibility to different types of semantic interference may reflect the initial manifestations of early Alzheimer's disease (AD) beyond the effects of global memory impairment. Normal elderly (NE) subjects (n = 47), subjects with amnestic mild cognitive impairment (aMCI; n = 34), and subjects with probable AD (n = 40) were evaluated by using a unique cued recall paradigm that allowed for evaluation of both proactive and retroactive interference effects while controlling for global memory impairment (i.e., Loewenstein-Acevedo Scales of Semantic Interference and Learning [LASSI-L] procedure). Controlling for overall memory impairment, aMCI subjects had much greater proactive and retroactive interference effects than NE subjects. LASSI-L indices of learning by using cued recall revealed high levels of sensitivity and specificity, with an overall correct classification rate of 90%. These measures provided better discrimination than traditional neuropsychological measures of memory function. The LASSI-L paradigm is unique and unlike other assessments of memory in that items posed for cued recall are explicitly presented, and semantic interference and cueing effects can be assessed while controlling for initial level of memory impairment. This is a powerful procedure that allows the participant to serve as his or her own control. The high levels of discrimination between subjects with aMCI and normal cognition that exceeded traditional neuropsychological measures makes the LASSI-L worthy of further research in the detection of early AD. Copyright © 2014 American Association for Geriatric Psychiatry. Published by Elsevier Inc. All rights reserved.

  15. Learning classification models with soft-label information.

    PubMed

    Nguyen, Quang; Valizadegan, Hamed; Hauskrecht, Milos

    2014-01-01

    Learning of classification models in medicine often relies on data labeled by a human expert. Since labeling of clinical data may be time-consuming, finding ways of alleviating the labeling costs is critical for our ability to automatically learn such models. In this paper we propose a new machine learning approach that is able to learn improved binary classification models more efficiently by refining the binary class information in the training phase with soft labels that reflect how strongly the human expert feels about the original class labels. Two types of methods that can learn improved binary classification models from soft labels are proposed. The first relies on probabilistic/numeric labels, the other on ordinal categorical labels. We study and demonstrate the benefits of these methods for learning an alerting model for heparin induced thrombocytopenia. The experiments are conducted on the data of 377 patient instances labeled by three different human experts. The methods are compared using the area under the receiver operating characteristic curve (AUC) score. Our AUC results show that the new approach is capable of learning classification models more efficiently compared to traditional learning methods. The improvement in AUC is most remarkable when the number of examples we learn from is small. A new classification learning framework that lets us learn from auxiliary soft-label information provided by a human expert is a promising new direction for learning classification models from expert labels, reducing the time and cost needed to label data.
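    One way to exploit probabilistic soft labels (a generic sketch, not necessarily the paper's estimator) is to train logistic regression against the expert's confidences instead of hard 0/1 labels; the cross-entropy gradient has the same form either way:

```python
import math

def train_soft_logreg(X, soft_targets, lr=0.5, epochs=2000):
    """Gradient-descent logistic regression on soft (probabilistic) labels.

    Each target is the expert's confidence in the positive class (0..1),
    not a hard 0/1 label.
    """
    w = [0.0] * (len(X[0]) + 1)            # bias + one weight per feature
    for _ in range(epochs):
        for x, t in zip(X, soft_targets):
            z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - t                       # d(cross-entropy)/dz, soft target t
            w[0] -= lr * g
            for i, xi in enumerate(x):
                w[i + 1] -= lr * g * xi
    return w

def predict(w, x):
    z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
    return 1.0 / (1.0 + math.exp(-z))

# hypothetical data: expert is fairly sure at the extremes, uncertain mid-range
X = [[-2.0], [-1.0], [0.0], [1.0], [2.0]]
t = [0.05, 0.2, 0.5, 0.8, 0.95]
w = train_soft_logreg(X, t)
```

    With hard labels the mid-range example would be forced to 0 or 1; the soft target lets the model keep its uncertainty there.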

  16. Development of a method of exposed characteristic points in activity pattern for rat behaviour classification

    NASA Astrophysics Data System (ADS)

    Stefko, Kamil; Bukowski, Tomasz; Urbański, Michał

    2012-03-01

    A fast method for visual inspection and classification of massive locomotor activity data registered from laboratory rats is presented. Positions in the home cage of one hundred rats have been constantly recorded during 90 day period using photodiodes and beam crossing method with use of custom build system. Direct inspection and comparison of classic form of actograms did not bring information for fast and easy recognition of anomalies in daily behavioural cycle. A method of obtaining fast and easy to compare locomotor activity pattern is presented. The key point of proposed method is exposition of characteristic points in the activity diagram. About 9000 actograms were inspected and classified for investigation with use of ANOVA.

  17. Retinal vasculature classification using novel multifractal features

    NASA Astrophysics Data System (ADS)

    Ding, Y.; Ward, W. O. C.; Duan, Jinming; Auer, D. P.; Gowland, Penny; Bai, L.

    2015-11-01

    Retinal blood vessels have been implicated in a large number of diseases, including diabetic retinopathy and cardiovascular diseases, which cause damage to retinal blood vessels. The availability of retinal vessel imaging provides an excellent opportunity for monitoring and diagnosis of retinal diseases, and automatic analysis of retinal vessels will help with these processes. However, state-of-the-art vascular analysis methods, such as counting the number of branches or measuring the curvature and diameter of individual vessels, are unsuitable for the microvasculature. Research has been published using fractal analysis to calculate fractal dimensions of retinal blood vessels, but so far there has been no systematic research extracting discriminant features from retinal vessels for classification. This paper introduces new methods for feature extraction from multifractal spectra of retinal vessels for classification. Two publicly available retinal vascular image databases are used for the experiments, and the proposed methods have produced accuracies of 85.5% and 77% for classification of healthy and diabetic retinal vasculatures. Experiments show that classification with multiple fractal features produces better rates than methods using a single fractal dimension value. In addition, experiments show that classification accuracy can be affected by the accuracy of the vessel segmentation algorithms.
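    Fractal features start from the box-counting dimension, which the multifractal spectrum generalizes to a family of exponents. A minimal box-counting sketch on a toy point set:

```python
import math

def box_counting_dimension(points, box_sizes=(1, 2, 4, 8)):
    """Estimate fractal dimension as the slope of log N(s) vs log(1/s),
    where N(s) is the number of s x s boxes touched by the point set."""
    xs, ys = [], []
    for s in box_sizes:
        boxes = {(int(px // s), int(py // s)) for px, py in points}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    # ordinary least-squares slope
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# a densely sampled straight segment should have dimension ~1;
# a vessel tree would fall between 1 and 2
line = [(0.05 * i, 0.05 * i) for i in range(2000)]
d = box_counting_dimension(line)
```

    The paper's point is that a single such exponent discards information; features drawn from the full multifractal spectrum discriminate better.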

  18. Classification Techniques for Digital Map Compression

    DTIC Science & Technology

    1989-03-01

    classification improved the performance of the K-means classification algorithm, resulting in a compression of 8.06:1 with Lempel-Ziv coding. Run-length coding... compression performance are run-length coding [2], [8] and Lempel-Ziv coding [10], [11]. These techniques are chosen because they are most efficient when...investigated. After the classification, some standard file compression methods, such as Lempel-Ziv and run-length encoding, were applied to the
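    Run-length coding, one of the back-end coders named in these excerpts, reduces each row of a classified map to (value, count) pairs; a minimal sketch with hypothetical class-label data:

```python
def rle_encode(data):
    """Run-length encode a sequence into (value, count) pairs."""
    runs = []
    for v in data:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    return [v for v, n in runs for _ in range(n)]

# one row of a classified map: long runs of a single class compress well
row = [0] * 40 + [3] * 25 + [0] * 35
runs = rle_encode(row)
```

    Classification helps precisely because it lengthens these runs: 100 pixels collapse to 3 pairs here.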

  19. Optimal Methods for Classification of Digitally Modulated Signals

    DTIC Science & Technology

    2013-03-01

    of using a ratio of likelihood functions, the proposed approach uses the Kullback-Leibler (KL) divergence. KL...58 List of Acronyms ALRT Average LRT BPSK Binary Shift Keying BPSK-SS BPSK Spread Spectrum or CDMA DKL Kullback-Leibler Information Divergence...blind demodulation to develop classification algorithms for a wider set of signal types. Two methodologies were used: Likelihood Ratio Test
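    Using KL divergence instead of a likelihood ratio amounts to picking the candidate modulation whose model distribution is closest (in KL) to the empirical one. A generic minimum-divergence sketch with hypothetical modulation histograms, not the report's actual feature set:

```python
import math

def kl_divergence(p, q):
    """D_KL(p || q) for discrete distributions (q nonzero wherever p is)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def classify_min_kl(empirical, models):
    """Pick the model distribution with minimum divergence from the data."""
    return min(models, key=lambda name: kl_divergence(empirical, models[name]))

# toy histograms over a 4-bin signal feature (hypothetical)
models = {
    "BPSK": [0.45, 0.05, 0.05, 0.45],
    "QPSK": [0.25, 0.25, 0.25, 0.25],
}
observed = [0.40, 0.10, 0.08, 0.42]
label = classify_min_kl(observed, models)
```

    KL divergence is zero only when the two distributions match, so a perfect fit always wins.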

  20. Multinomial mixture model with heterogeneous classification probabilities

    USGS Publications Warehouse

    Holland, M.D.; Gray, B.R.

    2011-01-01

    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of multinomial and correct classification probabilities when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.
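    The underlying algebra: observed category proportions are the true proportions passed through a classification-probability (confusion) matrix, so with known, constant probabilities the true proportions invert in closed form. A two-category sketch with hypothetical rates (the paper's point is that this breaks down when the probabilities vary among sampling units):

```python
def correct_two_category(p_obs, p_correct_1, p_correct_2):
    """Invert the 2-category misclassification model.

    p_obs       -- observed proportion recorded as category 1
    p_correct_1 -- P(recorded 1 | truly 1)
    p_correct_2 -- P(recorded 2 | truly 2)
    Model: p_obs = p_true * p_correct_1 + (1 - p_true) * (1 - p_correct_2)
    """
    return (p_obs - (1 - p_correct_2)) / (p_correct_1 - (1 - p_correct_2))

# hypothetical truth of 0.6 recorded through 90%/80% correct-classification rates
p_true = 0.6
p_obs = p_true * 0.9 + (1 - p_true) * (1 - 0.8)   # forward model
p_hat = correct_two_category(p_obs, 0.9, 0.8)
```

    Plugging unit-averaged rates into this formula when the rates actually vary is exactly the source of the bias the elaborated model removes.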

  1. Binaural Interference: Quo Vadis?

    PubMed

    Jerger, James; Silman, Shlomo; Silverman, Carol; Emmer, Michele

    2017-04-01

    The reality of the phenomenon of binaural interference with speech recognition has been debated for two decades. Research has taken one of two avenues: group studies or case reports. In group studies, a sample of the elderly population is tested on speech recognition under three conditions: binaural, monaural right, and monaural left. The aim is to determine the percent of the sample in which the expected outcome (binaural score better than either monaural score) is reversed (i.e., one of the monaural scores is better than the binaural score). This outcome has been commonly used to define binaural interference. The object of group studies is to answer the "how many" question: what is the prevalence of binaural interference in the sample. In case reports, the binaural interference conclusion suggested by the speech recognition tests is not accepted until it has been corroborated by other independent diagnostic audiological measures. The aim is to attempt to determine the basis for the findings, to answer the "why" question. This article is at once tutorial, editorial, and a case report. We argue that it is time to accept the reality of the phenomenon of binaural interference, to eschew group statistical approaches in search of an answer to the "how many" question, and to focus on individual case reports in search of an answer to the "why" question. American Academy of Audiology.

  2. Interference of medical contrast media on laboratory testing.

    PubMed

    Lippi, Giuseppe; Daves, Massimo; Mattiuzzi, Camilla

    2014-01-01

    The use of contrast media such as organic iodine molecules and gadolinium contrast agents is commonplace in diagnostic imaging. Although there is widespread perception that side effects and drug interactions may be the leading problems caused by these compounds, various degrees of interference with some laboratory tests have been clearly demonstrated. Overall, the described interferences for iodinated contrast media include inappropriate gel barrier formation in blood tubes, the appearance of abnormal peaks in capillary zone electrophoresis of serum proteins, and a positive bias in assessment of cardiac troponin I with one immunoassay. The interferences for gadolinium contrast agents include negative bias in calcium assessment with ortho-cresolphthalein colorimetric assays and occasional positive bias using some Arsenazo reagents, negative bias in measurement of angiotensin converting enzyme (ACE) and zinc (colorimetric assay), as well as positive bias in creatinine (Jaffe reaction), total iron binding capacity (TIBC, ferrozine method), magnesium (calmagite reagent) and selenium (mass spectrometry) measurement. Interference has also been reported in assessment of serum indices, pulse oximetry and methaemoglobin in samples of patients receiving Patent Blue V. Under several circumstances the interference was absent from manufacturer-supplied information and limited to certain types of reagents and/or analytes, so that local verification may be advisable to establish whether or not the test in use may be biased. Since the elimination half-life of these compounds is typically shorter than 2 h, blood collection after this period may be a safer alternative in patients who have received contrast media for diagnostic purposes.

  3. Interference of medical contrast media on laboratory testing

    PubMed Central

    Lippi, Giuseppe; Daves, Massimo; Mattiuzzi, Camilla

    2014-01-01

    The use of contrast media such as organic iodine molecules and gadolinium contrast agents is commonplace in diagnostic imaging. Although there is widespread perception that side effects and drug interactions may be the leading problems caused by these compounds, various degrees of interference with some laboratory tests have been clearly demonstrated. Overall, the described interferences for iodinated contrast media include inappropriate gel barrier formation in blood tubes, the appearance of abnormal peaks in capillary zone electrophoresis of serum proteins, and a positive bias in assessment of cardiac troponin I with one immunoassay. The interferences for gadolinium contrast agents include negative bias in calcium assessment with ortho-cresolphthalein colorimetric assays and occasional positive bias using some Arsenazo reagents, negative bias in measurement of angiotensin converting enzyme (ACE) and zinc (colorimetric assay), as well as positive bias in creatinine (Jaffe reaction), total iron binding capacity (TIBC, ferrozine method), magnesium (calmagite reagent) and selenium (mass spectrometry) measurement. Interference has also been reported in assessment of serum indices, pulse oximetry and methaemoglobin in samples of patients receiving Patent Blue V. Under several circumstances the interference was absent from manufacturer-supplied information and limited to certain types of reagents and/or analytes, so that local verification may be advisable to establish whether or not the test in use may be biased. Since the elimination half-life of these compounds is typically shorter than 2 h, blood collection after this period may be a safer alternative in patients who have received contrast media for diagnostic purposes. PMID:24627717

  4. An experimental study of wall adaptation and interference assessment using Cauchy integral formula

    NASA Technical Reports Server (NTRS)

    Murthy, A. V.

    1991-01-01

    This paper summarizes the results of an experimental study of combined wall adaptation and residual interference assessment using the Cauchy integral formula. The experiments were conducted on a supercritical airfoil model in the Langley 0.3-m Transonic Cryogenic Tunnel solid flexible wall test section. The ratio of model chord to test section height was about 0.7. The method worked satisfactorily in reducing the blockage interference and demonstrated the primary requirement of correcting for blockage effects at high model incidences in order to determine high-lift characteristics correctly. The studies show that the method has potential for reducing the residual interference to considerably low levels. However, corrections for blockage and upwash velocity gradients may still be required for the final adapted wall shapes.

  5. Measurement of the configuration of a concave surface by the interference of reflected light

    NASA Technical Reports Server (NTRS)

    Kumazawa, T.; Sakamoto, T.; Shida, S.

    1985-01-01

    A method whereby a concave surface is irradiated with coherent light and the resulting interference fringes yield information on the concave surface is described. This method can be applied to a surface which satisfies the following conditions: (1) the concave face has a mirror surface; (2) the profile of the face is expressed by a mathematical function with a point of inflection. In this interferometry, the multiple light waves reflected from the concave surface interfere and make fringes wherever the reflected light propagates, and the interference fringe orders give the optical path differences. Photographs of the fringe patterns for a uniformly loaded thin silicon plate clamped at the edge are shown. The experimental and theoretical values of the maximum optical path difference show good agreement. This simple method can be applied to obtain accurate information on concave surfaces.

  6. Naïve Bayes classification in R.

    PubMed

    Zhang, Zhongheng

    2016-06-01

    Naïve Bayes classification is a simple probabilistic classification method based on Bayes' theorem with the assumption of independence between features. The model is trained on a training dataset and makes predictions via the predict() function. This article introduces the two functions naiveBayes() and train() for performing Naïve Bayes classification.
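    The functions named here are from R; a minimal Python analogue of Gaussian naïve Bayes (an illustrative sketch, not the R implementation) makes the independence assumption concrete: each feature contributes an independent per-class Gaussian log-likelihood.

```python
import math
from collections import defaultdict

def nb_fit(X, y):
    """Per-class, per-feature Gaussian parameters plus class priors."""
    by_class = defaultdict(list)
    for x, c in zip(X, y):
        by_class[c].append(x)
    model = {}
    for c, rows in by_class.items():
        stats = []
        for feat in zip(*rows):                      # one column at a time
            mu = sum(feat) / len(feat)
            var = sum((v - mu) ** 2 for v in feat) / len(feat) + 1e-9
            stats.append((mu, var))
        model[c] = (len(rows) / len(X), stats)
    return model

def nb_predict(model, x):
    def log_post(c):
        prior, stats = model[c]
        # independence assumption: log-likelihoods simply add up
        return math.log(prior) + sum(
            -0.5 * math.log(2 * math.pi * var) - (v - mu) ** 2 / (2 * var)
            for v, (mu, var) in zip(x, stats))
    return max(model, key=log_post)

# hypothetical two-class training data
X = [[1.0, 1.1], [1.2, 0.9], [5.0, 5.2], [5.1, 4.9]]
y = ["a", "a", "b", "b"]
model = nb_fit(X, y)
```

    R's `naiveBayes()` from the e1071 package fits the same per-class Gaussians for numeric predictors.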

  7. Shear wave speed recovery using moving interference patterns obtained in sonoelastography experiments.

    PubMed

    McLaughlin, Joyce; Renzi, Daniel; Parker, Kevin; Wu, Zhe

    2007-04-01

    Two new experiments were created to characterize the elasticity of soft tissue using sonoelastography. In both experiments the spectral variance image displayed on a GE LOGIC 700 ultrasound machine shows a moving interference pattern that travels at a very small fraction of the shear wave speed. The goal of this paper is to devise and test algorithms to calculate the speed of the moving interference pattern using the arrival times of these same patterns. A geometric optics expansion is used to obtain Eikonal equations relating the moving interference pattern arrival times to the moving interference pattern speed and then to the shear wave speed. A cross-correlation procedure is employed to find the arrival times; and an inverse Eikonal solver called the level curve method computes the speed of the interference pattern. The algorithm is tested on data from a phantom experiment performed at the University of Rochester Center for Biomedical Ultrasound.
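    The arrival-time step rests on cross-correlation; a brute-force sketch estimating the lag at which a known pattern arrives in a signal (noise-free toy data, not the sonoelastography recordings):

```python
def xcorr_delay(signal, template):
    """Lag (in samples) at which the template best aligns with the signal,
    found by brute-force cross-correlation."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(-(len(template) - 1), len(signal)):
        s = 0.0
        for j, t in enumerate(template):
            i = j + lag
            if 0 <= i < len(signal):
                s += signal[i] * t
        if s > best_val:
            best_val, best_lag = s, lag
    return best_lag

pulse = [0.0, 1.0, 2.0, 1.0, 0.0]
signal = [0.0] * 7 + pulse + [0.0] * 8   # the pattern arrives 7 samples in
lag = xcorr_delay(signal, pulse)
```

    Arrival times recovered this way across the image feed the inverse Eikonal (level-curve) solver that maps them to interference-pattern speed.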

  8. Radiographic classifications in Perthes disease

    PubMed Central

    Huhnstock, Stefan; Svenningsen, Svein; Merckoll, Else; Catterall, Anthony; Terjesen, Terje; Wiig, Ola

    2017-01-01

    Background and purpose Different radiographic classifications have been proposed for prediction of outcome in Perthes disease. We assessed whether the modified lateral pillar classification would provide more reliable interobserver agreement and prognostic value compared with the original lateral pillar classification and the Catterall classification. Patients and methods 42 patients (38 boys) with Perthes disease were included in the interobserver study. Their mean age at diagnosis was 6.5 (3–11) years. 5 observers classified the radiographs in 2 separate sessions according to the Catterall classification, the original and the modified lateral pillar classifications. Interobserver agreement was analysed using weighted kappa statistics. We assessed the associations between the classifications and femoral head sphericity at 5-year follow-up in 37 non-operatively treated patients in a crosstable analysis (Gamma statistics for ordinal variables, γ). Results The original lateral pillar and Catterall classifications showed moderate interobserver agreement (kappa 0.49 and 0.43, respectively) while the modified lateral pillar classification had fair agreement (kappa 0.40). The original lateral pillar classification was strongly associated with the 5-year radiographic outcome, with a mean γ correlation coefficient of 0.75 (95% CI: 0.61–0.95) among the 5 observers. The modified lateral pillar and Catterall classifications showed moderate associations (mean γ correlation coefficient 0.55 [95% CI: 0.38–0.66] and 0.64 [95% CI: 0.57–0.72], respectively). Interpretation The Catterall classification and the original lateral pillar classification had sufficient interobserver agreement and association to late radiographic outcome to be suitable for clinical use. Adding the borderline B/C group did not increase the interobserver agreement or prognostic value of the original lateral pillar classification. PMID:28613966
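    Interobserver agreement here is measured with weighted kappa. A sketch for two raters on an ordinal scale of k categories, using linear weights (the abstract does not specify the weighting scheme, so linear weights are an assumption):

```python
def weighted_kappa(rater1, rater2, k):
    """Linearly weighted kappa for two raters; categories coded 0..k-1."""
    n = len(rater1)
    # linear disagreement weights: 1 on the diagonal, 0 at maximum distance
    w = [[1 - abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(rater1, rater2):
        obs[a][b] += 1.0 / n
    m1 = [sum(obs[i][j] for j in range(k)) for i in range(k)]  # rater-1 margins
    m2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]  # rater-2 margins
    po = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    pe = sum(w[i][j] * m1[i] * m2[j] for i in range(k) for j in range(k))
    return (po - pe) / (1 - pe)

perfect  = weighted_kappa([0, 1, 2, 1, 0], [0, 1, 2, 1, 0], k=3)
reversed_ = weighted_kappa([0, 1, 2, 1, 0], [2, 1, 0, 1, 2], k=3)
```

    Values around 0.4-0.6, as reported above, indicate moderate chance-corrected agreement.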

  9. 49 CFR 193.2633 - Interference currents.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...: FEDERAL SAFETY STANDARDS Maintenance § 193.2633 Interference currents. (a) Each component that is subject to electrical current interference must be protected by a continuing program to minimize the... 49 Transportation 3 2012-10-01 2012-10-01 false Interference currents. 193.2633 Section 193.2633...

  10. 49 CFR 193.2633 - Interference currents.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...: FEDERAL SAFETY STANDARDS Maintenance § 193.2633 Interference currents. (a) Each component that is subject to electrical current interference must be protected by a continuing program to minimize the... 49 Transportation 3 2014-10-01 2014-10-01 false Interference currents. 193.2633 Section 193.2633...

  11. 49 CFR 193.2633 - Interference currents.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...: FEDERAL SAFETY STANDARDS Maintenance § 193.2633 Interference currents. (a) Each component that is subject to electrical current interference must be protected by a continuing program to minimize the... 49 Transportation 3 2013-10-01 2013-10-01 false Interference currents. 193.2633 Section 193.2633...

  12. 49 CFR 193.2633 - Interference currents.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...: FEDERAL SAFETY STANDARDS Maintenance § 193.2633 Interference currents. (a) Each component that is subject to electrical current interference must be protected by a continuing program to minimize the... 49 Transportation 3 2011-10-01 2011-10-01 false Interference currents. 193.2633 Section 193.2633...

  13. Comparison of classification algorithms for various methods of preprocessing radar images of the MSTAR base

    NASA Astrophysics Data System (ADS)

    Borodinov, A. A.; Myasnikov, V. V.

    2018-04-01

    The present work is devoted to comparing the accuracy of known classification algorithms in the task of recognizing local objects in radar images under various image preprocessing methods. Preprocessing involves speckle noise filtering and normalization of the object orientation in the image by the method of image moments and by a method based on the Hough transform. The following classification algorithms are compared: decision tree, support vector machine, AdaBoost, and random forest. Principal component analysis is used to reduce the dimension. The research is carried out on objects from the MSTAR radar image database. The paper presents the results of the conducted studies.
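    Orientation normalization by the method of image moments can be sketched as a principal-axis estimate from central second moments (a generic illustration on a toy binary image, not the paper's pipeline):

```python
import math

def orientation_from_moments(img):
    """Orientation (radians, array coordinates) of an object's principal
    axis from central second-order image moments."""
    pts = [(c, r, v) for r, row in enumerate(img)
           for c, v in enumerate(row) if v]
    m00 = sum(v for _, _, v in pts)
    xbar = sum(x * v for x, _, v in pts) / m00
    ybar = sum(y * v for _, y, v in pts) / m00
    mu20 = sum(v * (x - xbar) ** 2 for x, y, v in pts)
    mu02 = sum(v * (y - ybar) ** 2 for x, y, v in pts)
    mu11 = sum(v * (x - xbar) * (y - ybar) for x, y, v in pts)
    # standard principal-axis formula
    return 0.5 * math.atan2(2 * mu11, mu20 - mu02)

# toy object: a diagonal line, principal axis at 45 degrees
diag = [[1 if r == c else 0 for c in range(8)] for r in range(8)]
theta = orientation_from_moments(diag)
```

    Rotating each chip by `-theta` before feature extraction removes pose as a nuisance variable for the downstream classifiers.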

  14. Physical and Mathematical Methods for Removing Organic Interference from Optical Isotope Measurements

    NASA Astrophysics Data System (ADS)

    Hsiao, G.; Chappellet-Volini, L.; Vu, D.

    2012-12-01

    Portable high-precision isotope analyzers using CRDS technology have greatly increased the use of stable isotopes in hydrological, oceanographic, and ecological studies over the past five years. However, studies of some water samples yielded incorrect isotopic values, indicating some form of spectroscopic interference. Subsequent work has shown that waters derived from some plants contain interfering alcohols, whereas meteoric waters are not affected. The initial approach to handling such samples was to use spectroscopic anomalies to identify and flag affected samples for later analysis by non-optical methods. This presentation will examine the approaches developed within the past year to allow accurate analysis of such samples by optical methods. The first approach uses an advanced spectroscopic model to identify and quantify alcohols present in the sample. The alcohol signal is incorporated into the overall fit of the measured spectra to calculate the concentration of the individual isotopes. It was found that the δ18O value could be calculated with high accuracy, and the result for the δ2H value was sufficient for many applications. The second approach uses physical treatment of the sample to break down the organic molecules into non-interfering species. The liquid sample is injected into a flash vaporizer, and the vapor then travels through a cartridge for physical treatment prior to analysis by CRDS. Inside the cartridge, the organic molecules undergo oxidation at high temperature in the air carrier gas when exposed to the catalyst. This approach is highly effective for ethanol solutions as concentrated as 5%, as well as for the complex mixtures of alcohols found in plants. The results of both methods will be compared with tertiary techniques such as IRMS where possible.
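    At its core, the first approach fits the measured spectrum as a linear combination of known component spectra. A least-squares sketch with hypothetical line shapes (the actual CRDS spectral model is far more detailed):

```python
def unmix_two(mixture, comp_a, comp_b):
    """Least-squares amounts of two known component spectra in a mixture,
    via the 2x2 normal equations (no linear-algebra library needed)."""
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    aa, bb, ab = dot(comp_a, comp_a), dot(comp_b, comp_b), dot(comp_a, comp_b)
    ay, by = dot(comp_a, mixture), dot(comp_b, mixture)
    det = aa * bb - ab * ab          # nonzero when the spectra are independent
    return (ay * bb - by * ab) / det, (by * aa - ay * ab) / det

# hypothetical absorption profiles over 6 spectral bins
water   = [0.0, 1.0, 4.0, 1.0, 0.0, 0.0]
ethanol = [0.0, 0.0, 1.0, 3.0, 1.0, 0.0]
mix = [2.0 * w + 0.5 * e for w, e in zip(water, ethanol)]
cw, ce = unmix_two(mix, water, ethanol)
```

    Once the alcohol contribution is quantified this way, it can be subtracted from the fit so the isotope concentrations are computed from the water signal alone.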

  15. Landcover Classification Using Deep Fully Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Wang, J.; Li, X.; Zhou, S.; Tang, J.

    2017-12-01

    Land cover classification has always been an essential application in remote sensing. Certain image features are needed for land cover classification, whether it is based on pixel or object-based methods. Unlike other machine learning methods, deep learning models not only extract useful information from multiple bands/attributes, but also learn spatial characteristics. In recent years, deep learning methods have developed rapidly and been widely applied in image recognition, semantic understanding, and other application domains. However, there are few studies applying deep learning methods to land cover classification. In this research, we used fully convolutional networks (FCN) as the deep learning model to classify land cover. The National Land Cover Database (NLCD) within the state of Kansas was used as the training dataset, and Landsat images were classified using the trained FCN model. We also applied an image segmentation method to improve the original results from the FCN model. In addition, the pros and cons of deep learning and several machine learning methods were compared and explored. Our research indicates that: (1) FCN is an effective classification model with an overall accuracy of 75%; (2) image segmentation improves the classification results, with a better match of spatial patterns; (3) FCN has an excellent learning ability that attains higher accuracy and better spatial patterns compared with several machine learning methods.
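    The post-processing step smooths the per-pixel label map. A simple stand-in (a 3x3 majority filter, not the paper's segmentation method) illustrates why spatial smoothing improves the match of spatial patterns:

```python
from collections import Counter

def majority_filter(labels):
    """Replace each cell of a 2-D label map with the most common label in
    its 3x3 neighborhood -- a simple smoothing of per-pixel classifier output."""
    h, w = len(labels), len(labels[0])
    out = [row[:] for row in labels]
    for r in range(h):
        for c in range(w):
            neigh = [labels[i][j]
                     for i in range(max(0, r - 1), min(h, r + 2))
                     for j in range(max(0, c - 1), min(w, c + 2))]
            out[r][c] = Counter(neigh).most_common(1)[0][0]
    return out

# hypothetical label map with one isolated misclassified pixel
noisy = [
    [1, 1, 1, 1],
    [1, 2, 1, 1],
    [1, 1, 1, 1],
]
smooth = majority_filter(noisy)
```

    Segmentation-based methods go further by snapping labels to region boundaries rather than a fixed window, but the effect is the same: isolated misclassified pixels are absorbed by their surroundings.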

  16. Self-limiting filters for band-selective interferer rejection or cognitive receiver protection

    DOEpatents

    Nordquist, Christopher; Scott, Sean Michael; Custer, Joyce Olsen; Leonhardt, Darin; Jordan, Tyler Scott; Rodenbeck, Christopher T.; Clem, Paul G.; Hunker, Jeff; Wolfley, Steven L.

    2017-03-07

    The present invention relates to self-limiting filters, arrays of such filters, and methods thereof. In particular embodiments, the filters include a metal transition film (e.g., a VO.sub.2 film) capable of undergoing a phase transition that modifies the film's resistivity. Arrays of such filters could allow for band-selective interferer rejection, while permitting transmission of non-interferer signals.

  17. Combining DCQGMP-Based Sparse Decomposition and MPDR Beamformer for Multi-Type Interferences Mitigation for GNSS Receivers.

    PubMed

    Guo, Qiang; Qi, Liangang

    2017-04-10

    In the coexistence of multiple types of interfering signals, the performance of interference suppression methods based on time and frequency domains is degraded seriously, and the technique using an antenna array requires a large enough size and huge hardware costs. To combat multi-type interferences better for GNSS receivers, this paper proposes a cascaded multi-type interferences mitigation method combining improved double chain quantum genetic matching pursuit (DCQGMP)-based sparse decomposition and an MPDR beamformer. The key idea behind the proposed method is that the multiple types of interfering signals can be excised by taking advantage of their sparse features in different domains. In the first stage, the single-tone (multi-tone) and linear chirp interfering signals are canceled by sparse decomposition according to their sparsity in the over-complete dictionary. In order to improve the timeliness of matching pursuit (MP)-based sparse decomposition, a DCQGMP is introduced by combining an improved double chain quantum genetic algorithm (DCQGA) and the MP algorithm, and the DCQGMP algorithm is extended to handle multi-channel signals according to the correlation among the signals in different channels. In the second stage, the minimum power distortionless response (MPDR) beamformer is utilized to nullify the residual interferences (e.g., wideband Gaussian noise interferences). Several simulation results show that the proposed method can not only improve the interference mitigation degree of freedom (DoF) of the array antenna, but also effectively deal with interference arriving from the same direction as the GNSS signal, provided it can be sparsely represented in the over-complete dictionary. Moreover, it does not introduce serious distortions into the navigation signal.
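    The first stage relies on matching pursuit. A minimal MP sketch over a small unit-norm dictionary of hypothetical tone atoms (the DCQGA acceleration and the MPDR beamforming stage are not reproduced):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(signal, dictionary, n_iter=2):
    """Greedy MP: repeatedly subtract the best-matching unit-norm atom."""
    residual = list(signal)
    picks = []
    for _ in range(n_iter):
        name = max(dictionary, key=lambda k: abs(dot(residual, dictionary[k])))
        coeff = dot(residual, dictionary[name])
        residual = [r - coeff * a for r, a in zip(residual, dictionary[name])]
        picks.append((name, coeff))
    return picks, residual

n = 8
atoms = {   # two orthonormal cosine "tone" atoms (unit norm over n samples)
    "tone1": [math.cos(2 * math.pi * k / n) * (2 / n) ** 0.5 for k in range(n)],
    "tone2": [math.cos(4 * math.pi * k / n) * (2 / n) ** 0.5 for k in range(n)],
}
# hypothetical interference: 3 x tone1 + 1 x tone2
sig = [3 * a + b for a, b in zip(atoms["tone1"], atoms["tone2"])]
picks, residual = matching_pursuit(sig, atoms)
```

    After the sparse (tone and chirp) components are subtracted, only non-sparse residuals such as wideband noise remain for the beamformer to nullify.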

  18. Combining DCQGMP-Based Sparse Decomposition and MPDR Beamformer for Multi-Type Interferences Mitigation for GNSS Receivers

    PubMed Central

    Guo, Qiang; Qi, Liangang

    2017-01-01

    When multiple types of interfering signals coexist, the performance of interference suppression methods based on the time and frequency domains degrades seriously, while techniques using an antenna array require a sufficiently large aperture and incur high hardware cost. To better combat multi-type interference in GNSS receivers, this paper proposes a cascaded multi-type interference mitigation method combining improved double chain quantum genetic matching pursuit (DCQGMP)-based sparse decomposition and an MPDR beamformer. The key idea behind the proposed method is that the multiple types of interfering signals can be excised by taking advantage of their sparse features in different domains. In the first stage, the single-tone (multi-tone) and linear chirp interfering signals are canceled by sparse decomposition according to their sparsity in the over-complete dictionary. In order to improve the timeliness of matching pursuit (MP)-based sparse decomposition, a DCQGMP is introduced by combining an improved double chain quantum genetic algorithm (DCQGA) with the MP algorithm, and the DCQGMP algorithm is extended to handle multi-channel signals according to the correlation among the signals in different channels. In the second stage, the minimum power distortionless response (MPDR) beamformer is utilized to nullify the residual interferences (e.g., wideband Gaussian noise interference). Several simulation results show that the proposed method not only improves the interference mitigation degrees of freedom (DoF) of the antenna array, but also effectively deals with interference arriving from the same direction as the GNSS signal, provided it can be sparsely represented in the over-complete dictionary. Moreover, it does not introduce serious distortions into the navigation signal. PMID:28394290
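As a sketch of the second-stage beamformer only, the MPDR weight formula w = R⁻¹a / (aᴴR⁻¹a) can be written out for a toy two-element array; the array size, element spacing, and identity covariance used here are illustrative assumptions, not the paper's configuration:

```python
import cmath
import math

def steering(theta, n=2, d=0.5):
    # steering vector of an n-element uniform linear array with element
    # spacing d (in wavelengths); theta is the arrival angle in radians
    return [cmath.exp(-2j * math.pi * d * k * math.sin(theta)) for k in range(n)]

def mpdr_weights(R, a):
    # MPDR beamformer: w = R^{-1} a / (a^H R^{-1} a), written out for a
    # 2x2 spatial covariance R so no linear-algebra library is needed
    (r11, r12), (r21, r22) = R
    det = r11 * r22 - r12 * r21
    Ra = [(r22 * a[0] - r12 * a[1]) / det,       # R^{-1} a, first row
          (-r21 * a[0] + r11 * a[1]) / det]      # R^{-1} a, second row
    denom = a[0].conjugate() * Ra[0] + a[1].conjugate() * Ra[1]
    return [x / denom for x in Ra]
```

With R equal to the identity (no interference), the weights reduce to a/2, and the distortionless constraint wᴴa = 1 holds exactly, which is what keeps the GNSS signal undistorted while interference power is minimized.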

  19. Nursing Classification Systems

    PubMed Central

    Henry, Suzanne Bakken; Mead, Charles N.

    1997-01-01

    Abstract Our premise is that from the perspective of maximum flexibility of data usage by computer-based record (CPR) systems, existing nursing classification systems are necessary, but not sufficient, for representing important aspects of “what nurses do.” In particular, we have focused our attention on those classification systems that represent nurses' clinical activities through the abstraction of activities into categories of nursing interventions. In this theoretical paper, we argue that taxonomic, combinatorial vocabularies capable of coding atomic-level nursing activities are required to effectively capture in a reproducible and reversible manner the clinical decisions and actions of nurses, and that, without such vocabularies and associated grammars, potentially important clinical process data is lost during the encoding process. Existing nursing intervention classification systems do not fulfill these criteria. As background to our argument, we first present an overview of the content, methods, and evaluation criteria used in previous studies whose focus has been to evaluate the effectiveness of existing coding and classification systems. Next, using the Ingenerf typology of taxonomic vocabularies, we categorize the formal type and structure of three existing nursing intervention classification systems—Nursing Interventions Classification, Omaha System, and Home Health Care Classification. Third, we use records from home care patients to show examples of lossy data transformation, the loss of potentially significant atomic data, resulting from encoding using each of the three systems. Last, we provide an example of the application of a formal representation methodology (conceptual graphs) which we believe could be used as a model to build the required combinatorial, taxonomic vocabulary for representing nursing interventions. PMID:9147341

  20. Direct Evidence of Acetaminophen Interference with Subcutaneous Glucose Sensing in Humans: A Pilot Study

    PubMed Central

    Basu, Ananda; Veettil, Sona; Dyer, Roy; Peyser, Thomas

    2016-01-01

    Abstract Background: Recent advances in accuracy and reliability of continuous glucose monitoring (CGM) devices have focused renewed interest on the use of such technology for therapeutic dosing of insulin without the need for independent confirmatory blood glucose meter measurements. An important issue that remains is the susceptibility of CGM devices to erroneous readings in the presence of common pharmacologic interferences. We report on a new method of assessing CGM sensor error to pharmacologic interferences using the example of oral administration of acetaminophen. Materials and Methods: We examined the responses of several different Food and Drug Administration–approved and commercially available CGM systems (Dexcom [San Diego, CA] Seven® Plus™, Medtronic Diabetes [Northridge, CA] Guardian®, and Dexcom G4® Platinum) to oral acetaminophen in 10 healthy volunteers without diabetes. Microdialysis catheters were placed in the abdominal subcutaneous tissue. Blood and microdialysate samples were collected periodically and analyzed for glucose and acetaminophen concentrations before and after oral ingestion of 1 g of acetaminophen. We compared the response of CGM sensors with the measured acetaminophen concentrations in the blood and interstitial fluid. Results: Although plasma glucose concentrations remained constant at approximately 90 mg/dL (approximately 5 mM) throughout the study, CGM glucose measurements varied between approximately 85 and 400 mg/dL (from approximately 5 to 22 mM) due to interference from the acetaminophen. The temporal profile of CGM interference followed acetaminophen concentrations measured in interstitial fluid (ISF). Conclusions: This is the first direct measurement of ISF concentrations of putative CGM interferences with simultaneous measurements of CGM performance in the presence of the interferences. The observed interference with glucose measurements in the tested CGM devices coincided temporally with appearance of

  1. Improved Hierarchical Optimization-Based Classification of Hyperspectral Images Using Shape Analysis

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2012-01-01

    A new spectral-spatial method for classification of hyperspectral images is proposed. The HSegClas method is based on the integration of probabilistic classification and shape analysis within the hierarchical step-wise optimization algorithm. First, probabilistic support vector machine (SVM) classification is applied. Then, at each iteration, the two neighboring regions with the smallest Dissimilarity Criterion (DC) are merged, and the classification probabilities are recomputed. An important contribution of this work is the estimation of a DC between regions as a function of statistical, classification, and geometrical (area and rectangularity) features. Experimental results are presented on a 102-band ROSIS image of the Center of Pavia, Italy. The developed approach yields more accurate classification results than previously proposed methods.
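The merge loop at the heart of such hierarchical step-wise optimization can be sketched in one dimension. Here the Dissimilarity Criterion is reduced to a plain difference of region means, purely for illustration; the paper's DC combines statistical, classification, and geometrical features, and operates on 2-D region adjacency:

```python
def merge_regions(values, n_final):
    # hierarchical step-wise merging, 1-D toy version: regions start as
    # single pixels; at each step the two adjacent regions with the
    # smallest dissimilarity (|difference of means|) are merged
    regions = [[v] for v in values]
    while len(regions) > n_final:
        means = [sum(r) / len(r) for r in regions]
        i = min(range(len(regions) - 1),
                key=lambda k: abs(means[k] - means[k + 1]))
        regions[i] = regions[i] + regions.pop(i + 1)
    return regions
```

In the real method the stopping rule and the recomputation of classification probabilities after each merge replace the fixed `n_final` used in this sketch.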

  2. Applied Chaos Level Test for Validation of Signal Conditions Underlying Optimal Performance of Voice Classification Methods.

    PubMed

    Liu, Boquan; Polce, Evan; Sprott, Julien C; Jiang, Jack J

    2018-05-17

    The purpose of this study is to introduce a chaos level test to evaluate linear and nonlinear voice type classification method performances under varying signal chaos conditions without subjective impression. Voice signals were constructed with differing degrees of noise to model signal chaos. Within each noise power, 100 Monte Carlo experiments were applied to analyze the output of jitter, shimmer, correlation dimension, and spectrum convergence ratio. The computational output of the 4 classifiers was then plotted against signal chaos level to investigate the performance of these acoustic analysis methods under varying degrees of signal chaos. A diffusive behavior detection-based chaos level test was used to investigate the performances of different voice classification methods. Voice signals were constructed by varying the signal-to-noise ratio to establish differing signal chaos conditions. Chaos level increased sigmoidally with increasing noise power. Jitter and shimmer performed optimally when the chaos level was less than or equal to 0.01, whereas correlation dimension was capable of analyzing signals with chaos levels of less than or equal to 0.0179. Spectrum convergence ratio demonstrated proficiency in analyzing voice signals with all chaos levels investigated in this study. The results of this study corroborate the performance relationships observed in previous studies and, therefore, demonstrate the validity of the validation test method. The presented chaos level validation test could be broadly utilized to evaluate acoustic analysis methods and establish the most appropriate methodology for objective voice analysis in clinical practice.
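Two of the classifiers tested above, jitter and shimmer, reduce to cycle-to-cycle relative variation of period and peak amplitude. A minimal sketch follows, using the relative mean-absolute-difference variants (one of several standard definitions; the study's exact formulas are not specified in the abstract):

```python
def jitter(periods):
    # relative jitter: mean absolute difference between consecutive
    # glottal cycle periods, normalised by the mean period
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def shimmer(amplitudes):
    # relative shimmer: the same measure applied to cycle-to-cycle
    # peak amplitudes instead of periods
    diffs = [abs(a - b) for a, b in zip(amplitudes, amplitudes[1:])]
    return (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))
```

A perfectly periodic signal yields zero for both measures; added noise perturbs the period and amplitude sequences, which is why these two classifiers degrade first as the chaos level rises.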

  3. Multiscale investigation of chemical interference in proteins

    NASA Astrophysics Data System (ADS)

    Samiotakis, Antonios; Homouz, Dirar; Cheung, Margaret S.

    2010-05-01

    We developed a multiscale approach (MultiSCAAL) that integrates the potential of mean force obtained from all-atomistic molecular dynamics simulations with a knowledge-based energy function for coarse-grained molecular simulations in better exploring the energy landscape of a small protein under chemical interference such as chemical denaturation. An excessive amount of water molecules in all-atomistic molecular dynamics simulations often negatively impacts the sampling efficiency of some advanced sampling techniques such as the replica exchange method and it makes the investigation of chemical interferences on protein dynamics difficult. Thus, there is a need to develop an effective strategy that focuses on sampling structural changes in protein conformations rather than solvent molecule fluctuations. In this work, we address this issue by devising a multiscale simulation scheme (MultiSCAAL) that bridges the gap between all-atomistic molecular dynamics simulation and coarse-grained molecular simulation. The two key features of this scheme are the Boltzmann inversion and a protein atomistic reconstruction method we previously developed (SCAAL). Using MultiSCAAL, we were able to enhance the sampling efficiency of proteins solvated by explicit water molecules. Our method has been tested on the folding energy landscape of a small protein Trp-cage with explicit solvent under 8M urea using both the all-atomistic replica exchange molecular dynamics and MultiSCAAL. We compared computational analyses on ensemble conformations of Trp-cage with its available experimental NOE distances. The analysis demonstrated that conformations explored by MultiSCAAL better agree with the ones probed in the experiments because it can effectively capture the changes in side-chain orientations that can flip out of the hydrophobic pocket in the presence of urea and water molecules. In this regard, MultiSCAAL is a promising and effective sampling scheme for investigating chemical interference
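The Boltzmann inversion named as one of the scheme's two key features maps a sampled distribution into an effective potential, U(r) = −k_B T ln g(r). A minimal sketch, assuming molar units of kcal/(mol·K) for the Boltzmann constant (a common MD convention, not something stated in the abstract):

```python
import math

KB = 0.0019872041  # Boltzmann constant in kcal/(mol*K), assumed unit choice

def boltzmann_inversion(g, T=300.0):
    # effective pair potential from a radial distribution function g(r):
    # U(r) = -kB * T * ln g(r); g must be strictly positive
    return [-KB * T * math.log(gi) for gi in g]
```

Bins where g(r) = 1 (bulk density) map to zero potential, and depleted bins (g < 1) map to repulsive values, which is how the coarse-grained energy function inherits structure from the all-atomistic sampling.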

  4. Multi-Temporal Classification and Change Detection Using Uav Images

    NASA Astrophysics Data System (ADS)

    Makuti, S.; Nex, F.; Yang, M. Y.

    2018-05-01

    In this paper, different methodologies for the classification and change detection of UAV image blocks are explored. UAVs are not only the cheapest platforms for image acquisition, but also the easiest platforms to operate in repeated data collections over a changing area like a building construction site. Two change detection techniques have been evaluated in this study: the pre-classification and the post-classification algorithms. These methods are based on three main steps: feature extraction, classification, and change detection. A set of state-of-the-art features has been used in the tests: colour features (HSV), textural features (GLCM), and 3D geometric features. For classification purposes, a Conditional Random Field (CRF) has been used: the unary potential was determined using the Random Forest algorithm, while the pairwise potential was defined by the fully connected CRF. In the performed tests, different feature configurations and settings have been considered to assess the performance of these methods in such a challenging task. Experimental results showed that the post-classification approach outperforms the pre-classification change detection method: in terms of overall accuracy, post-classification reached up to 62.6% while pre-classification change detection reached 46.5%. These results represent a first useful indication for future works and developments.
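Once the two epochs have been classified independently, the post-classification comparison that performs best above reduces to a per-pixel label comparison. A minimal sketch on toy label maps (the CRF classification producing those maps is not reproduced here):

```python
def change_map(map_a, map_b):
    # post-classification change detection: flag a pixel as changed (1)
    # when its class label differs between the two acquisition dates
    return [[int(a != b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(map_a, map_b)]
```

The accuracy of this step is bounded by the accuracy of the two input classifications, which is why the feature and classifier choices discussed above dominate the final change-detection scores.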

  5. Holistic processing of impossible objects: evidence from Garner's speeded-classification task.

    PubMed

    Freud, Erez; Avidan, Galia; Ganel, Tzvi

    2013-12-18

    Holistic processing, the decoding of the global structure of a stimulus while the local parts are not explicitly represented, is a basic characteristic of object perception. The current study aimed to test whether such a representation could be created even for objects that violate fundamental principles of spatial organization, namely impossible objects. Previous studies argued that these objects cannot be represented holistically in long-term memory because they lack coherent 3D structure. Here, we utilized Garner's speeded classification task to test whether the perception of possible and impossible objects is mediated by similar holistic processing mechanisms. To this end, participants were asked to make speeded classifications of one object dimension while an irrelevant dimension was kept constant (baseline condition) or when this dimension varied (filtering condition). It is well accepted that ignoring the irrelevant dimension is impossible when holistic perception is mandatory; thus the extent of Garner interference in performance between the baseline and filtering conditions serves as an index of holistic processing. Critically, in Experiment 1, similar levels of Garner interference were found for possible and impossible objects, implying holistic perception of both object types. Experiment 2 extended these results and demonstrated that even when depth information was explicitly processed, participants were still unable to process one dimension (width/depth) while ignoring the irrelevant dimension (depth/width, respectively). The results of Experiment 3 replicated the basic pattern found in Experiments 1 and 2 using a novel set of object exemplars. In Experiment 4, we used possible and impossible versions of the Penrose triangles in which information about impossibility is embedded in the internal elements of the objects, which participants were explicitly asked to judge. As in Experiments 1-3, similar Garner interference was found for possible and
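The interference index described above is simply the reaction-time cost of the filtering condition over the baseline condition. A minimal sketch with toy RT values in milliseconds (the numbers are illustrative, not the study's data):

```python
def garner_interference(rt_baseline, rt_filtering):
    # Garner interference: mean reaction-time cost of the filtering
    # condition (irrelevant dimension varies) relative to the baseline
    # condition (irrelevant dimension held constant)
    mean = lambda xs: sum(xs) / len(xs)
    return mean(rt_filtering) - mean(rt_baseline)
```

A reliably positive value indicates that the irrelevant dimension could not be filtered out, the signature of mandatory holistic processing the experiments test for.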

  6. The Wall Interference of a Wind Tunnel of Elliptic Cross Section

    NASA Technical Reports Server (NTRS)

    Tani, Itiro; Sanuki, Matao

    1944-01-01

    The wall interference is obtained for a wind tunnel of elliptic section for the two cases of closed and open working sections. The approximate and exact methods used gave results in practically good agreement. Corresponding to the result given by Glauert for the case of the closed rectangular section, the interference is found to be a minimum for a ratio of minor to major axis of 1:√6. This, however, is true only for the case where the span of the airfoil is small in comparison with the width of the tunnel. For a longer airfoil the favorable ellipse is flatter. In the case of the open working section the circular shape gives the minimum interference.

  7. ISBDD Model for Classification of Hyperspectral Remote Sensing Imagery

    PubMed Central

    Li, Na; Xu, Zhaopeng; Zhao, Huijie; Huang, Xinchen; Drummond, Jane; Wang, Daming

    2018-01-01

    The diverse density (DD) algorithm was proposed to handle the problem of low classification accuracy when training samples contain interference such as mixed pixels. The DD algorithm can learn a feature vector from training bags, which comprise instances (pixels). However, the feature vector learned by the DD algorithm cannot always effectively represent one type of ground cover. To handle this problem, an instance space-based diverse density (ISBDD) model that employs a novel training strategy is proposed in this paper. In the ISBDD model, DD values of each pixel are computed instead of learning a feature vector, and as a result, the pixel can be classified according to its DD values. Airborne hyperspectral data collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor and the Push-broom Hyperspectral Imager (PHI) are applied to evaluate the performance of the proposed model. Results show that the overall classification accuracy of ISBDD model on the AVIRIS and PHI images is up to 97.65% and 89.02%, respectively, while the kappa coefficient is up to 0.97 and 0.88, respectively. PMID:29510547

  8. Di-codon Usage for Gene Classification

    NASA Astrophysics Data System (ADS)

    Nguyen, Minh N.; Ma, Jianmin; Fogel, Gary B.; Rajapakse, Jagath C.

    Classification of genes into biologically related groups facilitates inference of their functions. Codon usage bias has been described previously as a potential feature for gene classification. In this paper, we demonstrate that di-codon usage can further improve classification of genes. By using both codon and di-codon features, we achieve near perfect accuracies for the classification of HLA molecules into major classes and sub-classes. The method is illustrated on 1,841 HLA sequences which are classified into two major classes, HLA-I and HLA-II. Major classes are further classified into sub-groups. A binary SVM using di-codon usage patterns achieved 99.95% accuracy in the classification of HLA genes into major HLA classes; and multi-class SVM achieved accuracy rates of 99.82% and 99.03% for sub-class classification of HLA-I and HLA-II genes, respectively. Furthermore, by combining codon and di-codon usages, the prediction accuracies reached 100%, 99.82%, and 99.84% for HLA major class classification, and for sub-class classification of HLA-I and HLA-II genes, respectively.
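The di-codon features feeding the SVMs above are frequencies of adjacent codon pairs. A minimal sketch, assuming the sequence starts in-frame and stepping one codon at a time (the paper's exact normalization is not specified in the abstract):

```python
from collections import Counter

def dicodon_frequencies(seq):
    # di-codon usage: relative frequencies of adjacent codon pairs
    # (6-mers read in frame, advancing by one codon per step)
    codons = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]
    pairs = [codons[i] + codons[i + 1] for i in range(len(codons) - 1)]
    total = len(pairs)
    return {p: c / total for p, c in Counter(pairs).items()}
```

The resulting dictionary (conceptually a 4096-dimensional vector over all possible di-codons) is the feature representation a binary or multi-class SVM would be trained on.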

  9. Activity interference and noise annoyance

    NASA Astrophysics Data System (ADS)

    Hall, F. L.; Taylor, S. M.; Birnie, S. E.

    1985-11-01

    Debate continues over differences in the dose-response functions used to predict the annoyance at different sources of transportation noise. This debate reflects the lack of an accepted model of noise annoyance in residential communities. In this paper a model is proposed which is focussed on activity interference as a central component mediating the relationship between noise exposure and annoyance. This model represents a departure from earlier models in two important respects. First, single event noise levels (e.g., maximum levels, sound exposure level) constitute the noise exposure variables in place of long-term energy equivalent measures (e.g., 24-hour Leq or Ldn). Second, the relationships within the model are expressed as probabilistic rather than deterministic equations. The model has been tested by using acoustical and social survey data collected at 57 sites in the Toronto region exposed to aircraft, road traffic or train noise. Logit analysis was used to estimate two sets of equations. The first predicts the probability of activity interference as a function of event noise level. Four types of interference are included: indoor speech, outdoor speech, difficulty getting to sleep and awakening. The second set predicts the probability of annoyance as a function of the combination of activity interferences. From the first set of equations, it was possible to estimate a function for indoor speech interference only. In this case, the maximum event level was the strongest predictor. The lack of significant results for the other types of interference is explained by the limitations of the data. The same function predicts indoor speech interference for all three sources—road, rail and aircraft noise. The results for the second set of equations show strong relationships between activity interference and the probability of annoyance. Again, the parameters of the logit equations are similar for the three sources. A trial application of the model predicts a higher
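The logit equations estimated above have the standard logistic form p = 1 / (1 + e^−(a + bL)), with L the single-event noise level. A minimal sketch; the coefficients below are illustrative placeholders, not the values estimated from the Toronto survey data:

```python
import math

def p_interference(level_db, a=-12.0, b=0.15):
    # logit dose-response: probability of (e.g.) indoor speech
    # interference as a function of single-event noise level in dB;
    # a and b are illustrative coefficients, not the paper's estimates
    return 1.0 / (1.0 + math.exp(-(a + b * level_db)))
```

Chaining two such equations, event level to interference probability, then interference to annoyance probability, reproduces the two-stage probabilistic structure the model proposes.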

  10. Photon statistics as an interference phenomenon.

    PubMed

    Mehringer, Thomas; Mährlein, Simon; von Zanthier, Joachim; Agarwal, Girish S

    2018-05-15

    Interference of light fields, first postulated by Young, is one of the fundamental pillars of physics. Dirac extended this observation to the quantum world by stating that each photon interferes only with itself. A precondition for interference to occur is that no welcher-weg information labels the paths the photon takes; otherwise, the interference vanishes. This remains true, even if two-photon interference is considered, e.g., in the Hong-Ou-Mandel-experiment. Here, the two photons interfere only if they are indistinguishable, e.g., in frequency, momentum, polarization, and time. Less known is the fact that two-photon interference and photon indistinguishability also determine the photon statistics in the overlapping light fields of two independent sources. As a consequence, measuring the photon statistics in the far field of two independent sources reveals the degree of indistinguishability of the emitted photons. In this Letter, we prove this statement in theory using a quantum mechanical treatment. We also demonstrate the outcome experimentally with a simple setup consisting of two statistically independent thermal light sources with adjustable polarizations. We find that the photon statistics vary indeed as a function of the polarization settings, the latter determining the degree of welcher-weg information of the photons emanating from the two sources.

  11. Optical interference with noncoherent states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sagi, Yoav; Firstenberg, Ofer; Fisher, Amnon

    2003-03-01

    We examine a typical two-source optical interference apparatus consisting of two cavities, a beam splitter, and two detectors. We show that field-field interference occurs even when the cavities are not initially in coherent states but rather in other nonclassical states. However, we find that the visibility of the second-order interference, that is, of the expectation values of the detectors' readings, changes from 100%, when the cavities are prepared in coherent states, to zero when they are initially in single Fock states. We calculate the fourth-order interference and, for the latter case, find that it corresponds to a case where the currents oscillate with 100% visibility, but with a random phase for every experiment. Finally, we suggest an experimental realization of the apparatus with nonclassical sources.
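The visibility figure quoted above is the standard fringe contrast V = (I_max − I_min) / (I_max + I_min). A minimal sketch, using the textbook two-source intensity pattern with a mutual-coherence parameter v (an illustrative model, not the paper's quantum calculation):

```python
import math

def visibility(intensities):
    # fringe visibility V = (Imax - Imin) / (Imax + Imin)
    hi, lo = max(intensities), min(intensities)
    return (hi - lo) / (hi + lo)

def fringe(i1, i2, v, phi):
    # two-source interference pattern: mean intensities i1, i2 and
    # coherence v in [0, 1]; v = 1 mimics coherent states, v = 0 the
    # vanishing second-order visibility of single Fock states
    return i1 + i2 + 2.0 * math.sqrt(i1 * i2) * v * math.cos(phi)
```

Scanning phi over a full fringe and taking the contrast recovers v, which is the sense in which second-order visibility drops from 100% to zero between the two preparations.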

  12. Automated interference tools of the All-Russian Research Institute for Optical and Physical Measurements

    NASA Astrophysics Data System (ADS)

    Vishnyakov, G. N.; Levin, G. G.; Minaev, V. L.

    2017-09-01

    A review of advanced equipment for automated interference measurements developed at the All-Russian Research Institute for Optical and Physical Measurements is given. Three types of interference microscopes based on the Linnik, Twyman-Green, and Fizeau interferometers with the use of the phase stepping method are presented.
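The phase stepping method used in all three interferometer types recovers the wrapped phase from frames taken at known phase shifts. A minimal sketch of the classic four-step variant (shifts of 0°, 90°, 180°, 270°, a standard choice rather than anything specified in this abstract):

```python
import math

def phase_from_steps(i1, i2, i3, i4):
    # four-step phase shifting: with frames I_k = A + B*cos(phi + k*pi/2)
    # for k = 0..3, the wrapped phase is phi = atan2(I4 - I2, I1 - I3)
    return math.atan2(i4 - i2, i1 - i3)
```

Because both the background A and the modulation B cancel in the two differences, the recovered phase is insensitive to uneven illumination, which is what makes the method suitable for automated measurement.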

  13. Evaluation of image features and classification methods for Barrett's cancer detection using VLE imaging

    NASA Astrophysics Data System (ADS)

    Klomp, Sander; van der Sommen, Fons; Swager, Anne-Fré; Zinger, Svitlana; Schoon, Erik J.; Curvers, Wouter L.; Bergman, Jacques J.; de With, Peter H. N.

    2017-03-01

    Volumetric Laser Endomicroscopy (VLE) is a promising technique for the detection of early neoplasia in Barrett's Esophagus (BE). VLE generates hundreds of high resolution, grayscale, cross-sectional images of the esophagus. However, at present, classifying these images is a time consuming and cumbersome effort performed by an expert using a clinical prediction model. This paper explores the feasibility of using computer vision techniques to accurately predict the presence of dysplastic tissue in VLE BE images. Our contribution is threefold. First, a benchmark is performed for widely applied machine learning techniques and feature extraction methods. Second, three new features based on the clinical detection model are proposed, having superior classification accuracy and speed compared to earlier work. Third, we evaluate automated parameter tuning by applying simple grid search and feature selection methods. The results are evaluated on a clinically validated dataset of 30 dysplastic and 30 non-dysplastic VLE images. Optimal classification accuracy is obtained by applying a support vector machine and using our modified Haralick features and optimal image cropping, obtaining an area under the receiver operating characteristic curve of 0.95 compared to the clinical prediction model at 0.81. Optimal execution time is achieved using a proposed mean and median feature, which is extracted at least a factor of 2.5 faster than alternative features with comparable performance.
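The Haralick features mentioned above are statistics of a grey-level co-occurrence matrix (GLCM). A minimal sketch of one such statistic (contrast) for a single pixel offset, on toy integer images; the paper's modified Haralick features are not reproduced here:

```python
from collections import Counter

def glcm(img, dx=1, dy=0):
    # grey-level co-occurrence matrix for one pixel offset (dx, dy),
    # normalised so entries are joint probabilities P(i, j)
    counts = Counter()
    for y in range(len(img) - dy):
        for x in range(len(img[0]) - dx):
            counts[(img[y][x], img[y + dy][x + dx])] += 1
    total = sum(counts.values())
    return {pair: c / total for pair, c in counts.items()}

def haralick_contrast(p):
    # Haralick contrast: sum over all pairs of (i - j)^2 * P(i, j)
    return sum((i - j) ** 2 * v for (i, j), v in p.items())
```

In practice such features are computed over several offsets and averaged; the contrast value rises with local grey-level variation, which is the kind of texture cue that separates dysplastic from non-dysplastic tissue layers.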

  14. A statistical approach to root system classification

    PubMed Central

    Bodner, Gernot; Leitner, Daniel; Nakhforoosh, Alireza; Sobotik, Monika; Moder, Karl; Kaul, Hans-Peter

    2013-01-01

    Plant root systems have a key role in ecology and agronomy. Despite the fast increase in root studies, there is still no classification that allows distinguishing among the distinctive characteristics within the diversity of rooting strategies. Our hypothesis is that a multivariate approach for “plant functional type” identification in ecology can be applied to the classification of root systems. The classification method presented is based on a data-defined statistical procedure without a priori decisions on the classifiers. The study demonstrates that principal-component-based rooting types provide efficient and meaningful multi-trait classifiers. The classification method is exemplified with simulated root architectures and morphological field data. Simulated root architectures showed that morphological attributes with spatial distribution parameters capture the most distinctive features within root system diversity. While developmental type (tap vs. shoot-borne systems) is a strong but coarse classifier, topological traits provide the most detailed differentiation among distinctive groups. The adequacy of commonly available morphological traits for classification is supported by field data. Rooting types emerging from the measured data were mainly distinguished into diameter/weight- and density-dominated types. Similarity of root systems within distinctive groups was the joint result of phylogenetic relation and environmental as well as human selection pressure. We conclude that the data-defined classification is appropriate for integrating knowledge obtained with different root measurement methods and at various scales. Currently, root morphology is the most promising basis for classification due to widely used common measurement protocols. To capture the details of root diversity, efforts in architectural measurement techniques are essential. PMID:23914200

  15. A statistical approach to root system classification.

    PubMed

    Bodner, Gernot; Leitner, Daniel; Nakhforoosh, Alireza; Sobotik, Monika; Moder, Karl; Kaul, Hans-Peter

    2013-01-01

    Plant root systems have a key role in ecology and agronomy. Despite the fast increase in root studies, there is still no classification that allows distinguishing among the distinctive characteristics within the diversity of rooting strategies. Our hypothesis is that a multivariate approach for "plant functional type" identification in ecology can be applied to the classification of root systems. The classification method presented is based on a data-defined statistical procedure without a priori decisions on the classifiers. The study demonstrates that principal-component-based rooting types provide efficient and meaningful multi-trait classifiers. The classification method is exemplified with simulated root architectures and morphological field data. Simulated root architectures showed that morphological attributes with spatial distribution parameters capture the most distinctive features within root system diversity. While developmental type (tap vs. shoot-borne systems) is a strong but coarse classifier, topological traits provide the most detailed differentiation among distinctive groups. The adequacy of commonly available morphological traits for classification is supported by field data. Rooting types emerging from the measured data were mainly distinguished into diameter/weight- and density-dominated types. Similarity of root systems within distinctive groups was the joint result of phylogenetic relation and environmental as well as human selection pressure. We conclude that the data-defined classification is appropriate for integrating knowledge obtained with different root measurement methods and at various scales. Currently, root morphology is the most promising basis for classification due to widely used common measurement protocols. To capture the details of root diversity, efforts in architectural measurement techniques are essential.
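The principal-component step underlying these rooting types can be sketched for just two traits, where the leading eigenvector of the 2×2 sample covariance matrix has a closed form. This is a stand-in for illustration only; the study uses many traits and a full multivariate procedure:

```python
import math

def first_pc(points):
    # first principal component of two-trait data: the unit eigenvector
    # for the leading eigenvalue of the 2x2 sample covariance matrix
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in points) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)
    lam = 0.5 * (sxx + syy + math.hypot(sxx - syy, 2 * sxy))  # leading eigenvalue
    if abs(sxy) > 1e-12:
        vx, vy = lam - syy, sxy      # eigenvector for lam
    elif sxx >= syy:
        vx, vy = 1.0, 0.0            # covariance already diagonal
    else:
        vx, vy = 0.0, 1.0
    norm = math.hypot(vx, vy)
    return (vx / norm, vy / norm)
```

Projecting each plant's trait vector onto such components, then clustering the scores, is the data-defined route to rooting types that avoids choosing classifiers a priori.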

  16. A new algorithm for ECG interference removal from single channel EMG recording.

    PubMed

    Yazdani, Shayan; Azghani, Mahmood Reza; Sedaaghi, Mohammad Hossein

    2017-09-01

    This paper presents a new method to remove electrocardiogram (ECG) interference from the electromyogram (EMG). This interference occurs during EMG acquisition from trunk muscles. The proposed algorithm employs the progressive image denoising (PID) algorithm and ensemble empirical mode decomposition (EEMD) to remove this type of interference. PID is a very recent method used for denoising digital images corrupted by white Gaussian noise; it detects white Gaussian noise by deterministic annealing. To the best of our knowledge, PID has never been used before for EMG and ECG separation, or in other 1D signal denoising applications. We use it based on the fact that the amplitude of the EMG signal can be modeled as white Gaussian noise shaped by a filter with time-variant properties. The proposed algorithm has been compared to other well-known methods such as HPF, EEMD-ICA, Wavelet-ICA, and PID. The results show that the proposed algorithm outperforms the others on the basis of the three evaluation criteria used in this paper: normalized mean square error, signal-to-noise ratio, and Pearson correlation.
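Of the baselines compared, the high-pass filter (HPF) is the simplest: ECG energy concentrates at low frequencies, so attenuating them suppresses the interference at the cost of some EMG content. A one-pole sketch, with an assumed smoothing coefficient rather than any cut-off taken from the paper:

```python
def highpass(x, alpha=0.95):
    # one-pole high-pass filter: y[n] = alpha * (y[n-1] + x[n] - x[n-1]);
    # alpha (assumed value) sets the cut-off, DC is removed entirely
    y = [0.0] * len(x)
    for n in range(1, len(x)):
        y[n] = alpha * (y[n - 1] + x[n] - x[n - 1])
    return y
```

A constant (DC) input is mapped to zero while a step edge passes through and then decays, the behavior that removes slow ECG baseline components but also explains why simple HPF underperforms the decomposition-based methods.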

  17. High bandwidth all-optical 3×3 switch based on multimode interference structures

    NASA Astrophysics Data System (ADS)

    Le, Duy-Tien; Truong, Cao-Dung; Le, Trung-Thanh

    2017-03-01

    A high-bandwidth all-optical 3×3 switch based on a general interference multimode interference (GI-MMI) structure is proposed in this study. Two 3×3 multimode interference couplers are cascaded to realize an all-optical switch operating at both the 1550 nm and 1310 nm wavelengths. Two nonlinear directional couplers at the two outer arms of the structure are used as all-optical phase shifters to achieve and control all switching states. Analytical expressions for the switching operation, derived using the transfer matrix method, are presented. The beam propagation method (BPM) is used to design and optimize the whole structure. The design of the all-optical phase shifters and 3×3 MMI couplers is optimized to reduce the switching power and loss.

  18. DNA barcode analysis: a comparison of phylogenetic and statistical classification methods.

    PubMed

    Austerlitz, Frederic; David, Olivier; Schaeffer, Brigitte; Bleakley, Kevin; Olteanu, Madalina; Leblois, Raphael; Veuille, Michel; Laredo, Catherine

    2009-11-10

    DNA barcoding aims to assign individuals to given species according to their sequence at a small locus, generally part of the CO1 mitochondrial gene. Amongst other issues, this raises the question of how to deal with within-species genetic variability and potential transpecific polymorphism. In this context, we examine several assignment methods belonging to two main categories: (i) phylogenetic methods (neighbour-joining and PhyML) that attempt to account for the genealogical framework of DNA evolution and (ii) supervised classification methods (k-nearest neighbour, CART, random forest and kernel methods). These methods range from basic to elaborate. We investigated the ability of each method to correctly classify query sequences drawn from samples of related species, using both simulated and real data. Simulated data sets were generated using coalescent simulations in which we varied the genealogical history, mutation parameter, sample size and number of species. No method was found to be the best in all cases. The simplest method of all, "one nearest neighbour", was the most robust to changes in the parameters of the data sets. The parameter most influencing the performance of the various methods was the molecular diversity of the data. Addition of genetically independent loci (nuclear genes) improved the predictive performance of most methods. The study implies that taxonomists can influence the quality of their analyses either by choosing the method best adapted to the configuration of their sample or, for a given method, by increasing the sample size or altering the amount of molecular diversity. The latter can be achieved either by sequencing more mtDNA or by sequencing additional nuclear genes; in that case, they may also have to modify their data analysis method.
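    The "one nearest neighbour" assignment can be sketched in a few lines; the sequences and species labels below are invented, and a raw Hamming distance on aligned sequences stands in for the distance measures used in the study:

```python
def hamming(a, b):
    # number of mismatched positions between two aligned sequences
    return sum(x != y for x, y in zip(a, b))

def one_nn(query, references):
    # references: list of (sequence, species) pairs;
    # assign the query the species of its single closest reference
    return min(references, key=lambda r: hamming(query, r[0]))[1]

refs = [("ACGTACGT", "sp_A"), ("ACGTACCT", "sp_A"), ("TTGTACGA", "sp_B")]
print(one_nn("ACGAACGT", refs))  # -> sp_A (one mismatch to the first reference)
```

    Its robustness in the study plausibly comes from this simplicity: 1-NN makes no distributional assumptions about within-species variability, so it degrades gracefully as molecular diversity changes.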

  19. Predicting Pharmacodynamic Drug-Drug Interactions through Signaling Propagation Interference on Protein-Protein Interaction Networks.

    PubMed

    Park, Kyunghyun; Kim, Docyong; Ha, Suhyun; Lee, Doheon

    2015-01-01

    As pharmacodynamic drug-drug interactions (PD DDIs) can lead to severe adverse effects in patients, it is important to identify potential PD DDIs during drug development. Signaling starting from drug targets is propagated through protein-protein interaction (PPI) networks. PD DDIs can occur through close interference on the same targets or within the same pathways, as well as through distant interference via cross-talking pathways. However, most previous approaches have considered only close interference, measuring distances between drug targets or comparing target neighbors. We applied a random walk with restart algorithm to simulate signaling propagation from drug targets in order to capture the possibility of distant interference. Cross-validation with DrugBank and KEGG DRUG (Kyoto Encyclopedia of Genes and Genomes) shows that the proposed method significantly outperforms previous methods. We also provide a web service with which PD DDIs for drug pairs can be analyzed at http://biosoft.kaist.ac.kr/targetrw.
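    A random walk with restart over a PPI-like graph can be sketched in a few lines; the toy graph, seed choice, and inner-product interference score below are illustrative assumptions, not the paper's exact scoring function:

```python
import numpy as np

def rwr(adj, seeds, restart=0.3, tol=1e-10):
    # Random walk with restart: iterate p <- (1-r) W p + r p0 to convergence.
    # adj: symmetric adjacency matrix; seeds: indices of drug-target nodes.
    W = adj / adj.sum(axis=0, keepdims=True)   # column-normalized transitions
    p0 = np.zeros(adj.shape[0]); p0[seeds] = 1 / len(seeds)
    p = p0.copy()
    while True:
        p_new = (1 - restart) * W @ p + restart * p0
        if np.abs(p_new - p).sum() < tol:
            return p_new
        p = p_new

# toy 5-node path graph: 0-1-2-3-4
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1.0

prop_a = rwr(A, [0])  # propagation profile from drug A's target
prop_b = rwr(A, [4])  # propagation profile from drug B's target
overlap = float(prop_a @ prop_b)  # one possible interference score
print(overlap)
```

    Because the stationary profile spreads probability mass beyond a target's immediate neighbours, two drugs can obtain a nonzero overlap even when their targets are far apart, which is precisely the distant-interference signal the method is after.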

  20. Theoretical study of hull-rotor aerodynamic interference on semibuoyant vehicles

    NASA Technical Reports Server (NTRS)

    Spangler, S. B.; Smith, C. A.

    1978-01-01

    Analytical methods are developed to predict the pressure distribution and overall loads on the hulls of airships which have close coupled, relatively large and/or high disk loading propulsors for attitude control, station keeping, and partial support of total weight as well as provision of thrust in cruise. The methods comprise a surface-singularity, potential-flow model for the hull and lifting surfaces (such as tails) and a rotor model which calculates the velocity induced by the rotor and its wake at points adjacent to the wake. Use of these two models provides an inviscid pressure distribution on the hull with rotor interference. A boundary layer separation prediction method is used to locate separation on the hull, and a wake pressure is imposed on the separated region for purposes of calculating hull loads. Results of calculations are shown to illustrate various cases of rotor-hull interference and comparisons with small scale data are made to evaluate the method.

  1. Motivation Classification and Grade Prediction for MOOCs Learners

    PubMed Central

    Xu, Bin; Yang, Dan

    2016-01-01

    MOOCs offer educational data on a new scale, and many educators see great potential in this big data, which includes detailed activity records of every learner. Learner behavior, such as whether a learner will drop out of a course, can be predicted. Providing an effective, economical, and scalable method to detect cheating on tests, such as the use of a surrogate exam-taker, is a challenging problem. In this paper, we present a grade prediction method that uses student activity features to predict whether a learner will earn a certification if he/she takes a test. The method consists of two classification steps: motivation classification (MC) and grade classification (GC). The MC step divides all learners into three groups: certification earning, video watching, and course sampling. The GC step then predicts whether a certification-earning learner will obtain a certification. Our experiments show that the proposed method fits the classification model at a fine scale and makes it possible to detect a surrogate exam-taker. PMID:26884747
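    The two-step structure (MC over all learners, then GC applied only to the certification-earning group) can be sketched with a stand-in classifier; the nearest-centroid model and the activity features below are hypothetical, since the abstract does not specify the classifiers used:

```python
import numpy as np

def nearest_centroid_fit(X, y):
    # one centroid (mean feature vector) per class
    y = np.asarray(y)
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(model, x):
    return min(model, key=lambda c: np.linalg.norm(x - model[c]))

# Toy activity features: [videos watched, assignments submitted]
X_mc = np.array([[40, 18], [35, 15], [30, 1], [28, 2], [3, 0], [2, 1]], dtype=float)
y_mc = ["earning", "earning", "watching", "watching", "sampling", "sampling"]
mc = nearest_centroid_fit(X_mc, y_mc)   # step 1: motivation classification

X_gc = np.array([[40, 18], [35, 15], [25, 9], [22, 8]], dtype=float)
y_gc = ["certified", "certified", "not_certified", "not_certified"]
gc = nearest_centroid_fit(X_gc, y_gc)   # step 2: grade classification

x = np.array([38.0, 16.0])
group = nearest_centroid_predict(mc, x)
grade = nearest_centroid_predict(gc, x) if group == "earning" else None
print(group, grade)  # -> earning certified
```

    Running GC only on the "earning" group mirrors the paper's design: the second classifier never has to distinguish casual video watchers, which keeps each stage's decision simple.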

  2. Motivation Classification and Grade Prediction for MOOCs Learners.

    PubMed

    Xu, Bin; Yang, Dan

    2016-01-01

    MOOCs offer educational data on a new scale, and many educators see great potential in this big data, which includes detailed activity records of every learner. Learner behavior, such as whether a learner will drop out of a course, can be predicted. Providing an effective, economical, and scalable method to detect cheating on tests, such as the use of a surrogate exam-taker, is a challenging problem. In this paper, we present a grade prediction method that uses student activity features to predict whether a learner will earn a certification if he/she takes a test. The method consists of two classification steps: motivation classification (MC) and grade classification (GC). The MC step divides all learners into three groups: certification earning, video watching, and course sampling. The GC step then predicts whether a certification-earning learner will obtain a certification. Our experiments show that the proposed method fits the classification model at a fine scale and makes it possible to detect a surrogate exam-taker.

  3. Brewster-plate spoiler - A novel method for reducing the amplitude of interference fringes that limit tunable-laser absorption sensitivities

    NASA Technical Reports Server (NTRS)

    Webster, C. R.

    1985-01-01

    A simple method is described for substantially reducing the amplitude of the interference fringes that limit the sensitivity of tunable-laser high-resolution absorption spectrometers. A lead-salt diode laser operating in the 7-micron region is used with a single Brewster-plate spoiler to reduce the fringe amplitude by a factor of 30, allowing the detection of absorptances of 0.001 percent in a single laser scan without subtraction techniques, complex frequency modulation, or distortion of the molecular line-shape signals. Application to multipass-cell spectrometers is described.

  4. Interference studies with two hospital-grade and two home-grade glucose meters.

    PubMed

    Lyon, Martha E; Baskin, Leland B; Braakman, Sandy; Presti, Steven; Dubois, Jeffrey; Shirey, Terry

    2009-10-01

    Interference studies of four glucose meters (Nova Biomedical [Waltham, MA] StatStrip [hospital grade], Roche Diagnostics [Indianapolis, IN] Accu-Chek Aviva [home grade], Abbott Diabetes Care [Alameda, CA] Precision FreeStyle Freedom [home grade], and LifeScan [Milpitas, CA] SureStep Flexx [hospital grade]) were evaluated and compared with the clinical laboratory plasma hexokinase reference method (Roche Hitachi 912 chemistry analyzer). These meters were chosen to reflect the continuum of care from hospital-grade to home-grade meters commonly seen in North America. Within-run precision was determined using a freshly prepared whole blood sample spiked with concentrated glucose to give three glucose concentrations. Day-to-day precision was evaluated using aqueous control materials supplied by each vendor. Common interferences, including hematocrit, maltose, and ascorbate, were tested alone and in combination on each of the four glucose testing devices at three blood glucose concentrations. Within-run precision was <5% for all glucose meters except the FreeStyle (up to 7.6%). Between-day precision was <6% for all glucose meters. Ascorbate caused differences (percentage change from a sample without added interfering substances) of >5% with the pyrroloquinoline quinone (PQQ)-glucose dehydrogenase-based technologies (Aviva and FreeStyle) and the glucose oxidase-based Flexx meter. Maltose strongly affected the PQQ-glucose dehydrogenase-based meter systems. When combinations of interferences (ascorbate, maltose, and hematocrit mixtures) were tested, the extent of the interference was up to 193% (Aviva), 179% (FreeStyle), 25.1% (Flexx), and 5.9% (StatStrip). The interference was most pronounced at low glucose concentrations (3.9-4.4 mmol/L). All evaluated glucose meter systems demonstrated varying degrees of interference from hematocrit, ascorbate, and maltose mixtures, with PQQ-glucose dehydrogenase-based technologies showing greater susceptibility than glucose oxidase-based systems.

  5. Apparent hyperthyroidism caused by biotin-like interference from IgM anti-streptavidin antibodies.

    PubMed

    Lam, Leo; Bagg, Warwick; Smith, Geoff; Chiu, Weldon; Middleditch, Martin James; Lim, Julie Ching-Hsia; Kyle, Campbell Vance

    2018-05-29

    Exclusion of analytical interference is important when there is a discrepancy between clinical and laboratory findings. However, interferences on immunoassays are often mistaken for isolated laboratory artefacts. We characterized and report the mechanism of a rare cause of interference in two patients that produced erroneous thyroid function tests and also affects many other biotin-dependent immunoassays. Patient 1 was a 77 y female with worsening fatigue while taking carbimazole over several years; her thyroid function tests, however, were not suggestive of hypothyroidism. Patient 2 was a 25 y female also prescribed carbimazole for apparent primary hyperthyroidism; despite an elevated FT4, the lowest TSH on record was 0.17 mIU/L. In both cases, thyroid function tests performed by an alternative method were markedly different. Further characterization of both patients' serum demonstrated analytical interference on many immunoassays using the biotin-streptavidin interaction: sandwich assays (e.g. TSH, FSH, TNT, beta-HCG) were falsely low, while competitive assays (e.g. FT4, FT3, TBII) were falsely high. Pre-incubation of serum with streptavidin microparticles removed the analytical interference, initially suggesting biotin as the cause; however, neither patient had been taking biotin. Instead, a ~100 kDa IgM immunoglobulin with high affinity for streptavidin was isolated from each patient's serum. The findings confirm IgM anti-streptavidin antibodies as the cause of the analytical interference. We describe two patients with apparent hyperthyroidism as a result of analytical interference caused by IgM anti-streptavidin antibodies. Analytical interference identified on one immunoassay should raise the possibility of other affected results, and characterization of the interference may help to identify other potentially affected immunoassays. In the case of anti-streptavidin antibodies, the pattern of interference mimics that due to biotin ingestion; however, the degree of

  6. Automotive System for Remote Surface Classification.

    PubMed

    Bystrov, Aleksandr; Hoare, Edward; Tran, Thuy-Yung; Clarke, Nigel; Gashinova, Marina; Cherniakov, Mikhail

    2017-04-01

    In this paper we discuss a novel approach to road surface recognition based on the analysis of backscattered microwave and ultrasonic signals. The novelty of our method lies in the fusion of sonar and polarimetric radar data, the extraction of features for separate swathes of the illuminated surface (segmentation), and the use of a multi-stage artificial neural network for surface classification. The developed system consists of a 24 GHz radar and a 40 kHz ultrasonic sensor. Features are extracted from the backscattered signals, and principal component analysis and supervised classification are then applied to the feature data. Special attention is paid to the multi-stage artificial neural network, which allows an overall increase in classification accuracy. The proposed technique was tested on a large number of real surfaces in different weather conditions, with an average correct-classification accuracy of 95%. The results demonstrate that the proposed system architecture and statistical methods allow reliable discrimination of various road surfaces in real conditions.
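    The pipeline of feature extraction, PCA, and supervised classification can be sketched as follows; the backscatter features are synthetic, and a nearest-centroid classifier stands in for the paper's multi-stage neural network, which the abstract does not specify in detail:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy feature vectors for two surface classes (e.g. asphalt vs gravel):
# 6 echoes per class, 4 hypothetical backscatter features each
asphalt = rng.normal([1.0, 0.2, 0.5, 0.1], 0.05, size=(6, 4))
gravel = rng.normal([0.3, 0.9, 0.1, 0.6], 0.05, size=(6, 4))
X = np.vstack([asphalt, gravel])
labels = ["asphalt"] * 6 + ["gravel"] * 6

# PCA via SVD on the mean-centered data, keeping 2 principal components
Xc = X - X.mean(axis=0)
Z = Xc @ np.linalg.svd(Xc, full_matrices=False)[2][:2].T

# Nearest-centroid classification in PCA space
cents = {c: Z[[i for i, l in enumerate(labels) if l == c]].mean(axis=0)
         for c in ("asphalt", "gravel")}
pred = [min(cents, key=lambda c: np.linalg.norm(z - cents[c])) for z in Z]
print(sum(p == l for p, l in zip(pred, labels)), "of", len(labels), "correct")
```

    PCA here plays the same role as in the paper: it compresses correlated backscatter features into a few components before the classifier stage.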

  7. Automotive System for Remote Surface Classification

    PubMed Central

    Bystrov, Aleksandr; Hoare, Edward; Tran, Thuy-Yung; Clarke, Nigel; Gashinova, Marina; Cherniakov, Mikhail

    2017-01-01

    In this paper we discuss a novel approach to road surface recognition based on the analysis of backscattered microwave and ultrasonic signals. The novelty of our method lies in the fusion of sonar and polarimetric radar data, the extraction of features for separate swathes of the illuminated surface (segmentation), and the use of a multi-stage artificial neural network for surface classification. The developed system consists of a 24 GHz radar and a 40 kHz ultrasonic sensor. Features are extracted from the backscattered signals, and principal component analysis and supervised classification are then applied to the feature data. Special attention is paid to the multi-stage artificial neural network, which allows an overall increase in classification accuracy. The proposed technique was tested on a large number of real surfaces in different weather conditions, with an average correct-classification accuracy of 95%. The results demonstrate that the proposed system architecture and statistical methods allow reliable discrimination of various road surfaces in real conditions. PMID:28368297

  8. New Casemix Classification as an Alternative Method for Budget Allocation in Thai Oral Healthcare Service: A Pilot Study

    PubMed Central

    Wisaijohn, Thunthita; Pimkhaokham, Atiphan; Lapying, Phenkhae; Itthichaisri, Chumpot; Pannarunothai, Supasit; Igarashi, Isao; Kawabuchi, Koichi

    2010-01-01

    This study aimed to develop a new casemix classification system as an alternative method for budget allocation in oral healthcare service (OHCS). Initially, the International Statistical Classification of Diseases and Related Health Problems, 10th Revision, Thai Modification (ICD-10-TM) codes related to OHCS were used to develop the "Grouper" software. This model was designed to translate dental procedures into eight-digit codes. Multiple regression analysis was used to analyze the relationship between the factors used in developing the model and resource consumption. Furthermore, the coefficient of variation, reduction in variance, and relative weight (RW) were applied to test validity. The results demonstrated that 1,624 OHCS classifications, according to the diagnoses and the procedures performed, showed high homogeneity within groups and heterogeneity between groups. Moreover, the RW of the OHCS could be used to predict and control production costs. In conclusion, this new OHCS casemix classification has potential for use in global decision making. PMID:20936134

  9. New casemix classification as an alternative method for budget allocation in thai oral healthcare service: a pilot study.

    PubMed

    Wisaijohn, Thunthita; Pimkhaokham, Atiphan; Lapying, Phenkhae; Itthichaisri, Chumpot; Pannarunothai, Supasit; Igarashi, Isao; Kawabuchi, Koichi

    2010-01-01

    This study aimed to develop a new casemix classification system as an alternative method for budget allocation in oral healthcare service (OHCS). Initially, the International Statistical Classification of Diseases and Related Health Problems, 10th Revision, Thai Modification (ICD-10-TM) codes related to OHCS were used to develop the "Grouper" software. This model was designed to translate dental procedures into eight-digit codes. Multiple regression analysis was used to analyze the relationship between the factors used in developing the model and resource consumption. Furthermore, the coefficient of variation, reduction in variance, and relative weight (RW) were applied to test validity. The results demonstrated that 1,624 OHCS classifications, according to the diagnoses and the procedures performed, showed high homogeneity within groups and heterogeneity between groups. Moreover, the RW of the OHCS could be used to predict and control production costs. In conclusion, this new OHCS casemix classification has potential for use in global decision making.

  10. Land use/cover classification in the Brazilian Amazon using satellite images.

    PubMed

    Lu, Dengsheng; Batistella, Mateus; Li, Guiying; Moran, Emilio; Hetrick, Scott; Freitas, Corina da Costa; Dutra, Luciano Vieira; Sant'anna, Sidnei João Siqueira

    2012-09-01

    Land use/cover classification is one of the most important applications of remote sensing. However, mapping land use/cover spatial distribution accurately is a challenge, particularly in moist tropical regions, due to the complex biophysical environment and the limitations of remote sensing data per se. This paper reviews a decade of experiments on land use/cover classification in the Brazilian Amazon. Comprehensive analysis of the classification results leads to the conclusion that spatial information inherent in remote sensing data plays an essential role in improving land use/cover classification. Incorporating suitable textural images into multispectral bands and using segmentation-based methods are valuable ways to improve land use/cover classification, especially for high spatial resolution images. Data fusion of multi-resolution images within optical sensor data is vital for visual interpretation but may not improve classification performance. In contrast, integration of optical and radar data did improve classification performance when a proper data fusion method was used. Of the classification algorithms available, the maximum likelihood classifier remains an important method that provides reasonably good accuracy, but nonparametric algorithms, such as classification tree analysis, have the potential to provide better results, although they often require more time for parameter optimization. Proper use of hierarchical methods is fundamental for developing accurate land use/cover classification, mainly from historical remotely sensed data.

  11. Land use/cover classification in the Brazilian Amazon using satellite images

    PubMed Central

    Lu, Dengsheng; Batistella, Mateus; Li, Guiying; Moran, Emilio; Hetrick, Scott; Freitas, Corina da Costa; Dutra, Luciano Vieira; Sant’Anna, Sidnei João Siqueira

    2013-01-01

    Land use/cover classification is one of the most important applications of remote sensing. However, mapping land use/cover spatial distribution accurately is a challenge, particularly in moist tropical regions, due to the complex biophysical environment and the limitations of remote sensing data per se. This paper reviews a decade of experiments on land use/cover classification in the Brazilian Amazon. Comprehensive analysis of the classification results leads to the conclusion that spatial information inherent in remote sensing data plays an essential role in improving land use/cover classification. Incorporating suitable textural images into multispectral bands and using segmentation-based methods are valuable ways to improve land use/cover classification, especially for high spatial resolution images. Data fusion of multi-resolution images within optical sensor data is vital for visual interpretation but may not improve classification performance. In contrast, integration of optical and radar data did improve classification performance when a proper data fusion method was used. Of the classification algorithms available, the maximum likelihood classifier remains an important method that provides reasonably good accuracy, but nonparametric algorithms, such as classification tree analysis, have the potential to provide better results, although they often require more time for parameter optimization. Proper use of hierarchical methods is fundamental for developing accurate land use/cover classification, mainly from historical remotely sensed data. PMID:24353353
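    The maximum likelihood classifier discussed above can be sketched for synthetic spectral data; the class means, band count, and covariances below are illustrative assumptions, not values from the review:

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy spectral samples (3 bands) for two hypothetical cover classes
forest = rng.normal([0.2, 0.5, 0.3], 0.04, size=(30, 3))
pasture = rng.normal([0.5, 0.6, 0.2], 0.04, size=(30, 3))

def fit_gaussian(X):
    # per-class mean vector and covariance matrix, estimated from training pixels
    return X.mean(axis=0), np.cov(X, rowvar=False)

def log_likelihood(x, mean, cov):
    # Gaussian log-likelihood up to a constant; the class maximizing this wins
    d = x - mean
    return -0.5 * (np.log(np.linalg.det(cov)) + d @ np.linalg.solve(cov, d))

classes = {"forest": fit_gaussian(forest), "pasture": fit_gaussian(pasture)}
pixel = np.array([0.21, 0.52, 0.29])
pred = max(classes, key=lambda c: log_likelihood(pixel, *classes[c]))
print(pred)  # -> forest
```

    The parametric assumption (one Gaussian per class) is exactly what nonparametric alternatives such as classification trees avoid, at the cost of the longer parameter optimization noted above.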

  12. Semi-Supervised Marginal Fisher Analysis for Hyperspectral Image Classification

    NASA Astrophysics Data System (ADS)

    Huang, H.; Liu, J.; Pan, Y.

    2012-07-01

    The problem of learning from both labeled and unlabeled examples arises frequently in hyperspectral image (HSI) classification. Marginal Fisher analysis is a supervised method, however, and cannot be applied directly to semi-supervised classification. In this paper, we propose a novel method, called semi-supervised marginal Fisher analysis (SSMFA), for processing HSI of natural scenes; it combines semi-supervised learning and manifold learning. In SSMFA, a new difference-based optimization objective function incorporating unlabeled samples is designed. SSMFA preserves the manifold structure of labeled and unlabeled samples while separating labeled samples of different classes from each other. The semi-supervised method has an analytic form of the globally optimal solution, which can be computed by eigendecomposition. Classification experiments on a challenging HSI task demonstrate that this method outperforms current state-of-the-art HSI classification methods.

  13. Interference pattern period measurement at picometer level

    NASA Astrophysics Data System (ADS)

    Xiang, Xiansong; Wei, Chunlong; Jia, Wei; Zhou, Changhe; Li, Minkang; Lu, Yancong

    2016-10-01

    To produce large-scale gratings by scanning beam interference lithography (SBIL), a light spot containing a grating pattern is generated by the interference of two beams, and a scanning stage drives the substrate under the light spot. To locate the stage at the proper exposure positions, the period of the interference pattern must be measured accurately. We developed a procedure to obtain the period of two interfering beams at the picometer level. The procedure comprises data acquisition and data analysis. The data come from a photodiode and a laser interferometer with sub-nanometer resolution. The analysis differs from conventional methods such as counting wave peaks or using a Fourier transform to obtain the signal period: after preprocessing to filter the signal and remove its envelope, the mean square error between the received signal and ideal sinusoids is calculated to find the best-fit frequency, yielding an accurate period value. This method has low sensitivity to amplitude noise and high frequency resolution. With 405 nm laser beams interfering, a pattern period of around 562 nm was obtained with this procedure; the fitted results show that the accuracy of the period value reaches the picometer level, much higher than that of conventional methods.
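    The best-fit-frequency idea (minimizing mean square error against ideal sinusoids rather than counting peaks or using an FFT) can be sketched as follows; the sampling range, noise level, and grid step are assumptions, and fitting both quadratures stands in for the paper's phase handling:

```python
import numpy as np

rng = np.random.default_rng(3)
true_period = 562.0e-9                       # metres, as reported in the paper
x = np.linspace(0, 50e-6, 4000)              # stage positions over 50 um
signal = np.sin(2 * np.pi * x / true_period) + rng.normal(0, 0.05, x.size)

def mse_for_period(p):
    # MSE between the data and the best-fitting ideal sinusoid of period p;
    # fitting both quadratures absorbs the unknown phase and amplitude
    A = np.column_stack([np.cos(2 * np.pi * x / p), np.sin(2 * np.pi * x / p)])
    coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
    return np.mean((signal - A @ coef) ** 2)

periods = np.linspace(560e-9, 564e-9, 4001)  # 1 pm grid step
best = periods[np.argmin([mse_for_period(p) for p in periods])]
print(abs(best - true_period) < 2e-11)       # recovered to within tens of pm
```

    Because the fit uses every sample over many fringe cycles, a tiny period error accumulates into a visible phase mismatch at the far end of the scan, which is what gives the method frequency resolution far below the grid of an FFT bin.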

  14. Gas Classification Using Deep Convolutional Neural Networks.

    PubMed

    Peng, Pai; Zhao, Xiaojin; Pan, Xiaofang; Ye, Wenbin

    2018-01-08

    In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNNs in the field of computer vision, we designed a DCNN with up to 38 layers. The proposed gas classification network, named GasNet, consists of six convolutional blocks (each consisting of six layers), a pooling layer, and a fully-connected layer. Together, these layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data. We also demonstrate that the DCNN method provides higher classification accuracy than comparable Support Vector Machine (SVM) and Multilayer Perceptron (MLP) methods.

  15. Gas Classification Using Deep Convolutional Neural Networks

    PubMed Central

    Peng, Pai; Zhao, Xiaojin; Pan, Xiaofang; Ye, Wenbin

    2018-01-01

    In this work, we propose a novel Deep Convolutional Neural Network (DCNN) tailored for gas classification. Inspired by the great success of DCNNs in the field of computer vision, we designed a DCNN with up to 38 layers. The proposed gas classification network, named GasNet, consists of six convolutional blocks (each consisting of six layers), a pooling layer, and a fully-connected layer. Together, these layers make up a powerful deep model for gas classification. Experimental results show that the proposed DCNN method is an effective technique for classifying electronic nose data. We also demonstrate that the DCNN method provides higher classification accuracy than comparable Support Vector Machine (SVM) and Multilayer Perceptron (MLP) methods. PMID:29316723

  16. Multi-label literature classification based on the Gene Ontology graph.

    PubMed

    Jin, Bo; Muller, Brian; Zhai, Chengxiang; Lu, Xinghua

    2008-12-08

    The Gene Ontology is a controlled vocabulary for representing knowledge related to genes and proteins in a computable form. The current effort of manually annotating proteins with the Gene Ontology is outpaced by the rate of accumulation of biomedical knowledge in the literature, which motivates the development of text mining approaches that facilitate the process by automatically extracting Gene Ontology annotations from the literature. The task is usually cast as a text classification problem, and contemporary methods are confronted with unbalanced training data and the difficulties associated with multi-label classification. In this research, we investigated methods for enhancing automatic multi-label classification of biomedical literature by utilizing the structure of the Gene Ontology graph. We studied three graph-based multi-label classification algorithms, including a novel stochastic algorithm and two top-down hierarchical classification methods, and systematically evaluated and compared them to a conventional flat multi-label algorithm. The results indicate that, by utilizing information from the structure of the Gene Ontology graph, the graph-based multi-label classification methods can significantly improve predictions of the Gene Ontology terms implied by the analyzed text. Furthermore, the graph-based multi-label classifiers are capable of suggesting Gene Ontology annotations (to curators) that are closely related to the true annotations even when they fail to predict the true ones directly. A software package implementing the studied algorithms is available to the research community. In sum, the graph-based multi-label classification methods have better potential than the conventional flat multi-label classification approach to facilitate protein annotation based on the literature.

  17. Evaluate interference in digital channels

    NASA Technical Reports Server (NTRS)

    Davarian, F.; Sumida, J.

    1985-01-01

    Any future mobile satellite service (MSS) that is to provide simultaneous mobile communications for a large number of users will have to make very efficient use of the spectrum. As the spectrum available to an MSS is limited, the system's channels should be packed as closely together as possible, with minimum-width guard bands, and frequency reuse schemes should be employed. The difficulty with these measures is that they introduce interference into the link, so a balance must be struck between the competing aims of spectrum conservation and low interference. While the interference phenomenon in narrowband FM voice channels is reasonably well understood, very little effort has been devoted to the problem in digital radios. Attention is given here to work that illuminates the effects of cochannel and adjacent-channel interference on digital FM (FSK) radios.

  18. Validation of AN Hplc-Dad Method for the Classification of Green Teas

    NASA Astrophysics Data System (ADS)

    Yu, Jingbo; Ye, Nengsheng; Gu, Xuexin; Liu, Ni

    A reversed-phase high performance liquid chromatography (RP-HPLC) separation coupled with diode array detection (DAD) and electrospray ionization mass spectrometry (ESI/MS) was developed and optimized for the classification of green teas. Five catechins [epigallocatechin (EGC), epigallocatechin gallate (EGCG), epicatechin (EC), gallocatechin gallate (GCG), and epicatechin gallate (ECG)] were identified and quantified by the HPLC-DAD-ESI/MS/MS method. The limit of detection (LOD) of the five catechins was within the range of 1.25-15 ng, and all analytes exhibited good linearity up to 2500 ng. These compounds were used as chemical descriptors to define groups of green teas. Chemometric methods, including principal component analysis (PCA) and hierarchical cluster analysis (HCA), were applied for this purpose. Twelve green tea samples originating from different regions were analyzed to reveal their natural groupings. The results showed that the analyzed green teas were differentiated mainly by provenance; HCA afforded excellent performance in terms of recognition and prediction abilities. The method is accurate and reproducible, providing a potential approach for the authentication of green teas.
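    The PCA-plus-HCA chemometric step can be sketched with SciPy; the catechin concentrations below are invented for illustration and do not come from the paper:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(4)
# Hypothetical catechin profiles (EGC, EGCG, EC, GCG, ECG) for six teas
# from each of two provenances; the values are invented for illustration
region1 = rng.normal([2.0, 9.0, 1.0, 0.5, 2.5], 0.2, size=(6, 5))
region2 = rng.normal([4.0, 5.0, 2.0, 1.5, 1.0], 0.2, size=(6, 5))
X = np.vstack([region1, region2])

# PCA: project the mean-centred descriptor matrix onto its top two components
Xc = X - X.mean(axis=0)
scores = Xc @ np.linalg.svd(Xc, full_matrices=False)[2][:2].T

# HCA: Ward-linkage hierarchical clustering on the PCA scores, cut into 2 groups
clusters = fcluster(linkage(scores, method="ward"), t=2, criterion="maxclust")
print(clusters[:6], clusters[6:])  # the two provenances fall into separate clusters
```

    Clustering on PCA scores rather than raw concentrations is a common chemometric choice: it downweights measurement noise in the minor catechins while keeping the provenance signal.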

  19. Comparison of different landform classification methods for digital landform and soil mapping of the Iranian loess plateau

    NASA Astrophysics Data System (ADS)

    Hoffmeister, Dirk; Kramm, Tanja; Curdt, Constanze; Maleki, Sedigheh; Khormali, Farhad; Kehl, Martin

    2016-04-01

    The Iranian loess plateau is covered by loess deposits up to 70 m thick. Tectonic uplift triggered deep erosion and valley incision into the loess and underlying marine deposits. Soil development relates strongly to the aspect of these incised slopes: on northern slopes, vegetation protects the soil surface against erosion and facilitates the formation and preservation of a Cambisol, whereas on south-facing slopes soils were probably eroded and weakly developed Entisols formed. While the whole area is intensively stocked with sheep and goats, rain-fed cropping of winter wheat is practiced on the valley floors. For most of the year, the soil surface is unprotected against rainfall, which is one of the factors promoting soil erosion and serious flooding. However, little information is available on soil distribution, plant cover and the geomorphological evolution of the plateau, or on the potentials and problems of land use. Thus, digital landform and soil mapping is needed. As a prerequisite for digital landform and soil mapping, four different landform classification methods were compared and evaluated. These geomorphometric classifications were run at two different scales. For the whole area, ASTER GDEM and SRTM datasets (30 m pixel resolution) were used. In addition, two high-resolution digital elevation models were derived from Pléiades satellite stereo-imagery (< 1 m pixel resolution, 10 by 10 km); the high-resolution information of this dataset was aggregated to datasets at 5 and 10 m scales. The applied classification methods are the geomorphons approach, an object-based image analysis approach, the topographic position index, and a mainly slope-based approach. The accuracy of the classification was checked against a location-referenced image dataset obtained in a field survey (n ~ 150) in September 2015. The accuracy of the DEMs was compared to measured DGPS trenches and map-based elevation data. The overall derived accuracy of the landform classification based on the

  20. Micro-Doppler Ambiguity Resolution for Wideband Terahertz Radar Using Intra-Pulse Interference

    PubMed Central

    Yang, Qi; Qin, Yuliang; Deng, Bin; Wang, Hongqiang; You, Peng

    2017-01-01

    Micro-Doppler, induced by the micro-motion of targets, is an important characteristic for target recognition once extracted via parameter estimation methods. In the terahertz band, however, the micro-Doppler is usually so large, owing to the relatively high carrier frequency, that it becomes ambiguous. Thus, a micro-Doppler ambiguity resolution method for wideband terahertz radar using intra-pulse interference is proposed in this paper. Through intra-pulse interference processing, the micro-Doppler can be reduced to several dozen times below its true value, avoiding ambiguity. The effectiveness of this method is demonstrated by experiments on a 0.22 THz wideband radar system, and its high estimation precision and excellent noise immunity are verified by Monte Carlo simulation. PMID:28468257